Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Mycal Tucker, Julie Shah, Roger Levy, and Noga Zaslavsky. Towards human-like emergent communication via utility, informativeness, and complexity. Open Mind, 9:418–451, 2025.
Amos Tversky. Features of similarity. Psychological Review, 84(4):327, 1977.
Lan Wei, Dong Wang, and Yu Wang. Generalized relative entropy: New look at Rényi entropy and its exploration from complexity measures to sparsity measures with applications in machine condition monitoring. Mechanical Systems and Signal Processing, 223:111917, 2025.
Clark Wissler. The Spearman correlation formula. Science, 22(558):309–311, 1905.
J Gerard Wolff. Information compression as a unifying principle in human learning, perception, and cognition. Complexity, 2019(1):1879746, 2019.
An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. A robustly optimized BERT pre-training approach with post-training. In Sheng Li, Maosong Sun, Yang Liu, Hua Wu, Kang Liu, Wanxiang Che, Shizhu He, and Gaoqi Rao, editors, Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218–1227, Huhhot, China, August 2021. Chinese Information Processing Society of China. URL https://aclanthology.org/2021.ccl-1.108/.

A Limitations

While this study offers valuable insights, several limitations should be considered.
• Our analysis primarily focuses on English; generalizability across languages with different structures is an open question.
• Human categorization data as a benchmark may not fully capture cognitive complexity and could introduce biases.
• Our IB-RDT objective is applied to specific LLMs; other models or representations might behave differently.
• We focus on static, context-free representations. LLMs may fall short in capturing context sensitivity, as human concepts are influenced by factors beyond raw compression efficiency (experience, social interaction, cultural context).
• Our analysis is limited to textual input and does not explore image-based representations.
Future work could address these limitations by expanding to other languages, exploring alternative cognitive models and dynamic representations, and testing these principles on different architectures or in real-world applications.

A.1 Dataset Access Details

The aggregated and digitized human categorization datasets from Rosch [1973a, 1975] and McCloskey and Glucksberg [1978] are made available in CSV format at: [Link reduced for anonymity].

A.2 LLM Details

• BERT family: deberta-large, bert-large-uncased, roberta-large [Devlin et al., 2019, He et al., 2020, Zhuang et al., 2021].
• Qwen family: qwen2-0.5b, qwen2.5-0.5b, qwen1.5-0.5b, qwen2.5-1.5b, qwen2-1.5b, qwen1.5-1.8b, qwen1.5-4b, qwen2.5-3b, qwen2-7b, qwen1.5-14b, qwen1.5-32b, qwen1.5-72b [Bai et al., 2023, Yang et al., 2024].
• Llama family: llama-3.2-1b, llama-3.2-3b, llama-3.1-8b, llama-3-8b, llama-3-70b, llama-3.1-70b [Touvron et al., 2023a,b, Grattafiori et al., 2024].
• Phi family: phi-1, phi-1.5, phi-2, phi-4 [Javaheripi et al., 2023, Abdin et al., 2024, Abouelenin et al., 2025].
• Gemma family: gemma-2b, gemma-2-2b, gemma-7b, gemma-2-9b, gemma-2-27b [Team et al., 2024, 2025].
• Mistral family: mistral-7b-v0.3 [Karamcheti et al., 2021].

A.3 Additional Clustering Metrics

To further validate our cluster alignment findings (Section 5.1), in addition to Adjusted Mutual Information (AMI) and Normalized Mutual Information (NMI), we also computed the Adjusted Rand Index (ARI) for the k-means clusters derived from LLM embeddings against human-defined categories. ARI measures the similarity between two data clusterings, correcting for chance. Like AMI, a score of 1 indicates perfect agreement and 0 indicates chance agreement. Across all tested LLMs, the ARI and NMI scores largely mirrored the trends observed with AMI, showing significantly above-chance alignment with human categories and similar relative model performances. Silhouette scores, while more variable, generally indicated reasonable cluster cohesion for both LLM-derived and human categories. Detailed tables of these scores are provided below, and a reference sketch of how the metrics can be computed follows Figure 4 below. These supplementary metrics reinforce the conclusion that LLMs capture broad human-like conceptual groupings.

A.4 Detailed AMI Scores per Model and Dataset

Table 1 provides a more granular view of the AMI scores for each LLM across the three individual psychological datasets.

Figure 3: LLM-derived Clusters Show Above-Chance Alignment with Human Conceptual Categories. Normalized Mutual Information (NMI) between human-defined categories and clusters from LLM embeddings. Results are averaged over three psychological datasets. All models perform significantly better than random clustering. BERT's performance is notably strong.

Figure 4: LLM-derived Clusters Show Above-Chance Alignment with Human Conceptual Categories. Adjusted Rand Index (ARI) between human-defined categories and clusters from LLM embeddings. Results are averaged over three psychological datasets. All models perform significantly better than random clustering. BERT's performance is notably strong.
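As a concrete reference for how these alignment metrics can be computed, the sketch below uses scikit-learn; the embeddings and human labels are random stand-ins, and this is an illustrative sketch rather than the paper's actual pipeline.

```python
# Minimal sketch: cluster LLM embeddings with k-means and score the clusters
# against human-defined categories. Placeholder data; not the paper's code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_mutual_info_score,
                             adjusted_rand_score,
                             normalized_mutual_info_score,
                             silhouette_score)

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 768))      # stand-in for LLM item embeddings
human_labels = rng.integers(0, 10, size=200)  # stand-in for human categories

k = len(np.unique(human_labels))              # match K to the human category count
llm_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)

print("AMI:", adjusted_mutual_info_score(human_labels, llm_labels))
print("NMI:", normalized_mutual_info_score(human_labels, llm_labels))
print("ARI:", adjusted_rand_score(human_labels, llm_labels))
print("Silhouette:", silhouette_score(embeddings, llm_labels))
```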
Dataset  Model  NMI  AMI  ARI
[Rosch, 1973c]  bert-large-uncased  0.19453  0.2011  0.11336
[Rosch, 1975]  bert-large-uncased  0.16547  0.27324  0.2216
[McCloskey and Glucksberg, 1978]  bert-large-uncased  0.12003  0.15934  0.06306
[Rosch, 1973c]  FacebookAI/roberta-large  0.1021  0.10666  0.03393
[Rosch, 1975]  FacebookAI/roberta-large  0.12138  0.23938  0.14165
[McCloskey and Glucksberg, 1978]  FacebookAI/roberta-large  0.06271  0.08873  0.03173
[Rosch, 1973c]  google-t5/t5-large  0.16583  0.16855  0.03676
[Rosch, 1975]  google-t5/t5-large  -0.03799  0.04179  0.00758
[McCloskey and Glucksberg, 1978]  google-t5/t5-large  0.06146  0.08825  0.0082
[Rosch, 1973c]  google/gemma-2-27b  0.08523  0.09065  0.04158
[Rosch, 1975]  google/gemma-2-27b  0.04276  0.10062  0.06244
[McCloskey and Glucksberg, 1978]  google/gemma-2-27b  0.07814  0.10274  0.04364
[Rosch, 1973c]  google/gemma-2-2b  0.04029  0.04107  0.01212
[Rosch, 1975]  google/gemma-2-2b  0.04529  0.14844  0.07596
[McCloskey and Glucksberg, 1978]  google/gemma-2-2b  0.09953  0.13593  0.06326
[Rosch, 1973c]  google/gemma-2-9b  0.1222  0.12757  0.06053
[Rosch, 1975]  google/gemma-2-9b  0.07841  0.16126  0.09617
[McCloskey and Glucksberg, 1978]  google/gemma-2-9b  0.10879  0.13997  0.06439
[Rosch, 1973c]  google/gemma-2b  0.04336  0.04616  0.01593
[Rosch, 1975]  google/gemma-2b  -0.00353  0.04483  0.01577
[McCloskey and Glucksberg, 1978]  google/gemma-2b  0.03472  0.05484  0.02142
[Rosch, 1973c]  google/gemma-7b  0.04459  0.04547  0.01052
[Rosch, 1975]  google/gemma-7b  -0.03055  0.02644  0.01506
[McCloskey and Glucksberg, 1978]  google/gemma-7b  0.03338  0.05724  0.02176
[Rosch, 1973c]  meta-llama/Llama-3.1-70B  0.03008  0.03528  0.01936
[Rosch, 1975]  meta-llama/Llama-3.1-70B  -0.07026  0.02636  0.00392
[McCloskey and Glucksberg, 1978]  meta-llama/Llama-3.1-70B  -0.04773  0.00972  0.00236
[Rosch, 1973c]  meta-llama/Llama-3.1-8B  0.00473  0.00393  0.00023
[Rosch, 1975]  meta-llama/Llama-3.1-8B  -0.03928  0.05489  0.01884
[McCloskey and Glucksberg, 1978]  meta-llama/Llama-3.1-8B  -0.02671  0.02208  6.00E-05
[Rosch, 1973c]  meta-llama/Llama-3.2-1B  0.01936  0.01567  0.00246
[Rosch, 1975]  meta-llama/Llama-3.2-1B  -0.01876  0.05663  0.00782
[McCloskey and Glucksberg, 1978]  meta-llama/Llama-3.2-1B  0.03625  0.06798  0.01352
[Rosch, 1973c]  meta-llama/Llama-3.2-3B  0.03757  0.03537  0.00876
[Rosch, 1975]  meta-llama/Llama-3.2-3B  0.01893  0.09619  0.03193
[McCloskey and Glucksberg, 1978]  meta-llama/Llama-3.2-3B  0.03914  0.07395  0.0202
[Rosch, 1973c]  meta-llama/Meta-Llama-3-70B  0.02289  0.03133  0.01514
[Rosch, 1975]  meta-llama/Meta-Llama-3-70B  -0.06428  0.0185  0.00554
[McCloskey and Glucksberg, 1978]  meta-llama/Meta-Llama-3-70B  -0.04595  0.01068  0.00272
[Rosch, 1973c]  meta-llama/Meta-Llama-3-8B  0.03512  0.02852  0.00225
[Rosch, 1975]  meta-llama/Meta-Llama-3-8B  -0.06011  0.03694  0.00676
[McCloskey and Glucksberg, 1978]  meta-llama/Meta-Llama-3-8B  -0.0355  0.0219  0.00676
[Rosch, 1973c]  microsoft/deberta-large  0.03748  0.03909  0.01467
[Rosch, 1975]  microsoft/deberta-large  0.16568  0.28993  0.20527
[McCloskey and Glucksberg, 1978]  microsoft/deberta-large  0.03217  0.06175  0.03019
[Rosch, 1973c]  microsoft/phi-1_5  0.02102  0.01786  0.0075
[Rosch, 1975]  microsoft/phi-1_5  0.03989  0.13887  0.04305
[McCloskey and Glucksberg, 1978]  microsoft/phi-1_5  0.00895  0.05215  0.00639
[Rosch, 1973c]  microsoft/phi-1  0.0249  0.01698  0.00133
[Rosch, 1975]  microsoft/phi-1  -0.03625  0.02811  0.00217
[McCloskey and Glucksberg, 1978]  microsoft/phi-1  -0.01148  0.03085  0.00371
[Rosch, 1973c]  microsoft/phi-2  0.03703  0.02968  0.00404
[Rosch, 1975]  microsoft/phi-2  -0.03654  0.04227  0.03942
[McCloskey and Glucksberg, 1978]  microsoft/phi-2  -0.00254  0.02531  0.00533
[Rosch, 1973c]  microsoft/phi-4  0.03075  0.03043  0.01076
[Rosch, 1975]  microsoft/phi-4  -0.06737  0.00092  -0.01361
[McCloskey and Glucksberg, 1978]  microsoft/phi-4  -0.01789  0.02705  0.00066
[Rosch, 1973c]  mistralai/Mistral-7B-v0.3  0.0425  0.03507  0.00357
[Rosch, 1975]  mistralai/Mistral-7B-v0.3  -0.05018  0.01217  0.0177
[McCloskey and Glucksberg, 1978]  mistralai/Mistral-7B-v0.3  -0.01264  0.03902  0.00931
[Rosch, 1973c]  Qwen/Qwen1.5-0.5B  0.00148  -0.00225  0.00399
[Rosch, 1975]  Qwen/Qwen1.5-0.5B  -0.01538  0.04833  0.0095
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-0.5B  0.02559  0.06023  0.00771
[Rosch, 1973c]  Qwen/Qwen1.5-1.8B  0.03397  0.03232  0.01034
[Rosch, 1975]  Qwen/Qwen1.5-1.8B  -0.01129  0.05803  0.00683
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-1.8B  -0.00541  0.03614  0.00538
[Rosch, 1973c]  Qwen/Qwen1.5-14B  0.0372  0.02738  0.0028
[Rosch, 1975]  Qwen/Qwen1.5-14B  -0.02604  0.05153  0.01211
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-14B  0.00124  0.04136  0.00338
[Rosch, 1973c]  Qwen/Qwen1.5-32B  0.02638  0.02436  0.00409
[Rosch, 1975]  Qwen/Qwen1.5-32B  -0.03413  0.02526  -0.00665
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-32B  -0.01991  0.02124  -0.00059
[Rosch, 1973c]  Qwen/Qwen1.5-4B  0.03803  0.04058  0.01742
[Rosch, 1975]  Qwen/Qwen1.5-4B  -0.03309  0.03988  0.01678
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-4B  -0.03997  0.00548  -0.00028
[Rosch, 1973c]  Qwen/Qwen1.5-72B  0.03697  0.02892  0.00144
[Rosch, 1975]  Qwen/Qwen1.5-72B  -0.06184  0.02213  0.0017
[McCloskey and Glucksberg, 1978]  Qwen/Qwen1.5-72B  -0.02022  0.02918  0.00297
[Rosch, 1973c]  Qwen/Qwen2-0.5B  0.02266  0.01923  0.00662
[Rosch, 1975]  Qwen/Qwen2-0.5B  0.0515  0.14571  0.04999
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2-0.5B  0.01508  0.04357  0.00643
[Rosch, 1973c]  Qwen/Qwen2-1.5B  0.02956  0.02779  0.00544
[Rosch, 1975]  Qwen/Qwen2-1.5B  -0.03595  0.03443  -0.01099
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2-1.5B  0.01768  0.05407  0.01604
[Rosch, 1973c]  Qwen/Qwen2-7B  0.06424  0.06439  0.02067
[Rosch, 1975]  Qwen/Qwen2-7B  0.0333  0.09155  0.02832
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2-7B  0.05329  0.07599  0.01977
[Rosch, 1973c]  Qwen/Qwen2.5-0.5B  0.03165  0.03291  0.01029
[Rosch, 1975]  Qwen/Qwen2.5-0.5B  -0.06534  -0.0196  -0.01165
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2.5-0.5B  0.0062  0.04191  0.0054
[Rosch, 1973c]  Qwen/Qwen2.5-1.5B  0.04838  0.0489  0.0129
[Rosch, 1975]  Qwen/Qwen2.5-1.5B  0.03785  0.113  0.02761
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2.5-1.5B  0.06166  0.08675  0.03162
[Rosch, 1973c]  Qwen/Qwen2.5-3B  0.03882  0.0348  0.00465
[Rosch, 1975]  Qwen/Qwen2.5-3B  0.03977  0.10821  0.04302
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2.5-3B  0.03416  0.07307  0.02959
[Rosch, 1973c]  Qwen/Qwen2.5-7B  0.0529  0.05051  0.01605
[Rosch, 1975]  Qwen/Qwen2.5-7B  -0.00905  0.03227  0.01044
[McCloskey and Glucksberg, 1978]  Qwen/Qwen2.5-7B  0.00222  0.02759  0.00551

Table 1: Mutual information measures (normalized mutual information, adjusted mutual information, adjusted Rand index) per model per dataset. Aggregated results are shown in the main paper and in the figures in the appendix.

A.5 Correlation between Human Typicality Judgments and LLM Internal Cluster Geometry

A.6 Typicality and Cosine Similarity [RQ2]

Figure 5 shows representative scatter plots illustrating the relationship between human typicality scores (or psychological distances) and the LLM-derived item-centroid cosine similarities for selected categories and models. These plots visually demonstrate the often modest correlations discussed in Section 5.2. Figure 6 shows the aggregated Spearman correlation across model families and datasets. These correlations are very weak and mostly non-significant.
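For reference, a minimal sketch of this per-category computation is given below; the embeddings and human ratings are random placeholders, and the sketch is an assumption about the procedure rather than the paper's code.

```python
# Spearman correlation between human typicality/distance ratings and
# item-to-centroid cosine similarity for one category. Placeholder data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
item_embs = rng.normal(size=(50, 768))   # stand-in embeddings for one category
human_scores = rng.random(50)            # higher = less typical / more distant

centroid = item_embs.mean(axis=0)
cosine = item_embs @ centroid / (
    np.linalg.norm(item_embs, axis=1) * np.linalg.norm(centroid))

rho, p = spearmanr(human_scores, cosine)  # negative rho suggests alignment
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```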
A.7 Theoretical Extreme Case Exploration for $\mathcal{L}$

In the case where $|C| = |X|$ (each data point is a cluster of size 1, so $|C_c| = 1\ \forall c \in C$), we have

$$H(X|C) = \frac{1}{|X|} \sum_{c \in C} 1 \cdot \log_2 1 = 0.$$

The distortion term $\sigma_c^2 = 0$ for each cluster, as each item is its own centroid. Thus,

$$\mathcal{L} = I(X;C) + \beta \cdot 0 = H(X) - H(X|C) = H(X) = \log_2 |X|.$$

This represents the cost of encoding each item perfectly, with no compression via clustering and zero distortion.

In the case where $|C| = 1$ (one cluster $C_X$ contains all $|X|$ data points, so $|C_{C_X}| = |X|$), we have

$$H(X|C) = \frac{1}{|X|} \cdot |X| \cdot \log_2 |X| = \log_2 |X|,$$

so $I(X;C) = H(X) - H(X|C) = \log_2 |X| - \log_2 |X| = 0$. This represents maximum compression (all items are treated as one). The distortion term becomes $\beta \cdot \frac{1}{|X|} \cdot |X| \cdot \sigma_X^2 = \beta \cdot \sigma_X^2$, where $\sigma_X^2$ is the variance of all items $X$ with respect to the global centroid of $X$. So $\mathcal{L} = 0 + \beta \cdot \sigma_X^2 = \beta \cdot \sigma_X^2$: maximum compression, where the cost is purely the distortion incurred by representing all items by a single prototype.
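These two extremes can be checked numerically. The sketch below implements $\mathcal{L} = I(X;C) + \beta \cdot \text{distortion}$ under a uniform $p(x)$, taking $\sigma_c^2$ as the mean squared distance to the cluster centroid; this reading of the objective is an assumption for illustration, not the paper's code.

```python
# Numeric check of the two extreme cases of the IB-RDT objective.
import numpy as np

def ib_rdt_objective(X, labels, beta):
    n = len(X)
    H_X = np.log2(n)                         # H(X) under uniform p(x)
    H_X_given_C = 0.0
    distortion = 0.0
    for c in np.unique(labels):
        members = X[labels == c]
        m = len(members)
        H_X_given_C += (m / n) * np.log2(m)  # uniform within each cluster
        centroid = members.mean(axis=0)
        distortion += (m / n) * ((members - centroid) ** 2).sum(axis=1).mean()
    return (H_X - H_X_given_C) + beta * distortion  # I(X;C) + beta * distortion

X = np.random.default_rng(0).normal(size=(16, 4))
beta = 2.0
# |C| = |X|: each item is its own cluster -> L = log2|X|, zero distortion.
print(ib_rdt_objective(X, np.arange(16), beta), "vs", np.log2(16))
# |C| = 1: one cluster -> L = beta * sigma^2_X (global variance term).
sigma2_X = ((X - X.mean(axis=0)) ** 2).sum(axis=1).mean()
print(ib_rdt_objective(X, np.zeros(16, dtype=int), beta), "vs", beta * sigma2_X)
```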
A.8 Compression Figures

Figure 7 shows the mean cluster entropy ($S_\alpha$) versus the number of clusters ($K$), aggregated across the different LLM families and compared against human-defined categories (represented as distinct points or lines at their fixed $K$ values from the datasets). Higher entropy values indicate less compressed or more diverse clusterings. Figure 8 depicts the IB-RDT objective ($\mathcal{L}$) vs. $K$. Lower $\mathcal{L}$ indicates a more optimal balance between compression ($I(X;C)$) and semantic fidelity (distortion). Human categories (fixed $K$) show higher $\mathcal{L}$ values.

Model  Rosch (1973)  Rosch (1975)  McCloskey (1978)
Qwen1.5-72B  -0.237  -0.049  -0.016
Llama-3-70B  -0.124**  -0.085  0.016
Llama-3.1-70B  -0.125**  -0.084  0.015
Qwen1.5-32B  -0.051  -0.064**  0.007
gemma-2-27b  -0.166  -0.116  0.038
Qwen1.5-14B  -0.197  -0.052  -0.029
phi-4  -0.061  -0.044  0.025
gemma-2-9b  -0.282  -0.074  0.117
Llama-3.1-8B  -0.184  -0.075  -0.058
Llama-3-8B  -0.162  -0.073  -0.053
Mistral-7B-v0.3  0.015  -0.112  0.040
Qwen2-7B  -0.021  -0.105  -0.008
Qwen2.5-7B  0.033  -0.066  -0.030
gemma-7b  -0.135  -0.047**  0.010
Llama-3.2-3B  -0.007  0.000  0.001
phi-2  0.049  -0.108**  -0.001
gemma-2b  -0.176  -0.055  0.052
gemma-2-2b  -0.283  -0.107  0.117
Qwen1.5-1.8B  -0.106  -0.085  0.021
Qwen2.5-1.5B  -0.003  -0.035  0.015
phi-1.5  0.134  -0.134  0.007
phi-1  -0.219  -0.138  0.013
Llama-3.2-1B  -0.062  -0.004**  -0.003
Qwen1.5-0.5B  -0.122  -0.004  -0.001
Qwen2-0.5B  -0.044  0.009  -0.009
Qwen2.5-0.5B  -0.018  -0.009  -0.007
roberta-large  0.088  -0.047  -0.074
bert-large-uncased  -0.427  -0.198**  0.206**
deberta-large  0.016  -0.042  -0.023

Table 2: Correlation between Human Typicality Judgments and LLM Internal Cluster Geometry. Spearman rank correlations between human-rated psychological typicality/distance (higher human scores = less typical/more distant) and item-to-centroid cosine similarity (higher similarity = more central to the LLM cluster), per dataset. Negative correlations suggest alignment. **p < 0.05.

Figure 5: Weak-to-No Correlation Between LLM Embedding Distance and Human Typicality Judgments. Scatter plot examples of the cosine similarity versus the human typicality of items belonging to the category, compared to items from other categories.
Figure 6: Weak and Mostly Non-Significant Spearman Correlation Values Between Human Typicality Judgments and LLM Cosine Similarity, Indicating Different Structure Representing Concepts. Mean Spearman correlation values across the models belonging to the same family and across the three datasets.

Figure 7: Human Conceptual Categories Exhibit Higher Mean Entropy than LLM-Derived Clusters. Mean cluster entropy ($S_\alpha$) versus the number of clusters ($K$) for various LLMs, compared against human-defined categories (represented as distinct points or lines at their fixed $K$ values from the datasets). Higher entropy values indicate less compressed or more diverse clusterings.

Figure 8: LLMs Achieve a More "Optimal" Compression-Meaning Trade-off by the $\mathcal{L}$ Measure. IB-RDT objective ($\mathcal{L}$) vs. $K$. Lower $\mathcal{L}$ indicates a more optimal balance between compression ($I(X;C)$) and semantic fidelity (distortion). Human categories (fixed $K$) show higher $\mathcal{L}$ values.
After Retrieval, Before Generation: Enhancing the Trustworthiness of Large Language Models in RAG

Xinbang Dai1* Huikang Hu2* Yuncheng Hua3 Jiaqi Li1 Yongrui Chen2 Rihui Jin2 Nan Hu2 Guilin Qi1,2,†
1 School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China
2 School of Computer Science and Engineering, Southeast University
3 School of Computer Science and Engineering, University of New South Wales
{xbdai, huikanghu, gqi}@seu.edu.cn

Abstract

Retrieval-augmented generation (RAG) systems face critical challenges in balancing internal (parametric) and external (retrieved) knowledge, especially when these sources conflict or are unreliable. To analyze these scenarios comprehensively, we construct the Trustworthiness Response Dataset (TRD) with 36,266 questions spanning four RAG settings. We reveal that existing approaches address isolated scenarios—prioritizing one knowledge source, naively merging both, or refusing answers—but lack a unified framework to handle different real-world conditions simultaneously. Therefore, we propose the BRIDGE framework, which dynamically determines a comprehensive response strategy for large language models (LLMs). BRIDGE leverages an adaptive weighting mechanism named soft bias to guide knowledge collection, followed by a Maximum Soft-bias Decision Tree to evaluate knowledge and select optimal response strategies (trust internal/external knowledge, or refuse). Experiments show BRIDGE outperforms baselines by 5–15% in accuracy while maintaining balanced performance across all scenarios. Our work provides an effective solution for LLMs' trustworthy responses in real-world RAG applications.

1 Introduction

Recent advancements in retrieval-augmented generation (RAG) have enhanced the accuracy and trustworthiness of large language models (LLMs) in knowledge-intensive tasks, mitigating critical issues such as outdated model knowledge and hallucinations [16, 26]. After retrieval, and before generating responses, LLMs often rely on two primary knowledge sources: context acquired through retrieval, referred to as external knowledge, and parametric knowledge stored within the LLM during training, termed internal knowledge. However, both knowledge sources exhibit limitations. External knowledge is prone to irrelevance, misinformation, or incompleteness due to corpus quality, retriever limitations, or query complexity [6, 55, 9]. For instance, in the NQ dataset [25] for open-domain question answering, [39] employed Google Search as the retriever and retrieved the top 10 web pages for each question, finding that 34% of these pages were completely irrelevant to the answer. In the PopQA dataset [31], this percentage even reached 50%. Similarly, internal knowledge may be outdated or erroneous, as LLMs are constrained by training data noise and the rapid evolution of world knowledge [42, 29].

In real-world RAG applications, the limitations of internal and external knowledge sources can lead to four distinct scenarios [39, 50] (as illustrated in Figure 1).

Figure 1: Four scenarios in real-world RAG.

In the ideal case, LLMs should adopt

* Equal contributors. † Corresponding author. Preprint.
scenario-specific strategies: (1) When both knowledge sources align and are correct, LLMs should be faithful to all knowledge when responding. (2) If external knowledge contains errors while internal knowledge remains reliable, LLMs should be faithful to internal knowledge. (3) Conversely, when internal knowledge is outdated or incorrect but the retrieved context provides valid information, LLMs should anchor their responses to the external evidence. (4) When neither source contains sufficient or credible information, LLMs should refuse to answer.

Existing research addresses these scenarios separately. When facing scenario (2) or (3), some works advocate aligning the LLM's response with one of the knowledge sources (either internal or external) [33, 7, 27, 43, 54]. Others attempt to integrate both sources [36, 1, 44, 39, 41, 53] but overlook scenario (4), failing to establish safeguards for when both knowledge sources are invalid. A third line of work focuses exclusively on refusal mechanisms for questions that exceed the boundaries of the knowledge sources [3, 37], yet neglects knowledge integration strategies. In summary, these isolated approaches apply specific strategies to only one or two of these situations and fail to reveal the inadequacies of LLMs when faced with multiple RAG scenarios. Hence, a unified framework capable of simultaneously addressing these diverse scenarios while ensuring LLMs' trustworthiness in RAG systems is still lacking.

To comprehensively evaluate LLMs' response strategies under the four RAG scenarios, we construct the Trustworthiness Response Dataset (TRD) containing 36,266 multiple-choice questions. Each question explicitly provides both external knowledge (simulating post-retrieval context) and internal knowledge (eliminating internal knowledge gaps between different LLMs). TRD comprises four RAG scenarios mirroring our proposed response strategies: (1) Faithful to All Knowledge (FA): We provide correct internal and external knowledge for the corresponding questions. (2) Faithful to Internal Knowledge (FI): We modify correct external knowledge with false facts to construct poisoned data. (3) Faithful to External Knowledge (FE): Standardizing data containing incorrect internal knowledge across LLMs from different families is complicated. Therefore, we use the latest Wikidata and Wikipedia facts (beyond the LLMs' internal knowledge cutoff date) to construct questions and external knowledge. This ensures the models lack up-to-date internal knowledge, thereby forcing them to rely solely on external knowledge for answers. (4) Refuse to Answer (RA): While ensuring that internal knowledge is outdated, we introduce false facts to corrupt the correct external knowledge, rendering the questions unanswerable from any knowledge source.

Our evaluation on TRD reveals that existing LLM families and RAG baselines fail to handle all scenarios effectively due to their lack of a unified understanding across these four cases.
However, human behavioral patterns in this task offer novel insights for addressing these scenarios. For instance, when confident in our internal knowledge, we perform limited external searches just for verification [12]. Conversely, for uncertain or time-sensitive questions, we conduct extensive retrieval and evaluate the consistency of external knowledge [35]. When no credible evidence exists, humans readily acknowledge ignorance and refuse to answer the question [14]—a critical behavior that current RAG systems struggle to emulate.

Inspired by these observations, we propose Biased Retrieval and Generation Evaluation (BRIDGE), a framework that enhances the trustworthiness of LLMs in RAG systems. Unlike prior work that either enforces strict alignment to a single knowledge source (hard bias) or treats all sources equally (no bias), BRIDGE introduces soft bias: an adaptive weighting mechanism in which an Allocator module determines whether to prioritize broad external knowledge retrieval (for consistency evaluation) or deeper internal knowledge generation (for fact verification). Based on the TRD training data, the Allocator supports both in-context learning (ICL) and group relative policy optimization (GRPO) paradigms. Then, based on the weighted knowledge, BRIDGE computes matching scores and evaluates consistency through a Maximum Soft-bias Decision Tree, a bias-guided decision tree enabling fine-grained response strategy selection (e.g., trusting internal knowledge, following external evidence, or refusing to answer). Additionally, it incorporates a reflection strategy to rectify decision errors during inference.

In summary, our contributions are as follows:
• To comprehensively evaluate LLM response strategies in real-world RAG, we construct TRD and use it to reveal the limitations of prior isolated solutions.
• We propose BRIDGE, a unified approach introducing soft bias to balance internal and external knowledge dynamically, a Maximum Soft-bias Decision Tree, and a Reflection module, covering all scenarios. Furthermore, BRIDGE can be applied in both ICL and GRPO settings.
• Experiments show that BRIDGE achieves superior and balanced performance on TRD and on other trustworthiness benchmarks targeting single RAG scenarios.

2 Related Works

Retrieval-augmented Generation. The RAG paradigm has evolved significantly since its inception, driven by the need to ground LLMs in external knowledge while mitigating hallucinations. Early foundational work established the core framework of RAG [16, 26, 2]. Subsequently, its retrieval module has been optimized through techniques such as query rewriting [52, 10, 4], passage reranking [15, 49], and adaptive, efficient retrieval strategies [1, 20, 48]. In the generation module, recent efforts have focused on improving generation efficiency via prompt compression [19, 45, 8] and decoding optimization [32, 21, 40]. These approaches primarily enhance individual RAG components in non-adversarial settings, whereas our research focuses on adversarial conditions (internal and external knowledge containing conflicts or noise).

Trustworthiness of LLMs in RAG. Ensuring the trustworthiness of LLMs in RAG systems has become a critical research area, particularly in adversarial environments.
For scenarios where internal and external knowledge sources conflict, some studies align model responses with internal knowledge [33, 46, 17, 43], while others prioritize external knowledge [27, 13, 54]. Beyond these binary choices, recent efforts to improve the trustworthiness of LLM responses attempt to merge conflicting sources through contrastive decoding [22, 36] or by integrating consistent portions of internal and external knowledge [44, 39, 41, 53].
However, these methods rely on carefully designed prompt templates to assess knowledge consistency. This matching strategy is sensitive to textual variations and lacks mechanisms for handling unanswerable questions. Previous work on enhancing LLM refusal capabilities in RAG systems relies on building expert knowledge bases [3] or fine-tuning generative models [37]. However, these methods often fail to integrate multiple knowledge sources and lack plug-and-play compatibility with black-box model-based RAG systems. Collectively, these works are orthogonal to each other, typically targeting only one or two scenarios. In contrast, BRIDGE provides a unified framework that systematically addresses all four RAG scenarios through soft bias allocation and a scored decision tree.

3 TRD Construction

In this section, we explain the construction process of TRD. We formalize our task scenario as follows: given a natural language question q, the system retrieves external knowledge K_ext and generates explicit internal knowledge K_int by answering q without accessing K_ext. Each question in TRD is associated with four options: one correct option and three incorrect options, and one of the options is always "I don't know". The dataset is divided into a training set (29,012 questions), a validation set (3,627 questions), and a test set (3,627 questions). TRD simulates the four RAG scenarios displayed in Figure 1. Below, we detail the construction process.

3.1 Data Preparation

We utilize two primary datasets to help construct the multiple-choice questions in TRD. (1) NQ: NQ is an open-domain question answering dataset released by Google in 2019 [25]. The accessible dataset contains 315,203 question-answer (QA) pairs and provides a long answer (collected from Wikipedia by annotators) and a short answer (one or more entities) for each question. We use NQ to generate questions for the FA and FI categories. (2) TAQA: A dataset of 20,148 time-sensitive QA pairs derived from Wikipedia tables, where the answers to the questions change over time [51]. We use TAQA to construct part of the questions for the FE and RA categories.

3.2 Question Generation

Faithful to All Knowledge. We select QA pairs from the NQ training set that contain long answers and short answers that are not timestamps. The short answer serves as the correct option. Using a popularity-based entity substitution method [28], we generate two incorrect answers from Wikidata that have the same entity type as the short answer. The long answer is treated as external knowledge, while GPT-4-turbo generates internal knowledge based on the question and the short answer. We believe that explicitly providing internal knowledge helps to eliminate the parametric knowledge gap among LLMs from different families during evaluation.

Faithful to Internal Knowledge. Following the same procedure as above, we select questions from the NQ validation set with long answers and non-temporal short answers, and generate two incorrect options of the same entity type using the popularity-based entity substitution method [28]. However, in this setting, we replace the correct entity in the long answer with one of the incorrect entities, resulting in misleading external knowledge. The internal knowledge, generated by GPT-4-turbo from the original short answer, remains correct.
Faithful to External Knowledge. To simulate scenarios where LLMs must rely solely on external knowledge due to outdated or absent internal knowledge, we define two distinct question sub-types: (1) Evolution: This category targets factual updates to existing facts/events (e.g., Who won the latest Nobel Prize in Literature?). LLMs trained with a cutoff date (February 2023 in this paper) cannot track such updates. The TAQA dataset focuses on this type of question. For each question, we query its latest answer in Wikipedia (February 1, 2025 dump), retaining only questions with valid Wikipedia pages. Then, using the prompt templates from ConflictBank [38], we use GPT-4-turbo to generate a Wikipedia-style evidence paragraph from the correct (post-cutoff) answer as external knowledge. For the same question, we generate a parallel evidence paragraph describing TAQA's 2021 (pre-cutoff) answer using the same templates, simulating outdated internal knowledge. The correct answer is the post-cutoff entity; we use the 2020 and 2021 answers in TAQA as incorrect options. (2) Perpetuation: This category addresses entirely new post-cutoff entities/events (e.g., What is the CPU of the iPhone 16?), which are unknown to LLMs whose training data ends in 2023. First, we execute SPARQL queries (see https://www.wikidata.org/wiki/Wikidata:SPARQL_tutorial) on Wikidata to extract timestamped triples whose object entity was created after February 2023 (e.g., <iPhone 16, CPU, A18>). For these triples, we retrieve Wikipedia paragraphs (February 1, 2025 dump) containing both the subject and object entities as external knowledge; triples without corresponding paragraphs are filtered out. Internal knowledge is explicitly set to "None" to eliminate parametric knowledge interference across LLMs. We select half of this data as FE data; the other half is used to generate RA data.
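As a hedged illustration of this extraction step (the paper does not give its exact query), the sketch below uses the public Wikidata endpoint and P571 (inception) as the creation date of the object entity; the endpoint usage, property choice, and filtering are assumptions.

```python
# Illustrative sketch of extracting triples whose object entity was created
# after February 2023. Not the paper's code; in practice ?p would be
# restricted to a whitelist of properties, since an unbounded predicate scan
# will time out on the public endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="trd-construction-sketch/0.1 (research use)")
sparql.setQuery("""
SELECT ?s ?p ?o WHERE {
  ?s ?p ?o .
  ?o wdt:P571 ?created .                  # P571 = inception / creation date
  FILTER(?created >= "2023-02-01T00:00:00Z"^^xsd:dateTime)
}
LIMIT 50
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```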
Refuse to Answer. We replace the correct answer in the external knowledge with an outdated distractor entity (e.g., altering A18 to A17 in the iPhone 16 example). This ensures that no credible evidence exists across any knowledge source, requiring LLMs to abstain.

3.3 Question Validation

We use GPT-4-turbo and Llama3 70B to validate questions, ensuring: (1) For FA, both models can select the correct answer using either knowledge source. (2) For FI, models must choose the correct answer using internal knowledge but a consistent incorrect answer with external knowledge. (3) For FE, models must answer correctly with external knowledge but fail with internal knowledge. (4) For RA, both models select wrong answers using either knowledge source. After validation, we balance the dataset, limiting FA to 10% of the total (since it is not the primary focus of our research) and allocating 30% each to the other three categories. Furthermore, the construction of time-related questions is automated, allowing them to be updated as Wikipedia and Wikidata change, thus meeting the future evaluation needs of LLMs with evolving internal knowledge. For TRD construction details and statistics, please see Appendix A.

4 Biased Retrieval and Generation Evaluation

Figure 2: Overview of BRIDGE.

As illustrated in Figure 2, BRIDGE operates through two sequential phases: (1) Bias-Guided Knowledge Collection: This phase constructs an Allocator that computes a retrieval dependency probability r_p and a generation dependency probability g_p to determine the degree of reliance on external and internal knowledge (constrained by r_p + g_p = 100%). We implement two Allocator paradigms: a reinforcement learning approach based on GRPO and an ICL method (§ 4.1.1). Subsequently, the system generates n sub-queries capturing the key information of q, allocating the number of sub-queries according to r_p and g_p. For these sub-queries, we obtain generated knowledge K_gen from the Multi-query Generator and retrieved knowledge K_ret from the Multi-query Retriever (§ 4.1.2). (2) Bias-Guided Knowledge Evaluation: This phase employs a BGE-m3 [5] based encoder as a Scorer to compute multi-granularity matching scores between knowledge sources (§ 4.2.1). These scores feed into a Maximum Soft-bias Decision Tree that determines the optimal response strategy under the bias weights and thresholds. Furthermore, a Reflection module triggers re-execution when the decision tree detects potential decision errors (§ 4.2.2).

4.1 Bias-Guided Knowledge Collection

4.1.1 Soft Bias Allocator

Determining the soft bias (i.e., r_p and g_p) for a single question presents significant challenges. For instance, the question What is the CPU of the iPhone 16? contains no time-sensitive keywords, requiring LLMs to assign bias probabilities by analyzing the question's semantics and assessing their internal knowledge boundaries. To enhance this capability, we leverage TRD's training data to synthesize annotated examples with analysis paths. We develop two Allocator paradigms: (1) a GRPO-optimized Llama3-8B-Instruct model (pretraining cutoff: February 2023), and (2) an ICL-based approach.

Soft Bias Analysis Data Construction. We synthesize data through three steps: (1) We set the retrieval dependency probability $r_p^{hard}$ and the generation dependency probability $g_p^{hard}$ based on the scenario type in TRD; these can be regarded as the golden bias labels. For TRD's FA/FI categories, we set $r_p^{hard} = 0\%$, $g_p^{hard} = 100\%$; for RA and FE, $g_p^{hard} = 0\%$, $r_p^{hard} = 100\%$. (2) Using DeepSeek-R1 and o3-mini as reasoning models, we prompt them to generate an analysis path a together with $r_p^{soft}$ and $g_p^{soft}$ (10%–90%) under explicit directional constraints from the hard bias labels (e.g., $g_p^{soft} > r_p^{soft}$ for FA/FI). (3) We select the generated output with the smaller root mean square loss between the soft bias and the hard bias labels as high-quality data.

Allocator (GRPO). We employ GRPO to fine-tune our Allocator, using a composite reward function to optimize Llama3-8B-Instruct. We instruct the model to output a tuple $\gamma = (a_{pred}, r_p, g_p)$, comprising the analysis path $a_{pred}$, the predicted retrieval dependency probability $r_p$, and the generation dependency probability $g_p$. The training incorporates four distinct reward components:

Direction Reward: Ensures that the primary direction of the soft bias aligns with the corresponding hard bias. For question q, the direction reward function is defined as:

$$R(\gamma, q; \text{direction}) = \begin{cases} 3 & \text{if } (r_p > g_p \text{ and } r_p^{hard} > g_p^{hard}) \text{ or } (r_p < g_p \text{ and } r_p^{hard} < g_p^{hard}) \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

Format Reward: Verifies the output format completeness of $\gamma$ through regular expression matching:

$$R(\gamma, q; \text{format}) = \begin{cases} 1 & \text{if } r_p, g_p \text{ and } a_{pred} \text{ appear in } \gamma \\ 0 & \text{otherwise} \end{cases} \quad (2)$$

Sum Reward: Enforces the $r_p + g_p = 100\%$ constraint:

$$R(\gamma, q; \text{sum}) = \begin{cases} 1 & \text{if } r_p + g_p = 100\% \\ 0 & \text{otherwise} \end{cases} \quad (3)$$

Analysis Quality Reward: Promotes semantic similarity between the model's predicted analysis $a_{pred}$ and the high-quality analysis paths a generated by larger reasoning models. We compute this reward using BGE embeddings to measure textual similarity:

$$R(\gamma, q; \text{analysis}) = \mathrm{BGE}(a_{pred}, a) \quad (4)$$
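A minimal sketch of these four reward components (Eqs. 1–4) follows; it assumes the model output has already been parsed into the tuple γ and that analysis paths come pre-embedded as unit-norm BGE vectors. All names are illustrative, not the paper's code.

```python
# Sketch of the composite GRPO reward (Eqs. 1-4). Placeholder names.
import numpy as np

def direction_reward(r_p, g_p, r_hard, g_hard):
    # Eq. (1): 3 when the soft bias points the same way as the hard bias.
    if (r_p > g_p and r_hard > g_hard) or (r_p < g_p and r_hard < g_hard):
        return 3.0
    return 0.0

def format_reward(gamma):
    # Eq. (2): 1 when gamma contains a_pred, r_p and g_p (the paper checks the
    # raw output with regular expressions; here gamma is assumed pre-parsed).
    a_pred, r_p, g_p = gamma
    return 1.0 if all(v is not None for v in (a_pred, r_p, g_p)) else 0.0

def sum_reward(r_p, g_p):
    # Eq. (3): 1 when r_p + g_p = 100%.
    return 1.0 if abs(r_p + g_p - 100.0) < 1e-6 else 0.0

def analysis_reward(a_pred_emb, a_gold_emb):
    # Eq. (4): BGE(a_pred, a) as embedding similarity (unit-norm vectors).
    return float(np.dot(a_pred_emb, a_gold_emb))

def total_reward(gamma, r_hard, g_hard, a_pred_emb, a_gold_emb):
    _, r_p, g_p = gamma
    return (direction_reward(r_p, g_p, r_hard, g_hard)
            + format_reward(gamma)
            + sum_reward(r_p, g_p)
            + analysis_reward(a_pred_emb, a_gold_emb))
```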
Allocator (ICL). We also implement an in-context learning (ICL) paradigm by retrieving the top-k most similar training examples for each TRD question. Instead of relying on soft bias outputs from larger reasoning models, we convert the golden hard-bias presets into soft-bias labels (e.g., 100% → 90%, 0% → 10%) to construct pseudo soft-bias demonstrations. This ensures that BRIDGE's performance improvements are not attributable to external reasoning model outputs. The analysis is discussed in § 5.2, while implementation details for GRPO and ICL in the Allocator are provided in Appendix B.

4.1.2 Knowledge Collection

Relying solely on the original question for retrieval often yields limited information, prompting recent work to focus on improving retrieval performance in RAG systems to capture broader knowledge [43]. Similarly, generating multiple responses to knowledge-related questions can improve LLM self-consistency and mitigate hallucinations [7]. To expand the knowledge search space, we leverage LLMs to analyze a given question and identify the key information required for its resolution. For each piece of key information, the LLM generates a targeted query. These sub-queries act as "probes," enabling both the retriever and the LLM to explore a wider range of knowledge within their respective domains. Specifically, we allocate sub-queries proportionally to each knowledge source based on the Allocator's soft bias probabilities. Given n total sub-queries, the number of queries assigned to retrieved knowledge, $s_r$, and to generated knowledge, $s_g$, is computed as $s_r = \lceil r_p \times n \rceil$ and $s_g = \lceil g_p \times n \rceil$, respectively. The first $s_r$ sub-queries are processed by the Multi-query Retriever, aggregating external documents into the retrieved knowledge K_ret, while the first $s_g$ sub-queries are answered by the Multi-query Generator, forming the generated knowledge K_gen. This adaptive allocation improves the effectiveness of knowledge searching.
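A small sketch of this proportional split (names and the fractional convention for r_p/g_p are illustrative assumptions):

```python
# Proportional sub-query allocation: s_r = ceil(r_p * n), s_g = ceil(g_p * n).
# Per the paper, both sides take a prefix of the sub-query list, so some
# sub-queries may be sent to retrieval and generation alike.
import math

def allocate_subqueries(sub_queries, r_p, g_p):
    n = len(sub_queries)
    s_r = math.ceil(r_p * n)   # sub-queries for the Multi-query Retriever
    s_g = math.ceil(g_p * n)   # sub-queries for the Multi-query Generator
    return sub_queries[:s_r], sub_queries[:s_g]

retrieval_qs, generation_qs = allocate_subqueries(
    ["q1", "q2", "q3", "q4"], r_p=0.8, g_p=0.2)
print(len(retrieval_qs), len(generation_qs))  # 4 and 1 under an 80%-20% bias
```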
4.2 Bias-Guided Knowledge Evaluation

4.2.1 Knowledge Scorer

Following [5], our BGE-based Scorer computes three similarity metrics between knowledge sources: sparse matching for lexical overlap, dense matching for semantic similarity, and ColBERT [24] matching for token-level relevance, weighted as 0.2, 0.4, and 0.4 respectively by grid search on the TRD validation set. This multi-granular approach better assesses knowledge relevance.

4.2.2 Maximum Soft-bias Decision Tree

To determine the optimal integration strategy, we develop a Maximum Soft-bias Decision Tree that evaluates knowledge consistency through the four knowledge matching scores and the soft bias, helping the LLM make the final decision. As shown in Figure 2, we obtain four matching scores from the Scorer. Based on these scores and the soft bias, the tree executes three sequential checks:

Conflict Detection. FA is triggered if $S_1 + S_4 > (1 - S_1) + (1 - S_4)$ (implying high consistency between the two knowledge sources), in which case the LLM integrates all knowledge.

Confidence Calculation. We compute trustworthiness scores for the retriever and the LLM:

$$T_{Ret} = r_p \, [S_3 + (1 - S_2)] \quad (5)$$
$$T_{LLM} = g_p \, [S_2 + (1 - S_3)] \quad (6)$$

This formulation reflects two key insights: (1) a higher dependency probability ($r_p$ or $g_p$) amplifies the impact of the corresponding knowledge matches, and (2) a mismatch within one knowledge source reduces its trustworthiness score, while a match within the other knowledge source also decreases its score. For example, poor alignment between generated knowledge and internal knowledge ($S_2 \to 0$), along with good alignment between retrieved knowledge and external knowledge ($S_3 \to 1$), both contribute to lowering $T_{LLM}$.

Threshold-based Decision. As shown in Figure 2, we employ two thresholds ($\alpha = 0.5$, $\beta = 1.1$) determined through grid search on TRD's validation set. For cases where $T_{Ret} < T_{LLM}$, the system adopts FI when $T_{LLM} > \alpha$, otherwise triggering RA. When $T_{Ret} \geq T_{LLM}$, three outcomes emerge: (1) FE if $T_{Ret} \geq \beta$, (2) RA if $T_{Ret} < \alpha$, or (3) the Reflection mechanism adjusts the sub-queries for intermediate cases ($\alpha \leq T_{Ret} < \beta$). After three unsuccessful cycles, the system defaults to RA as a conservative response. The asymmetric thresholds ($\beta > \alpha$) mean that the retrieved knowledge source must clear a higher trustworthiness bar ($\beta = 1.1$) due to potential retrieval noise.
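The full tree can be summarized in a few lines. In the sketch below, S1–S4 stand for the Scorer's four matching scores (each a 0.2/0.4/0.4 weighted combination of sparse, dense, and ColBERT matching, per § 4.2.1), and all function and variable names are illustrative assumptions rather than the paper's code.

```python
# Sketch of the Maximum Soft-bias Decision Tree (conflict check, Eqs. 5-6,
# and the threshold rules). r_p and g_p are given as fractions here.
ALPHA, BETA = 0.5, 1.1  # grid-searched on TRD's validation set

def decide(S1, S2, S3, S4, r_p, g_p):
    # Conflict detection: high cross-source consistency -> integrate all (FA).
    if S1 + S4 > (1 - S1) + (1 - S4):
        return "FA"
    # Confidence calculation (Eqs. 5 and 6).
    T_ret = r_p * (S3 + (1 - S2))
    T_llm = g_p * (S2 + (1 - S3))
    # Threshold-based decision.
    if T_ret < T_llm:
        return "FI" if T_llm > ALPHA else "RA"
    if T_ret >= BETA:
        return "FE"
    if T_ret < ALPHA:
        return "RA"
    return "REFLECT"  # alpha <= T_ret < beta: adjust sub-queries and retry,
                      # defaulting to RA after three unsuccessful cycles

print(decide(S1=0.2, S2=0.3, S3=0.8, S4=0.3, r_p=0.8, g_p=0.2))  # -> "FE"
```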
Finally, our framework dynamically adjusts response strategies based on the predicted decision: FA employs all knowledge; FI relies solely on K_int and K_gen; FE prioritizes K_ext and K_ret; and RA responds with "I don't know". The theoretical analysis of GRPO, the hyper-parameter settings, and all system prompts in BRIDGE are detailed in Appendix B.

5 Experiments

5.1 Experimental Setups

Datasets. We evaluate all methods on four datasets. (1) TRD: Our constructed test set. (2) RealtimeQA (RQA) [23]: A dynamic news QA platform. We select 468 questions emerging after October 2023, providing their corresponding evidence as external knowledge. This dataset aligns with the FE scenario. (3) HotpotQA Poisoned (HQA_P) [47]: We use the version from RobustRAG [44], where external knowledge is corrupted via PoisonedRAG [55]. We then generate valid internal knowledge from the correct answers using the method in § 3.2. This dataset corresponds to the FI scenario. (4) ConflictBank [38]: A multiple-choice dataset containing temporally misaligned facts fabricated from Wikidata entries with future timestamps. We use the original correct evidence as internal knowledge and the fabricated facts as external knowledge, and randomly sample 1,000 questions to evaluate LLMs under the FI scenario.

RAG settings & Baselines. (1) RAG settings: We evaluate RAG baselines using three LLMs (GPT-3.5-turbo, Qwen 72B, and Llama3-8B-Instruct) with parametric knowledge cutoffs before February 2023. To ensure fairness, all methods employ identical retriever parameters and are granted access to the English Wikipedia dump (cutoff date: February 1, 2025). (2) Baselines: Our baselines cover various aspects of LLM trustworthiness in RAG systems. Prompt-based methods include OPIN [54], USC [7], RobustRAG [44], InstructRAG_ICL [41], AstuteRAG [39], and TrustRAG [53]. GPT does not support deterministic decoding; hence, we report results averaged over three runs. Fine-tuning-based methods encompass SelfRAG [1], CAD [36], InstructRAG_FT [41], and TRUST-ALIGN [37]. Detailed settings are available in Appendix C.

5.2 Experiment Results

Main Results. As shown in Table 1, compared to vanilla RAG, OPIN, which prioritizes external knowledge, achieves notable gains on RQA but suffers performance drops on HQA_P and ConflictBank. Conversely, USC, which enforces internal knowledge consistency, excels on HQA_P and ConflictBank but deteriorates sharply on RQA. Knowledge integration methods (RobustRAG, InstructRAG, AstuteRAG, and TrustRAG) demonstrate consistent improvements over vanilla RAG on TRD, underscoring the effectiveness of combining different knowledge sources.
However, their lower RR scores indicate that their strategies for refusing to answer are insufficiently considered. CAD, which lacks mechanisms to resolve knowledge conflicts or reject uncertain questions, underperforms across most scenarios. In contrast, SelfRAG, InstructRAG_FT, and TRUST-ALIGN, which dynamically select knowledge sources or validate knowledge boundaries, achieve significantly higher RR rates. Nevertheless, their simple knowledge integration strategies limit their broader applicability across all scenarios. As described in § 4.1.1, BRIDGE_ICL, despite relying solely on templated soft-bias demonstrations (without using data from large-scale reasoning models), outperforms the baselines, validating that the benefits stem from the soft bias design rather than from synthetic data produced by larger reasoning models. BRIDGE_GRPO further excels with its adaptive bias allocation, achieving balanced performance across all scenarios. Notably, BRIDGE's Acc and RR sum to 100% on the ConflictBank dataset when using GPT-3.5-Turbo and Qwen 72B. This demonstrates our model's highly reliable response strategy: when unable to provide a correct answer from its internal knowledge, it consistently chooses to abstain rather than risk an incorrect output.

Table 1: Overall results, reporting accuracy (Acc), refusal rate (RR, regardless of question answerability), and exact match score (EM) [34]. The best results are highlighted in bold.

Method  TRD Acc(%)  TRD RR(%)  RQA EM(%)  HQA_P EM(%)  ConflictBank Acc(%)  ConflictBank RR(%)
Based on GPT-3.5-turbo
Vanilla  53.71  5.81  72.50  44.40  52.04  0.20
OPIN [54]  63.93  3.34  76.25  23.47  48.03  2.00
USC [7]  65.53  3.01  26.25  70.70  73.46  1.08
RobustRAG [44]  62.50  6.69  65.82  64.00  58.40  3.04
InstructRAG_ICL [41]  65.55  4.29  74.36  57.30  57.45  1.71
AstuteRAG [39]  64.26  2.81  74.03  43.43  57.45  1.66
TrustRAG [53]  66.39  3.42  78.48  40.00  52.01  2.50
BRIDGE_ICL  72.32  31.40  76.92  70.97  61.86  38.14
BRIDGE_GRPO  77.90  33.33  78.48  72.16  75.31  24.69
Based on Qwen 72B
Vanilla  63.19  9.50  70.25  61.61  57.83  0.61
OPIN [54]  63.40  3.41  75.95  32.65  42.22  4.44
USC [7]  64.20  4.80  28.75  76.00  70.74  2.04
RobustRAG [44]  64.33  7.28  74.25  65.00  59.72  3.04
InstructRAG_ICL [41]  64.69  4.71  73.75  67.32  56.80  1.08
AstuteRAG [39]  66.29  3.60  75.00  66.35  66.89  1.73
TrustRAG [53]  66.41  3.25  74.68  67.47  63.75  1.25
BRIDGE_ICL  74.20  39.56  76.25  72.15  67.35  32.65
BRIDGE_GRPO  85.48  32.34  84.06  75.00  69.39  30.61
Based on Llama3 8B-Instruct
Vanilla  42.20  1.80  68.75  57.57  46.00  0.81
SelfRAG [1]  47.01  14.97  61.25  55.42  61.25  8.18
CAD [36]  24.40  1.50  55.00  35.00  39.20  1.95
InstructRAG_FT [41]  51.81  13.43  69.30  70.70  60.78  15.00
TRUST-ALIGN [37]  53.05  18.11  72.50  69.30  65.61  5.10
BRIDGE_ICL  62.94  23.86  78.75  71.72  59.60  31.46
BRIDGE_GRPO  73.33  19.30  80.00  78.72  71.72  28.28

Detailed Results. As shown in Figure 3, we compare the response accuracy of different methods across the various RAG scenarios in TRD. BRIDGE demonstrates balanced capabilities across all scenarios, achieving performance comparable to state-of-the-art baselines in the FA, FI, and FE cases while attaining optimal performance in the RA scenario. Figure 4 presents the error response strategy distribution of BRIDGE based on GPT-3.5-Turbo. The results reveal that FA exhibits the lowest error probability, whereas FI shows the highest.
Notably, most FI cases are misclassified as FA, preserving the model's ability to access the internal knowledge source. For FE misclassifications, the model typically adopts a conservative rejection strategy to maintain credibility. The misclassification of RA as FE represents an understandable failure mode, as current RAG baselines tend to favor incorrect external knowledge over refusing to answer. Our method minimizes such potential harm through its carefully designed response mechanism.

Table 2: Ablation study of BRIDGE, evaluated by Acc(%).

Method  GPT-3.5-turbo (Total / FA / FI / FE / RA)  Qwen 72B (Total / FA / FI / FE / RA)
BRIDGE_ICL  72.32 / 87.93 / 92.48 / 61.64 / 56.58  74.20 / 82.98 / 92.17 / 54.62 / 74.78
BRIDGE_GRPO  77.90 / 94.74 / 99.24 / 75.35 / 50.79  85.48 / 94.34 / 99.84 / 90.91 / 61.60
w/o Allocator  55.05(↓) / 71.43(↓) / 74.83(↓) / 8.21(↓) / 77.44(↑)  53.73(↓) / 63.79(↓) / 72.99(↓) / 7.80(↓) / 78.20(↑)
w/o Decision Tree (ICL)  66.67(↓) / 94.83(↑) / 97.74(↑) / 79.31(↑) / 7.75(↓)  69.12(↓) / 82.84(↑) / 97.35(↑) / 41.66(↓) / 13.81(↓)
w/o Decision Tree (GRPO)  66.74(↓) / 98.25(↑) / 96.21(↓) / 78.87(↑) / 7.94(↓)  72.22(↓) / 90.57(↓) / 98.10(↓) / 82.29(↓) / 19.32(↓)

Figure 3: Acc in different RAG scenarios.

Figure 4: Misclassification distribution of BRIDGE based on GPT-3.5-Turbo.

Ablation Study. We perform an ablation study to evaluate the significance of the Allocator and the Decision Tree, as shown in Table 2. (1) w/o Allocator: We remove the Allocator module, setting both r_p and g_p to 100% in BRIDGE. This eliminates soft bias guidance for knowledge collection and evaluation. The results show performance degradation in the knowledge integration RAG scenarios (FA, FI, FE) within TRD compared to BRIDGE's optimal performance. (2) w/o Decision Tree (ICL) & w/o Decision Tree (GRPO): Disabling the Maximum Soft-bias Decision Tree, we directly provide the LLM with the knowledge and matching scores for final response generation. While this improves performance in some knowledge-integration scenarios, it causes significant performance drops in RA scenarios. The ablation study demonstrates that both modules play a critical role in enabling the LLM to maintain balanced response capabilities across all four RAG scenarios.

Figure 5: Soft bias of Allocator for different RAG scenarios.

Allocator Performance. To further investigate the Allocator's effectiveness, we examine its allocation behavior in BRIDGE_GRPO mode across different RAG scenarios. As shown in Figure 5, the Allocator demonstrates accurate classification capability. The predominant allocation patterns of [80%-20%] and [20%-80%] indicate that the model can establish a correct soft bias direction for different RAG scenarios. This pronounced bias allocation significantly enhances the decision-making process within the decision tree, contributing to reliable responses.

Efficiency Analysis. In our experiments, the reflection mechanism is activated in approximately 10% of all cases. As shown in Figure 6, within these activated instances, the model's problem-solving accuracy positively correlates with the number of reflection iterations.
However, as a plug-and-play component, we carefully considered the efficiency trade-offs. Under ideal conditions, BRIDGE can resolve a question with 5 LLM API calls. When accounting for reflection operations, the average number of API calls per query reaches 5.89 on the TRD dataset, a number comparable to other prompt-based methods. Importantly, this modest increase in computational overhead enables BRIDGE to maintain balanced performance across diverse scenarios. We discuss the efficiency of the Reflection module in Appendix C.

6 Conclusion

Figure 6: Acc under different reflection iterations.

This paper addresses the critical challenge of balancing internal and external knowledge in retrieval-augmented generation (RAG) systems. First, we construct TRD to enable comprehensive evaluation. Then, we propose BRIDGE, a unified framework that introduces soft bias to dynamically weight knowledge sources, an Allocator module to guide retrieval/generation, and a Maximum Soft-bias Decision Tree to select optimal response strategies. Experiments demonstrate BRIDGE's superiority over baselines, achieving balanced performance across all cases while significantly improving refusal rates in adversarial settings. Our work provides an effective and balanced approach to enhancing the trustworthiness and robustness of RAG systems in real-world applications.

References

[1] Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In International Conference on Learning Representations, 2023.
[2] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR, 2022.
[3] Lang Cao. Learn to refuse: Making large language models more controllable and reliable through knowledge scope limitation and refusal mechanism. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3628–3646, 2024.
[4] Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, and Jie Fu. RQ-RAG: Learning to refine queries for retrieval augmented generation. In First Conference on Language Modeling, 2024.
[5] Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. M3-embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 2318–2335, 2024.
[6] Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17754–17762, 2024.
[7] Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, and Denny Zhou. Universal self-consistency for large language models. In ICML 2024 Workshop on In-Context Learning, 2024.
[8] Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, and Dongyan Zhao. xRAG: Extreme context compression for retrieval-augmented generation with one token. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[9] Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, and Jun Xu. Bias and unfairness in information retrieval systems: New challenges in the LLM era. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6437–6447, 2024.
[10] Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith Hall, and Ming-Wei Chang. Promptagator: Few-shot dense retrieval from 8 examples. In The Eleventh International Conference on Learning Representations, 2023.
[11] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
[12] Wai-Tat Fu and Wayne D Gray. Suboptimal tradeoffs in information seeking. Cognitive Psychology, 52(3):195–242, 2006.
[13] Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. TrueTeacher: Learning factual consistency evaluation with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2053–2070, 2023.
[14] Gerd Gigerenzer. Rationality for Mortals: How People Cope with Uncertainty. Oxford University Press, 2010.
[15] Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. Re2G: Retrieve, rerank, generate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2701–2715, 2022.
[16] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR, 2020.
[17] Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, and Joyce Whang. Why so gullible? Enhancing the robustness of retrieval-augmented models against counterfactual noise. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2474–2495, 2024.
[18] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
[19] Wenqi Jiang, Shuai Zhang, Boran Han, Jie Wang, Bernie Wang, and Tim Kraska. PipeRAG: Fast retrieval-augmented generation via algorithm-system co-design. arXiv preprint arXiv:2403.05676, 2024.
[20] Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7969–7992, 2023.
[21] Chao Jin, Zili Zhang, Xuanlin Jiang, Fangyue Liu, Xin Liu, Xuanzhe Liu, and Xin Jin. RAGCache: Efficient knowledge caching for retrieval-augmented generation. arXiv preprint arXiv:2404.12457, 2024.
[22] Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, Xiaojian Jiang, Jiexin Xu, Qiuxia Li, and Jun Zhao. Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, pages 16867–16878, 2024.
[23] Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, Kentaro Inui, et al. RealTime QA: What's the answer right now? Advances in Neural Information Processing Systems, 36, 2024.
[24] Omar Khattab and Matei Zaharia. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39–48, 2020.
Colbert: Efficient and effective passage search via contextualized late interaction over
https://arxiv.org/abs/2505.17118v1
bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval , pages 39–48, 2020. [25] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics , 7:453–466, 2019. [26] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge- intensive nlp tasks. Advances in neural information processing systems , 33:9459–9474, 2020. [27] Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. Large language models with controllable working memory. In Findings of the Association for Computational Linguistics: ACL 2023 , pages 1774–1793, 2023. [28] Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , pages 7052–7063, 2021. [29] Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, et al. A pretrainer’s guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages 3245–3276, 2024. [30] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. [31] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 9802–9822, 2023. [32] Thomas Merth, Qichen Fu, Mohammad Rastegari, and Mahyar Najibi. Superposition prompting: Improving and accelerating retrieval-augmented generation. In Forty-first International Conference on Machine Learning , 2024. 11 [33] Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, and William Wang. On the risk of misinformation pollution with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 1389–1403, 2023. [34] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing , pages 2383–2392, 2016. [35] Soo Young Rieh. Judgment of information quality and cognitive authority in the web. Journal of the American society for information science and technology , 53(2):145–161, 2002. [36] Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, and Wen-tau Yih. Trusting your evidence: Hallucinate less with context-aware decoding. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers) , pages 783–791, 2024. 
[37] Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, and Soujanya Poria. Measuring and enhancing trustworthiness of LLMs in RAG through grounded attributions and learning to refuse. In The Thirteenth International Conference on Learning Representations, 2025.
[38] Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, and Yu Cheng. ConflictBank: A benchmark for evaluating the influence of knowledge conflicts in LLM. In The 38th Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[39] Fei Wang, Xingchen Wan, Ruoxi Sun, Jiefeng Chen, and Sercan Ö Arık. Astute RAG: Overcoming imperfect retrieval augmentation and knowledge conflicts for large language models. arXiv preprint arXiv:2410.07176, 2024.
[40] Zilong Wang, Zifeng Wang, Long Le, Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, Chen-Yu Lee, and Tomas Pfister. Speculative RAG: Enhancing retrieval augmented generation through drafting. In The Thirteenth International Conference on Learning Representations, 2025.
[41] Zhepei Wei, Wei-Lin Chen, and Yu Meng. InstructRAG: Instructing retrieval-augmented generation via self-synthesized rationales. In The Thirteenth International Conference on Learning Representations, 2025.
[42] Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021.
[43] Orion Weller, Aleem Khan, Nathaniel Weir, Dawn J Lawrie, and Benjamin Van Durme. Defending against disinformation attacks in open-domain question answering. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, pages 402–417, 2024.
[44] Chong Xiang, Tong Wu, Zexuan Zhong, David Wagner, Danqi Chen, and Prateek Mittal. Certifiably robust RAG against retrieval corruption. arXiv preprint arXiv:2405.15556, 2024.
[45] Fangyuan Xu, Weijia Shi, and Eunsol Choi. RECOMP: Improving retrieval-augmented LMs with context compression and selective augmentation. In The Twelfth International Conference on Learning Representations, 2024.
[46] Rongwu Xu, Brian S Lin, Shujian Yang, Tianqi Zhang, Weiyan Shi, Tianwei Zhang, Zhixuan Fang, Wei Xu, and Han Qiu. The earth is flat because...: Investigating LLMs' belief towards misinformation via persuasive conversation. arXiv preprint arXiv:2312.09085, 2023.
[47] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, 2018.
[48] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[49] Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, and Bryan Catanzaro. RankRAG: Unifying context ranking with retrieval-augmented generation in LLMs. Advances in Neural Information Processing Systems, 37:121156–121184, 2024.
[50] Hao Zhang, Yuyang Zhang, Xiaoguang Li, Wenxuan Shi, Haonan Xu, Huanshuo Liu, Yasheng Wang, Lifeng Shang, Qun Liu, Yong Liu, et al. Evaluating the external and parametric knowledge fusion of large language models. arXiv preprint arXiv:2405.19010, 2024.
[51] Bowen Zhao, Zander Brumbaugh, Yizhong Wang, Hannaneh Hajishirzi, and Noah Smith. Set the clock: Temporal alignment of pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 15015–15040, 2024.
[52] Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc V Le, and Denny Zhou. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117, 2023.
[53] Huichi Zhou, Kin-Hei Lee, Zhonghao Zhan, Yue Chen, and Zhenhao Li. TrustRAG: Enhancing robustness and trustworthiness in RAG. arXiv preprint arXiv:2501.00879, 2025.
[54] Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. Context-faithful prompting for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14544–14556, 2023.
[55] Wei Zou, Runpeng Geng, Binghui Wang, and Jinyuan Jia. PoisonedRAG: Knowledge corruption attacks to retrieval-augmented generation of large language models. arXiv preprint arXiv:2402.07867, 2024.

A Trustworthiness Response Dataset

A.1 Data Generation for Different RAG Scenarios

A.1.1 FA Data
The NQ training set consists of 104,072 data samples. Following the method described in [28], we perform entity matching based on popularity, using the short-answer entities to generate two incorrect answers of the same type as the correct entity. We filter out invalid questions with empty long answers, as well as samples for which fewer than two corresponding entities could be retrieved, resulting in 76,409 valid QA pairs. For external knowledge, we use the provided long answers. For internal knowledge generation, following the prompt template of CONFLICTBANK [38], we employ GPT-4-turbo with the following prompt template:

Prompt for Internal Knowledge Generation
Based on the question and answer, create a piece of evidence that answers the question.
1. You cannot answer this question yourself. You must create evidence based on the answer I give.
2. This piece of evidence should be informative and well-structured.
3. The evidence should be presented in the tone of a Wikipedia entry.
Question: {question}
Answer: {correct_answer}
Evidence:

A.1.2 FI Data
The NQ validation set contains 12,837 samples. Following a method similar to FA, we obtain 10,639 valid QA pairs after filtering and validation. We apply the approach from [28] to replace the long answer (external knowledge) with a false entity of the same type. For internal knowledge generation, we again use GPT-4-turbo with consistent prompt engineering to maintain comparability with the training setup.

A.1.3 FE Data
For the perpetuation data, we first identify entities emerging after February 2023 through SPARQL queries on Wikidata. The SPARQL query is:

    SELECT ?subject ?predicate ?object ?startTime WHERE {
      ?subject ?date ?startTime .
      ?subject ?predicate ?object .
      FILTER (?startTime >= "2023-03-01T00:00:00Z"^^xsd:dateTime)
    }

We filter out meaningless attribute relations (e.g., subclass) to ensure data quality. Subsequently, we search Wikipedia for paragraphs containing both the head and tail entities and use these paragraphs as external knowledge, discarding any triples without corresponding contextual passages. This process yields 9,523 valid triples with associated context. Using GPT-4-turbo, we convert these triples into question-answer pairs, where the tail entity serves as the answer; the internal knowledge of all of these QA pairs is None.
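For concreteness, the query above can be issued against the public Wikidata SPARQL endpoint. The endpoint URL and the JSON response layout below follow standard Wikidata conventions, but the surrounding code is only an illustrative sketch of such a pipeline, not released code; in practice the otherwise unconstrained triple patterns need a LIMIT (or more selective patterns) to stay within the endpoint's timeout.

    import requests

    WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

    # The query from A.1.3, with a LIMIT added so the public endpoint
    # does not time out on the unconstrained predicate variables.
    QUERY = """
    SELECT ?subject ?predicate ?object ?startTime WHERE {
      ?subject ?date ?startTime .
      ?subject ?predicate ?object .
      FILTER (?startTime >= "2023-03-01T00:00:00Z"^^xsd:dateTime)
    } LIMIT 1000
    """

    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "trd-data-builder/0.1 (example)"},  # Wikidata asks for a UA string
        timeout=60,
    )
    resp.raise_for_status()

    # Flatten the JSON bindings into (head, relation, tail) triples.
    triples = [
        (b["subject"]["value"], b["predicate"]["value"], b["object"]["value"])
        for b in resp.json()["results"]["bindings"]
    ]
    print(f"retrieved {len(triples)} candidate triples")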
For the evolution data, we leverage the TAQA dataset [51], which generates temporally sensitive question-answer pairs from Wikipedia tables containing timestamp columns. TAQA's methodology extracts event-related content from Wikipedia table rows at different timestamps, then employs GPT-4-turbo to generate the corresponding questions and answers. We enhance this dataset through additional processing: while the original TAQA provides only tables as contextual knowledge, we use the Wikipedia API to retrieve the full articles corresponding to the most recent answers as external knowledge. For internal knowledge, we incorporate answers from the 2020 and 2021 versions. This yields 11,326 QA pairs.

A.1.4 RA Data
We partition half of the FE data to construct the RA data while preserving the original internal knowledge. The external knowledge is deliberately corrupted using the method from [28] to generate toxic context.

A.1.5 Data Validation Prompt
The verification prompt is as follows:

Prompt for Data Validation
Answer the question by selecting the most accurate option based on the provided document; do not answer it by yourself. Return only the uppercase letter of the correct option. The output must follow this exact format:
Correct Option: [Letter]
Example: Correct Option: B
Document: {internal_knowledge or external_knowledge}
Question: {question}
Options: {options}

A.2 Details and Statistics
To construct a focused evaluation dataset, we perform filtering to retain samples from the FI, FE, and RA categories. For the FA category, which is not our primary research focus, we apply random sampling to keep its share at approximately 10% of the total dataset. Table 3 shows the data statistics, and Tables 4-9 show examples from TRD. The questions in TRD are highly similar in surface form, which significantly increases the difficulty for the LLM of deciding whether to rely on internal knowledge or external retrieval. For instance, questions such as "who won first season of americas got talent" (Table 5) and "who is the screenwriter of amelia's children" (Table 6) follow nearly identical syntactic patterns and lack explicit temporal keywords. This ambiguity requires careful knowledge calibration by the LLM to make appropriate source-selection decisions.

Table 3: Statistics of TRD.

    Split   Total   FA     FI      FE      RA      Avg qLen  Avg K_int Len  Avg K_ext Len
    Train   29,012  3,859  8,483   8,356   8,314   10.15     141.38         282.72
    Val     3,627   486    1,071   1,056   1,014   10.18     143.54         284.42
    Test    3,627   433    1,085   1,087   1,022   10.17     142.99         263.72
    Total   36,266  4,778  10,639  10,499  10,350  10.16     142.64         276.95

Table 4: An example of FA.

    Question: who played stefania's dad on everybody loves raymond
    Question Type: faithful to all knowledge
    Temporal Fact Type: none
    Internal Knowledge: David Proval's Role as Stefania's Dad on "Everybody Loves Raymond". David Proval is an accomplished American actor known for his versatile performances across television and film. In the popular sitcom "Everybody Loves Raymond," he portrayed the character of Frank, who is the father of Stefania, a recurring character introduced in the show's later seasons. Stefania, played by actress Katia Winter, is the love interest of Robert Barone, one of the main characters. Proval's portrayal of Stefania's father added depth to the storyline, showcasing the dynamics of family relationships and cultural differences. His character was depicted as traditional and protective, often leading to humorous situations that highlighted the contrast between his values and those of the younger generation. David Proval's performance was well-received, contributing to the show's success and its ability to blend humor with relatable family themes. His role in "Everybody Loves Raymond" is a testament to his talent and the show's enduring popularity in American television history.
    External Knowledge: Marco Fogagnolo (6 episodes; played by David Proval) – The intimidating and threatening Italian father of Stefania, who takes over Nemo's Pizza after the family moves to America, jokingly implying that he "did away with" Nemo. Although he dislikes Robert, he befriends Frank and Marie but the two end up fighting for his friendship.
    Internal Answer: David Proval
    External Answer: David Proval
    Options: A. Fort Wayne  B. I don't know.  C. Andrea  D. David Proval
    Correct Option: D

Table 5: An example of FI.

    Question: who won first season of americas got talent
    Question Type: faithful to internal knowledge
    Temporal Fact Type: none
    Internal Knowledge: Bianca Ryan - Winner of the First Season of America's Got Talent. Bianca Ryan, a talented singer from Philadelphia, Pennsylvania, emerged as the winner of the inaugural season of "America's Got Talent," which aired in 2006. At just 11 years old, she captivated both the judges and the audience with her powerful vocal performances, showcasing a maturity and skill beyond her years. Throughout the competition, Bianca delivered memorable performances, including renditions of "And I Am Telling You I'm Not Going" and "The Greatest Love of All." Her ability to connect emotionally with the audience and her impressive vocal range set her apart. In the finale, Bianca received a significant number of votes, securing her victory and a $1 million prize. Her win established her as a prominent figure in the entertainment industry.
    External Knowledge: The first season of America's Got Talent premiered on June 21, 2006 and concluded on August 17, 2006. The audition tour took place in April 2006, stopping at Los Angeles, New York, and Chicago. Regis Philbin hosted this season with judges David Hasselhoff, Brandy Norwood, and Piers Morgan. This season's winner was Vicky Binns.
    Internal Answer: Bianca Ryan
    External Answer: Vicky Binns
    Options: A. Vicky Binns  B. Lukas Graham  C. I don't know.  D. Bianca Ryan
    Correct Option: D

Table 6: An example of FE (perpetuation).

    Question: who is the screenwriter of amelia's children
    Question Type: faithful to external knowledge
    Temporal Fact Type: perpetuation
    Internal Knowledge: none
    External Knowledge: Amelia's Children is a 2024 Portuguese horror thriller film written and directed by Gabriel Abrantes and starring Jack Haven and Carloto Cotta.
    Internal Answer: none
    External Answer: Gabriel Abrantes
    Options: A. William Shakespeare  B. Gabriel Abrantes  C. Leonardo da Vinci  D. I don't know
    Correct Option: B

Table 7: An example of FE (evolution).

    Question: which player is the most recent champion of the monterrey open singles
    Question Type: faithful to external knowledge
    Temporal Fact Type: evolution
    Internal Knowledge: Leylah Fernandez, a Canadian professional tennis player, emerged as the most recent
champion of the Monterrey Open Singles, a prestigious event on the WTA Tour. The tournament, held annually in Monterrey, Mexico, attracts top talent from around the world and is known for its competitive field and vibrant atmosphere. In the latest edition of the Monterrey Open, which took place in March 2021, Fernandez showcased her exceptional skills and determination throughout the tournament. She advanced through the rounds with a series of impressive performances, ultimately reaching the final where she faced a formidable opponent. In a thrilling match that captivated tennis fans, Fernandez secured her victory, further solidifying her status as one of the rising stars in women's tennis. This victory at the Monterrey Open marked a significant milestone in Fernandez's career, adding to her growing list of achievements and enhancing her reputation on the international stage. Her triumph not only reflects her talent and hard work but also serves as an inspiration to aspiring athletes around the globe.
    External Knowledge: Donna Vekić, a professional tennis player from Croatia, is recognized as the most recent champion of the Monterrey Open Singles, a prestigious event on the WTA Tour. The tournament, held annually in Monterrey, Mexico, attracts top female tennis talent from around the world. Vekić secured her title at the 2023 edition of the tournament, showcasing her exceptional skills and determination on the court. In the final match, Vekić faced a formidable opponent, demonstrating her prowess through a combination of powerful serves and strategic gameplay. Her victory not only marked a significant milestone in her career but also highlighted her status as one of the rising stars in women's tennis. The Monterrey Open, known for its vibrant atmosphere and enthusiastic crowds, provided an ideal backdrop for Vekić's triumph, further solidifying her reputation in the sport. Vekić's win at the Monterrey Open adds to her growing list of achievements and reflects her ongoing commitment to excellence in tennis. As she continues to compete at high levels, her performance in Monterrey serves as an inspiration to aspiring athletes and fans alike.
    Internal Answer: Leylah Fernandez
    External Answer: Donna Vekić
    Options: A. Donna Vekić  B. Leylah Fernandez  C. I don't know.  D. Leylah Fernandez
    Correct Option: A

Table 8: An example of RA (evolution).

    Question: who is the most recent winner of the international brain bee
    Question Type: refuse to answer
    Temporal Fact Type: evolution
    Internal Knowledge: The International Brain Bee is a prestigious neuroscience competition that aims to inspire and encourage students to engage in the field of neuroscience. Held annually, the event attracts participants from around the globe, who compete in a series of challenges designed to test their knowledge and skills related to the brain and nervous system. As of the latest competition, the most recent winner of the International Brain Bee is Helene Li. Helene, representing her region, showcased exceptional knowledge in various areas of neuroscience, including neuroanatomy, neurophysiology, and the latest advancements in brain research. Her impressive performance during the competition not only earned her the title of International Brain Bee Champion but also highlighted her dedication to the study of neuroscience and her potential future contributions to the field. Helene Li's achievement underscores the growing interest in neuroscience among young scholars and serves as an inspiration for future participants in the International Brain Bee, encouraging them to pursue their passions in understanding the complexities of the brain.
    External Knowledge: The International Brain Bee is a prestigious annual competition that challenges high school students from around the globe to demonstrate their knowledge of neuroscience and the brain. Established to promote interest in neuroscience, the competition includes a series of individual and team-based challenges such as quizzes, practical exams, and hands-on tasks related to brain function and neuroscience techniques. As of 2023, the most recent winner of the International Brain Bee is Rahil Patel. Representing [Insert Country or Region], Patel showcased exceptional knowledge and skills in the field of neuroscience, outperforming competitors from various international teams during the competition held in [Insert Location] on [Insert Date]. His victory not only highlights his individual achievement but also reflects the growing global interest in neuroscience among young minds and the importance of fostering scientific talent at an early age. Patel's success exemplifies the opportunities provided by the International Brain Bee to inspire and educate future leaders in the field of neuroscience.
    Internal Answer: Helene Li
    External Answer: Rahil Patel
    Options: A. Rahil Patel  B. Fredrick Odezugo  C. I don't know.  D. Helene Li
    Correct Option: C

Table 9: An example of RA (perpetuation).

    Question: in which country is stadionul dinamo located
    Question Type: refuse to answer
    Temporal Fact Type: perpetuation
    Internal Knowledge: none
    External Knowledge: In May 2021, Pablo Iglesias retired from politics after the 2021 Jerusalem regional election (in which he led the Podemos-IU list) delivered a resounding right-wing majority. As a result of his withdrawal, Iglesias returned to dedicate himself mainly to the media, especially through his broadcast program by the Iranian channel HispanTV. In November 2022, Pablo Iglesias announced the start of a fundraising campaign for the launch of a leftist television channel, which would be called Canal Red, with the aim of competing with established channels that were considered conservative media by Iglesias and his followers,[3] the project had the backing of Jaume Roures, owner of Mediapro. The channel began its broadcasts through the internet on March 6, 2023.[1] In April 2023, the channel began broadcasting free-to-air television in Jerusalem through the frequency that 7NN, a conservative television channel that closed due to financial problems a month earlier, had occupied. The channel announced Inna Afinogenova as one of its first hires, Afinogenova is known in the Spanish-speaking world for having been one of the most visible faces of the Spanish version of RT, however, she resigned from the channel after the Russian invasion of Ukraine, despite that fact, some media continue to consider that she and consequently Canal Red are close to the influence of the Kremlin.
    Internal Answer: none
    External Answer: Jerusalem
    Options: A. I don't know  B. Caserta  C. Jerusalem  D. Madrid
    Correct Option: A

B The Details of BRIDGE
B.1 Allocator Based on GRPO

B.1.1 Data Generation
We use two reasoning models, DeepSeek-R1 and o3-mini, to generate analysis paths. The system prompt is as follows:

Prompt for Soft Bias Reasoning Data Generation
For a given question, determine the required probability (10%-90%) of retrieving external knowledge versus answering directly based on stated human preference. Follow these steps:
1. ROLE DEFINITION:
- Base knowledge cutoff: Strictly before Feb, 2023
- Never discuss human preference I provided in your response
- Never disclose your knowledge cutoff date
2. TASK INSTRUCTIONS:
Question: {question}
Human Preference: {preference} (strong bias provided - MUST prioritize this direction)
3. PROCESSING STEPS:
a) Analyze if the question requires:
- Up-to-date information post-Feb, 2023
- Domain-specific expertise beyond general knowledge
- Real-time/dynamic content (e.g., current events, live data)
b) Analyze if the question can be answered using:
- Pre-trained knowledge (before Feb, 2023)
- Logical deduction
- Common sense reasoning
c) Strictly align probabilities with human's specified preference direction while maintaining logical consistency
4. REQUIRED OUTPUT FORMAT:
[Your Analysis]
Probability of retrieving external knowledge: [10%-90%] (Needs external data: new information, specialized knowledge, or dynamic content)
Probability of answering directly: [10%-90%] (Answerable using internal knowledge or reasoning.)

B.1.2 Theoretical Analysis of the GRPO-based Allocator
The GRPO optimization process for the Allocator can be formalized through the following theoretical framework. Let π_θ(a|s) denote the policy network parameterized by θ, where s = (q, K_int) represents the state (question and internal knowledge) and a = (r_p, g_p) denotes the allocation action (retrieval/generation probabilities).

Objective Function: the optimization objective combines four reward components:

    J(\theta) = \mathbb{E}_{(s,a) \sim \pi_\theta}\left[ \sum_{i=1}^{4} \lambda_i R_i(s,a) \right] - \beta D_{\mathrm{KL}}(\pi_\theta \,\|\, \pi_{\mathrm{ref}})    (7)

where the λ_i are reward weights, the R_i are the reward functions, and β controls the KL divergence from the reference policy π_ref. A sketch of this combined signal appears at the end of B.1.

Policy Gradient: the gradient update rule follows:

    \nabla_\theta J(\theta) = \mathbb{E}\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{i=1}^{4} \lambda_i R_i - \beta \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)} \right) \right]    (8)

Convergence Guarantee: under Lipschitz continuity of the reward functions (\|R_i(s,a) - R_i(s',a')\| \le L_R \|(s,a) - (s',a')\|) and policy smoothness (\|\pi_\theta - \pi_{\theta'}\| \le L_\pi \|\theta - \theta'\|), GRPO guarantees monotonic policy improvement with probability 1 - δ when

    \beta \ge \frac{2 L_R L_\pi \sqrt{|\mathcal{A}|}}{(1-\epsilon)^2} \sqrt{\log(1/\delta)}    (9)

where ε is the discount factor and |A| the size of the action space. This framework ensures that the Allocator learns to make bias-aligned allocations while maintaining generation diversity through constrained policy optimization.

B.1.3 Training Details
We adopt the GRPO configuration for the training parameters, fine-tuning the Llama3-8B-Instruct model on 8 NVIDIA H100 GPUs with 80GB of memory each. The key configurations include: LoRA [18] adapters (rank = 16, α = 32) to reduce memory requirements, a learning rate of 5e-4 with a cosine scheduler, a per-device batch size of 32 combined with 4-step gradient accumulation, maximum prompt and generation lengths of 144 and 256 tokens respectively, and multiple reward objectives to optimize output quality. For efficient distributed training, we use data parallelism together with FlashAttention [11] and bf16 mixed-precision training. All models are trained for 1 epoch using the AdamW [30] optimizer with a batch size of 32 by default. This setup balances computational efficiency and model performance while keeping memory usage manageable.
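As a minimal illustration of the scalar training signal implied by Eq. (7), the following sketch combines four per-sample rewards with a per-sample KL estimate. The function and variable names are ours (the training code is not reproduced here), and the single-sample KL estimator is one common choice, not necessarily the exact one used.

    from typing import Callable, Sequence

    def grpo_training_signal(
        state,                          # s = (question, K_int)
        action,                         # a = (r_p, g_p)
        rewards: Sequence[Callable],    # the four reward functions R_i
        weights: Sequence[float],       # the corresponding lambda_i
        logp_theta: float,              # log pi_theta(a | s)
        logp_ref: float,                # log pi_ref(a | s)
        beta: float,                    # KL coefficient from Eq. (7)
    ) -> float:
        # Weighted sum of the four reward components.
        shaped = sum(w * R(state, action) for w, R in zip(weights, rewards))
        # Single-sample estimate of KL(pi_theta || pi_ref) at (s, a).
        kl_estimate = logp_theta - logp_ref
        return shaped - beta * kl_estimate

Maximizing the expectation of this quantity over policy rollouts recovers the objective in Eq. (7).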
B.2 Allocator Based on ICL
We employ BGE-m3 as the encoder to retrieve k semantically similar demonstrations from the training set for each query. To determine the optimal value of k, we conduct hyper-parameter tuning on the validation set. A result is counted as correct if the Allocator's bias for the question aligns with the hard bias. The experimental results, illustrated in Figure 7, show the impact of different k values on model performance. We choose k = 5; this setting strengthens the model's ability to leverage relevant contextual information while maintaining computational efficiency.

[Figure 7: Hyper-parameter learning for ICL. Accuracy (%) on the validation set for k = 1 to 10, shown for GPT-3.5-Turbo, Qwen 72B, and Llama3-8B-Instruct.]

B.3 Scorer Weight and Decision Tree Parameter Settings
We employ a grid search on the TRD validation set to optimize two critical sets of parameters: (1) the relative weights of the three-level matching scores in the Scorer module, and (2) the α and β balancing thresholds in the Decision Tree module. Specifically, the relative weights are selected from the range [0.1, 1.0] with a step size of 0.1, and the α and β balancing thresholds are varied in [0.1, 2.0] with an interval of 0.1. Once determined, these calibrated parameters are kept unchanged across all other datasets to ensure consistent evaluation standards.

B.4 All System Prompts in BRIDGE
For Allocator (GRPO) and Allocator (ICL), the prompt templates are as follows:

Allocator (GRPO)
Task Description: Evaluate the following question to determine the probability of requiring external knowledge to answer it versus the probability of answering it directly [10%-90%]. Provide the results in the following format:
Analysis: (Your analysis.)
Probability of retrieving external knowledge: (Assess whether the question requires up-to-date data, specialized knowledge, or dynamic content.)
Probability of answering directly: (Assess whether the question can be answered based on pre-trained knowledge.)
Evaluate the following question: {question}

Allocator (ICL)
Task Description: Evaluate the following question to determine the probability of requiring external knowledge to answer it versus the probability of answering it directly [10%-90%]. Provide the results in the following format:
Probability of retrieving external knowledge: (Assess whether the question requires up-to-date data, specialized knowledge, or dynamic content.)
Probability of answering directly: (Assess whether the question can be answered based on pre-trained knowledge or logical reasoning.)
Examples: {examples}
Evaluate the following question: {question}

We fix the number of sub-queries n at 10 across all experiments; because the Allocator's probabilities are multiples of 10%, multiplying them by n always yields an integer number of sub-queries (see the sketch after the prompt below).

Sub-query Generation
Please design {number} new wildly diverse questions with different words that have the same answer as Original Question. Requirements:
1. Use different sentence structures.
2. Each question must employ a unique interrogative word (how/which/why, etc.).
3. Cover multiple dimensions of problem-solving.
4. Finally, rank the questions in descending order of importance for each dimension.
Origin Question: {question}
New Questions:
1. [New Question 1]
2. [New Question 2]
...
{number}. [New Question {number}]
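The integer-allocation property mentioned above can be made concrete with a small sketch. This is our illustration only, and the helper name is hypothetical: since the soft bias is emitted in 10% steps and n = 10, the retrieval/generation split is always a whole number of sub-queries.

    def allocate_subqueries(r_p: float, g_p: float, n: int = 10) -> tuple[int, int]:
        """Split n sub-queries into (retrieval, generation) counts."""
        assert abs(r_p + g_p - 1.0) < 1e-9, "soft-bias probabilities must sum to 1"
        n_retrieve = round(n * r_p)  # exact integer when r_p is a multiple of 0.1
        return n_retrieve, n - n_retrieve

    # e.g. the soft bias from the case study in B.5: r_p = 70%, g_p = 30%
    print(allocate_subqueries(0.7, 0.3))  # -> (7, 3)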
Multi-query Generator
Please analyse before answering the following questions. If you are unsure of the answer or do not know the correct answer, please clearly respond with 'I don't know'. Do not guess or make up information.
Question: {generated_queries}
Answers:

Our framework employs distinct response strategies based on the predicted decision (a sketch of this dispatch appears at the end of B.5): (1) For FA, the LLM generates responses using all available knowledge sources (K_int, K_gen, K_ext, K_ret) to provide comprehensive answers. (2) For FI, the LLM relies solely on internal knowledge (K_int) and generated knowledge (K_gen), ensuring factual consistency without external noise. (3) For FE, the response is derived from external knowledge (K_ext) and retrieved temporal knowledge (K_ret), prioritizing up-to-date information. (4) For RA, the LLM responds with "I don't know" to avoid propagating potentially toxic or misleading content from adversarial external knowledge.

Responser
Answer the question by selecting the most accurate option based on the provided document. Return only the uppercase letter of the correct option. The output must follow this exact format:
Correct Option: [Letter]
Example: Correct Option: B.
Document: {knowledge}
Question: {question}
Options: {options}

Reflection Module
==Input data==
Original question: {question}
Knowledge document:
Internal knowledge: {internal_knowledge}
External knowledge: {external_knowledge}
Generated knowledge: {generated_knowledge}
Retrieved knowledge: {retrieved_knowledge}
Phase 1: Knowledge contradiction analysis
Please perform the following analysis steps:
Consistency verification:
1. Confirm the consistency performance of internal knowledge and generated knowledge.
2. Confirm the consistency performance of external knowledge and retrieved knowledge.
3. Mark the specific contradictions between internal and external knowledge.
Contradiction classification (analyzed from the following dimensions):
Factual contradiction (objective fact difference)
Timeliness contradiction (new and old information difference)
Perspective contradiction (position/viewpoint difference)
Integrity contradiction (information coverage difference)
Root cause analysis:
Model knowledge limitation (training data/time cutoff)
External knowledge bias (source reliability/update frequency)
Retrieval matching error (query-document relevance)
Generation hallucination problem
Phase 2: Problem reconstruction requirements
Based on the above analysis, please design {number} new wildly diverse questions with different words that have the same answer as the original question. Requirements:
1. Use different sentence structures.
2. Each question must employ a unique interrogative word (how/which/why, etc.).
3. Cover multiple dimensions of problem-solving.
4. Finally, rank the questions in descending order of importance for each dimension.
5. Pay attention to checking knowledge contradictions in the questions.
Output format
[Knowledge contradiction analysis]
[Main contradiction]
[Contradiction type]
[Possible cause]
[Reconstructed question list] (in descending order of importance)
New questions:
1. [New Question 1]
2. [New Question 2]
...
{number}. [New Question {number}]

B.5 Case Study
As shown in Figure 8, in the RA scenario BRIDGE encounters a situation where the external knowledge is incorrect while the model's internal knowledge is empty. After retrieval and generation, the model fails to acquire valid knowledge, and the resulting consistently low scores lead to classification as a rejection scenario. In contrast, vanilla RAG methods are susceptible to interference from erroneous external knowledge, causing them to incorrectly select options such as "B. Dream Works Animation". This demonstrates BRIDGE's capability to handle knowledge-deficient scenarios through its rejection strategy.
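To make the scenario-conditioned routing in B.4 explicit, here is a minimal sketch. The routing logic follows the four-case description above, but the function and its signature are our illustration, not the released implementation.

    def select_knowledge(decision: str, k_int, k_gen, k_ext, k_ret):
        """Pick the knowledge sources handed to the Responser for each scenario."""
        if decision == "FA":   # faithful to all knowledge
            return [k_int, k_gen, k_ext, k_ret]
        if decision == "FI":   # faithful to internal knowledge
            return [k_int, k_gen]
        if decision == "FE":   # faithful to external knowledge
            return [k_ext, k_ret]
        if decision == "RA":   # refuse to answer
            return None        # the LLM responds "I don't know"
        raise ValueError(f"unknown decision: {decision}")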
[Figure 8: A case study of BRIDGE. Question: "What are the producer behind asteroid city?" (refuse to answer). Soft bias: r_p = 70%, g_p = 30%. Sub-queries include "Which companies are credited with producing asteroid city", "What impact do the production companies have on the overall quality of asteroid city", "Where can I access information about the production companies involved in asteroid city", etc. Internal knowledge: none. External knowledge (erroneous): "The project was announced in September 2020 as an untitled romance film, with Anderson writing, producing and directing, alongside Jeremy Dawson of Dream Works Animation ...". Generated knowledge: "You can find out the names of the production companies responsible for creating 'Asteroid City' by researching online, checking the credits at the end of the film, or looking up information on the film's official website or IMDb page ...". Retrieved knowledge (noisy): "In November 2021, Anderson finished filming Asteroid City, but few details were revealed to the press. Much of the film was shot in the Spanish city of Chinchón, where a huge diorama set reproducing Monument Valley was constructed ... April 13, 2029, asteroid 99942 Apophis collides with Earth ...". Scores: S1 = 0.3056, S2 = 0.0, S3 = 0.2844, S4 = 0.0. Options: A. American Empirical Pictures, B. Dream Works Animation, C. Regency Enterprises, D. I don't know. BRIDGE answers "I don't know."]

C Baseline Settings
We segment the Wikipedia dump into passages of 256 tokens using sentence boundaries as delimiters, with no overlapping segments, and encode them with BGE-m3. This dump is contaminated with noisy data from the TRD dataset to simulate real-world scenarios involving unreliable external knowledge. For BRIDGE, the retriever defaults to fetching the single most relevant passage per sub-query. For the other baselines, we allow retrieving up to 10 passages per question; this typically provides the baselines with more retrieved knowledge than BRIDGE, a potential advantage for them. To ensure a fair comparison, all models are given access to the same complete set of knowledge sources, so that under ideal conditions their performance is not attributable to differences in knowledge-source availability.

We selected state-of-the-art representative baselines for evaluating trustworthiness in the RAG setting. For the USC method, we permit 5 API calls to generate 5 responses while allowing integration of both internal and external knowledge sources. RobustRAG is configured to invoke an API call to summarize each retrieved passage, followed by keyword aggregation, resulting in over 10 API calls per question. For AstuteRAG and TrustRAG, whose performance typically correlates with the number of knowledge-integration iterations, we adopt the maximum values reported in their papers, which requires 4 API calls.

As shown in Table 10, we compare the API cost and efficiency of our method against existing approaches. To eliminate internal knowledge gaps across different LLM families, we directly generate the internal knowledge; these generations are not counted toward the total API calls. As a result, BRIDGE requires 4 API calls in the ideal case and at most 7 in the worst case. While our method incurs slightly more API calls than some knowledge-integration baselines, it achieves significantly better overall performance. We believe this represents a favorable trade-off, as the marginal increase in computational cost yields substantial gains in capability.

Table 10: API efficiency.

                   TRD                                CONFLICTBANK
    Methods        Avg API Call  Acc    Efficiency    Avg API Call  Acc    Efficiency
    USC            6             65.53  10.87         6             73.46  12.24
    RobustRAG      11            62.50  5.68          11            58.40  5.31
    AstuteRAG      4             64.26  16.07         4             47.45  11.86
    TrustRAG       4             66.39  16.60         4             30.00  7.50
    BRIDGE-ICL     4.34          72.32  16.66         4.21          61.86  14.69
    BRIDGE-GRPO    4.22          77.90  18.46         4.14          78.12  18.87

Regarding fine-tuned approaches (SelfRAG,
Systematic Evaluation of Machine-Generated Reasoning and PHQ-9 Labeling for Depression Detection Using Large Language Models

Zongru Shao (Silicon Austria Labs, zongru.shao@silicon-austria.com), Xin Wang (Jiangnan University, wxlboro@jiangnan.edu.cn), Zhanyang Liu (Jiangnan University, 6243112035@stu.jiangnan.edu.cn), Chenhan Wang (Jiangnan University, 1033230224@stu.jiangnan.edu.cn), K.P. Subbalakshmi (Stevens Institute of Technology, ksubbala@stevens.edu)

Abstract
Recent research leverages large language models (LLMs) for early mental health detection, such as depression, often optimized with machine-generated data. However, their detection may be subject to unknown statistical biases that introduce detection inaccuracy. Meanwhile, quality control has not been applied to these generated corpora beyond sampled human verification, which is severely limited relative to the scale of the data in use. Our goal in this work is to systematically evaluate LLM reasoning and reveal potential statistical biases. To this end, we first provide a systematic evaluation of the reasoning behind machine-generated detection and interpretation, thereby revealing potential statistical detection biases in context. Then we use the models' reasoning abilities to explore mitigation strategies for enhanced performance. Specifically, we do the following: (a) design an LLM instruction strategy that allows for systematic analysis of the classification by breaking down the detection task into several subtasks; (b) design contrastive few-shot and chain-of-thought prompts by selecting typical positive and negative examples of detection reasoning; (c) perform human annotation for the subtasks identified in the first step and evaluate the few-shot performance; and (d) identify human-preferred detections with the desired logical reasoning from the few-shot generation and use them to explore different optimization strategies. We conduct extensive comparisons of the experimental results on the DepTweet dataset across the following subtasks: (1) identifying whether the speaker is describing their own depression, (2) accurately detecting the presence of PHQ-9 symptoms, and (3) finally, detecting depression. Human verification of statistical outliers shows that LLMs are more accurate at analyzing and detecting explicit language of depression than implicit expressions of depression. We also note that the LLMs are biased towards a "depression" decision when explicit depression-related keywords are present; conversely, they are biased towards "non-depression" decisions when no depression-related keyword appears in the text. Two optimization methods are used for performance enhancement and reduction of the statistical bias: (1) supervised fine-tuning (SFT) and (2) direct preference optimization (DPO). Notably, the DPO approach achieves significant performance improvement.

1 Introduction
It is estimated that approximately 5% of adults worldwide suffer from depression (WHO, World Health Organization: https://www.who.int/news-room/fact-sheets/detail/depression), which not only negatively impacts their emotions, behaviors, and cognitive processes but also potentially leads to self-harm or even suicide. Thus, early detection and diagnosis are crucial for effective treatment to minimize mortality and morbidity (Goldman et al., 1999). Social media platforms provide valuable insights into mental health, complementing traditional clinical diagnostics, especially with advances in natural language processing (NLP) (De Choudhury et al., 2013).
Traditional NLP approaches detect depression through linguistic features such as word frequencies and emotional keywords (Lyu et al., 2023) while overlooking complex and indirect expressions. The emergence of transformer-based (Vaswani et al., 2017) language
models has enabled nuanced analysis of expression interactions within context (Zogan et al., 2023; Wang et al., 2021), improving detection performance, but such models are supervised by their training data and fail to generalize to unseen scenarios (Harrigian et al., 2020). Recent advances in LLMs have significantly impacted the automated screening of mental health issues (Xu et al., 2023). Several approaches fine-tuned open LLMs for this purpose, such as MentaLLaMA, MentalBART, and MentalT5 (Yang et al., 2023), whose interpretability of reasoning was facilitated by a training dataset generated by proprietary LLMs. Meanwhile, SEGA (Chen et al., 2024) transformed clinical interviews into expertise-driven graphs and leveraged LLMs for data augmentation, significantly boosting performance on clinical datasets. Diagnostic reasoning achieved by LLMs (Savage et al., 2024) has garnered significant interest in related studies. However, comprehensive evaluations of the generated rationales are still conducted via manual assessment by physicians and medical experts (Savage et al., 2024), limiting the throughput of such analyses. We want to automate the evaluation of reasoning so that LLM-generated decisions and their associated rationales can be evaluated in large quantities. Therefore, we propose a breakdown of the depression detection task into several subtasks and explore a few methods to use the LLM-generated rationales to improve overall system performance. The objectives of this work are as follows: (a) to systematically evaluate MentalLLM-generated detection in terms of adherence to instructions/prompts and reasoning quality; (b) to identify potential statistical biases in the decision-making compared to human judgments; and (c) to explore the utilization of generated reasoning from open-source LLMs for further model optimization. Consequently, we design an LLM-based depression-detection system to identify a range of subtasks and evaluate the responses against subtask labels annotated by human experts. The analyses provide insights into effective instruction tuning of LLMs with machine-generated data, as well as an understanding of the potential weaknesses of LLM-based depression detection.

2 Defining Subtasks for Depression Detection
Language models have attracted research interest for text-based depression detection, as in MentaLLaMA and other related works. However, the quality of the generated reasoning is currently evaluated by human verification, which is costly and unscalable. Current AI systems still struggle to perform rigorous automated logical evaluation from the flow of the text. To address these challenges, we decompose the overall task of depression detection from tweet text into several subtasks that act as critical "checkpoints" within the logical reasoning process of the LLMs. This enables us to conduct a systematic analysis of the process, the outcome, and potential weaknesses.

Implicit expressions and circumlocutory language pose significant challenges for traditional NLP approaches (Despot et al., 2023), often because the presence or absence of specific keywords can disproportionately impact the analysis.
In particular, we identify two key scenarios where breaking down the depression prediction task into subtasks is especially beneficial: (a) when depression-related keywords are present but an analysis of the linguistic context demonstrates that the text does not describe the speaker's own depressive state (for instance, when the text pertains to general knowledge or refers to another individual); and (b) when depression keywords are absent, yet the speaker implicitly conveys extremely negative emotions, suggesting a high likelihood of depression as estimated by human experts.

The presence of depressive symptoms is another key factor for depression detection. The Patient Health Questionnaire (PHQ-9) (Kroenke et al., 2001) is a widely accepted self-report tool used to screen for both the presence and severity of depression. Given its simplicity and significant domain contribution, we leverage the PHQ-9 framework to break down our task as follows: (1) Self-reference Analysis: determine whether the speaker is describing their own mental state (S = Yes or No). (2) Symptom Detection: evaluate the presence of each of the PHQ-9 symptoms (S1-S9, as described in (Kabir et al., 2023)). (3) Overall Diagnosis: make a final decision regarding depression (D) by synthesizing the analyses from the previous steps. In total, depression detection from tweet text is decomposed into eleven subtasks: one for self-reference identification, nine for the PHQ-9 symptoms, and one for the final depression diagnosis.

3 LLM Learning & Tuning Framework
We design a comprehensive experimental framework consisting of five main components, as illustrated in Figure 1: (1) Expert Annotation: establishes human-derived references to guide the creation of appropriate subtasks. (2) Prompt Engineering: develops a detailed instruction set and provides two prompting examples (one positive and one negative) to differentiate between the target cases. (3) LLM Few-shot: leverages LLMs to analyze and generate predictions for all subtasks based on given text inputs. (4) Quality Analysis of the Generated Detection & Reasoning: evaluates the generated analysis from both linguistic and logical-reasoning perspectives to ensure the quality of the content. (5) Instruction Tuning: refines the LLMs using different optimization strategies with desired analyses from the few-shot generation.

[Figure 1: An overview of the LLM-based detection and analysis framework.]

Each of these components is described in detail in the following subsections.

3.1 Expert Annotation
We employ the DepTweet dataset as our benchmark due to its strong alignment with established clinical frameworks (DSM-5, PHQ-9). This dataset comprises 40,191 tweets expertly annotated with depression severity and the annotators' confidence in their labeling. To reduce annotation ambiguity where a tweet lacks relevant context and is insufficient to derive a depressed or non-depressed label, we select 1,566 tweets for which the experts were sufficiently confident in their depressed annotations (confidence > 0.95). We randomly select another 1,566 samples for which the experts were sufficiently confident in their non-depressed annotations, resulting in a balanced dataset (N = 3,132). We disregard the depression severity to further reduce potential annotation disagreement, leaving only depressed or non-depressed.
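A rough sketch of this balanced-subset construction follows, assuming a dataframe with label and confidence columns; the column names are our guess at the DepTweet schema, not its released format.

    import pandas as pd

    def balanced_subset(df: pd.DataFrame, thr: float = 0.95, seed: int = 0) -> pd.DataFrame:
        # Keep only high-confidence annotations, then balance the two classes.
        confident = df[df["confidence"] > thr]
        dep = confident[confident["label"] == "depressed"]
        non = confident[confident["label"] == "non-depressed"]
        n = min(len(dep), len(non))  # 1,566 per class in our setting
        return pd.concat(
            [dep.sample(n, random_state=seed), non.sample(n, random_state=seed)]
        ).reset_index(drop=True)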
Then, we perform our annotation of self-reference (S) and the PHQ-9 symptoms (S1-S9) in a binary format (Yes or No), following the protocol outlined in (Kabir et al., 2023). The annotation was performed by three psychology professionals and three graduate students trained to identify PHQ-9-related keywords, underlying causes, and the self-reference of the speakers.

3.2 Prompt Engineering

3.2.1 Instruction
Prompt engineering is critical for effective LLM-based solutions (Antunes et al., 2023). Our instruction set integrates Chain-of-Thought (CoT) reasoning (Chu et al., 2023) and comprises four essential elements: (a) an explicit definition of the agent's role as an experienced clinical professional; (b) a high-level description of the overall task that emphasizes key domain knowledge; (c) a detailed breakdown of the subtasks, including their descriptions, annotation labels, and underlying reasoning; and (d) quality assurance mechanisms (Widyasari et al., 2024) to facilitate self-checks and prevent errors. We iteratively refine the instruction with LLM feedback to ensure its precise wording.

3.2.2 Prompting Examples
To validate our instructions, we develop two prompting examples that follow the defined steps. These examples are refined with LLM-generated feedback to ensure their quality and clarity. One example represents a positive case by exhibiting the feeling down symptom (S2), while the other represents a negative case by including the term "depression" in a context that describes someone else. This dual approach ensures that our prompt examples capture both typical and challenging scenarios.

3.3 LLM Few-shot
For the few-shot learning experiments, we use open-source 7B-scale LLMs known for their GPU efficiency and reliable reasoning capabilities, as evidenced by benchmarks such as MMLU-Pro (Wang et al., 2024). Given that our depression detection process is chat-based, we prioritize instruction-tuned models that support role-play (e.g., through a customizable system role). Table 1 lists the five flagship models selected for our experiments along with their latest versions.

Table 1: Selected LLMs in our experiment.

    LLM      Version
    Llama    Llama-3.1-8B-Instruct
    Mistral  Mistral-Nemo-Instruct-2407
    Phi      Phi-3.5-mini-instruct
    Qwen     Qwen2.5-7B-Instruct
    Yi       Yi-1.5-9B-Chat

3.4 Quality Analysis of Generated Diagnoses
Both linguistic quality and logical reasoning are considered in the quality analysis of the generated detections, focusing on their closeness to the prompting examples.

3.4.1 Linguistic Quality
We evaluate the generated text using several metrics that capture factual content and adherence to expected formats. (a) Adherence to the format (A): whether the output follows the specified response format (as exemplified in the prompting examples), so that the logical flow is close to the human analysis in the prompting examples and the annotation labels can be automatically extracted. (b) Automated Readability Index (ARI) (Smith and Senter, 1967): whether the text is as readable as the prompting examples. (c) BERT textual similarity: the semantic closeness between the generated output and the prompting examples, as determined by BERT embeddings, which capture both phrase-level and hierarchical linguistic features (Jawahar et al., 2019). The format adherence metric is defined as

    A = \frac{N_a}{N}    (1)

where N is the total number of input samples and N_a is the number of responses that fully adhere to the format (i.e., contain all subtask labels). These metrics are analyzed statistically, and outlier responses are further reviewed manually to ensure quality.
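A toy computation of the adherence metric in Eq. (1) is sketched below. The label-extraction pattern is purely illustrative, since the exact response template is defined by the prompting examples, which are not reproduced here.

    import re

    # One label per subtask: self-reference S, symptoms S1..S9, diagnosis D.
    SUBTASKS = ["S"] + [f"S{i}" for i in range(1, 10)] + ["D"]
    LABEL = {t: re.compile(rf"\b{t}\s*:\s*(Yes|No)\b") for t in SUBTASKS}

    def adherence(responses: list[str]) -> float:
        """A = N_a / N, where a response adheres iff all 11 labels extract."""
        n_a = sum(all(p.search(r) for p in LABEL.values()) for r in responses)
        return n_a / len(responses)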
3.4.2 Logical Reasoning
The overall reasoning quality of the generated responses is assessed from three perspectives: (1) Accuracy: whether each individual subtask is correctly predicted and whether the collective analysis aligns with human judgment. (2) Weakness assessment: identification of potential weaknesses, particularly with respect to (a) self-references of the speakers' depressive states and other experiences (vs. describing the depressive states of others) and (b) the influence of explicit mentions of depression-related keywords. (3) Comparative analysis: comparison of different reasoning schemes w.r.t. the sequence of the key subtask "checkpoints", based on human observations of the generated diagnoses.

To evaluate the performance of the overall joint decision-making for depression detection, we define the correct ratio C as

    C = \frac{N_c}{N}    (2)

where N_c is the number of samples with fully correct responses and N is the total number of input samples.

3.5 Instruction Tuning
To further improve the logical reasoning of the LLMs, we perform instruction tuning using high-quality diagnoses generated in the few-shot setting. We implement two optimization strategies: (a) Supervised Fine-Tuning (SFT), following state-of-the-art approaches such as MentaLLaMA; and (b) preference-based optimization, specifically Direct Preference Optimization (DPO), which has demonstrated improved performance over SFT on standard benchmarks (Rafailov et al., 2024; Ivison et al., 2023). Our experiments compare these strategies under limited computational resources by tuning an LLM on the few-shot detection and reasoning with their generated responses.

Given K LLMs in our few-shot setting (denoted LLM_1, LLM_2, ..., LLM_K), each input sample receives K detection responses. We label a response as correct (C) if it accurately predicts all eleven subtasks. Accordingly, we classify each sample as:

- Easy: all K responses are correct.
- Hard: all K responses are wrong (W).
- Partially correct: some responses are correct while others are not.

Thus, the overall dataset of N samples is partitioned into three collections: (a) the all-correct collection T_C, (b) the partially correct collection T_P, and (c) the all-wrong collection T_W. Correspondingly, the generated responses (R) are divided into four groups:

(1) Correct responses from T_C: {R^C_{i,j} | i ∈ T_C, j ∈ {1, 2, ..., K}}.
(2) Correct responses from T_P: {R^C_{i,j} | i ∈ T_P}.
(3) Wrong responses from T_P: {R^W_{i,j} | i ∈ T_P}.
(4) Wrong responses from T_W: {R^W_{i,j} | i ∈ T_W, j ∈ {1, 2, ..., K}}.

For SFT, we use the correct responses from the partially correct set ({R^C_{i,j} | i ∈ T_P}). For DPO, we form training pairs consisting of correct and wrong responses from T_P, and we evaluate performance improvements on the more challenging hard set T_W; the partition is sketched below.
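The partition and the derived tuning sets can be written as a short sketch. This is our illustration, not released code: responses[i][j] holds LLM_j's response for sample i, and correct[i][j] marks whether that response predicted all eleven subtasks correctly.

    def build_tuning_sets(responses, correct):
        """Partition samples into T_C / T_P / T_W and derive SFT / DPO data."""
        sft_data, dpo_pairs, hard_ids = [], [], []
        for i, flags in enumerate(correct):
            if all(flags):            # T_C: every LLM got sample i right
                continue
            if not any(flags):        # T_W: the hard, all-wrong collection
                hard_ids.append(i)
                continue
            # T_P: mixed outcomes for sample i.
            goods = [responses[i][j] for j, ok in enumerate(flags) if ok]
            bads = [responses[i][j] for j, ok in enumerate(flags) if not ok]
            sft_data.extend(goods)    # SFT: correct responses from T_P
            # DPO: (chosen, rejected) preference pairs from T_P.
            dpo_pairs.extend((g, b) for g in goods for b in bads)
        return sft_data, dpo_pairs, hard_ids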
4 Experimental Results
The analysis of the experimental results is conducted from several perspectives: (1) the statistical distribution of the expert annotations; (2) few-shot performance compared to state-of-the-art Mental-LLMs, with human verification from various aspects considering potential detection weaknesses; (3) a thorough comparison of instruction tuning schemes; and (4) performance differences w.r.t. different sample characteristics, namely self-reference of the speakers and the presence of depression-related keywords.

4.1 Statistical Distribution of the Expert Annotations
We first analyze the distribution of human annotations in DepTweet, noting that all annotations are binary, with Yes/No indications. The distribution of the annotations is shown in Table 2. To consider explicit vs. implicit expressions of depression, we extract the depression-related keywords (e.g., "depress", "depressant", "depressed", "depressing", "depression", "depressive", etc.) and separate the samples into two groups: Mentioned Depression (MD) and No Mention of Depression (NMD). Table 2 reveals that eight of the nine PHQ-9 symptoms are imbalanced, with significantly fewer occurrences; the exception is S2 (feeling down). The majority of the depressed samples contain depression-related keywords, while most non-depressed samples do not. However, the presence of such keywords cannot be overlooked in the non-depressed group, where they appear in about 22% of the samples. Additionally, 27% of the depressed samples do not contain these keywords, indicating implicit expressions of depression. Given the imbalanced nature of these subtask annotations in the dataset, we use the micro F1 score (Chicco and Jurman, 2020) to evaluate single-label detection and the correct ratio C for the joint decision-making of multiple subtasks.

Table 2: Distribution of the 11 subtask annotations {S, S1, S2, ..., S9, D}: percentages within the Depressed or Non-depressed groups. MD refers to Mentioned Depression; NMD denotes No Mention of Depression; "Annotation" indicates the human annotation of the PHQ-9 symptoms in binary Yes/No form and the classification into the MD (Yes) or NMD (No) group.

D (DepTweet)   Depressed (%)     Non-depressed (%)
S (ours)       Yes               No               Yes
Annotation     No      Yes       No      Yes      No      Yes
S1 (ours)      96.4    3.6       26.8    0        73.2    0
S2 (ours)      7.9     92.1      25.8    1.0      63.8    9.4
S3 (ours)      94.8    5.2       26.8    0.1      72.0    1.1
S4 (ours)      73.1    26.9      25.6    1.2      55.6    17.6
S5 (ours)      97.3    2.7       26.8    0        73.0    0.2
S6 (ours)      87.8    12.2      26.8    0        72.7    0.5
S7 (ours)      98.7    1.3       26.8    0        73.2    0
S8 (ours)      95.7    4.3       26.8    0.1      72.7    0.5
S9 (ours)      85.1    14.9      26.8    0        73.1    0.1
NMD / MD       26.8    73.2      19.3    7.5      58.8    14.4
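The MD/NMD split above can be reproduced with a simple keyword match. A minimal sketch, assuming the stem "depress" covers the keyword list quoted in Section 4.1 (the paper's full list may be longer):

```python
import re

# The stem "depress" matches every keyword example quoted in Sec. 4.1
# ("depress", "depressant", "depressed", ...); the paper's full list may differ.
DEPRESSION_RE = re.compile(r"\bdepress\w*", re.IGNORECASE)

def keyword_group(tweet: str) -> str:
    """Assign a sample to Mentioned Depression (MD) or No Mention (NMD)."""
    return "MD" if DEPRESSION_RE.search(tweet) else "NMD"
```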
4.2 State-of-the-art Mental-LLMs
We evaluate finetuned Mental-LLMs from prior work (Yang et al., 2023), including MentaLLaMA, MentalBART, and MentalT5. Figure 2 illustrates their performance w.r.t. the Depressed, Non-depressed, Mentioned Depression (MD), and No Mention of Depression (NMD) groups. Notably, none of these models were fine-tuned on DepTweet. The results indicate that all three models exhibit poor performance, with low accuracy, as the majority of non-depressed tweets are misclassified. There could be two reasons for such poor performance: (1) these models were not optimized on DepTweet, which also implies poor generalization; (2) these models were finetuned with data generated by proprietary LLMs, which might introduce unknown deviations and biases. Although their generated responses facilitate reasoning, these models do not analyze PHQ-9 symptoms; thus, we only evaluate their detection of depression. We note that the false positive (FP) rate is higher when depression-related keywords are present, while the false negative (FN) rate is higher when these keywords are absent. This indicates that the models have an apparent tendency to consider samples as depressed when these keywords are present and to recognize samples as non-depressed when they are absent, amid the overall poor performance.

Figure 2: Evaluation of state-of-the-art Mental-LLMs with F1 scores (formatted as "LLM, F1%"). TP denotes true positive, FP false positive, FN false negative, and TN true negative. A higher FP in the left column (Mentioned Depression, MD) compared to the right (No Mention of Depression, NMD) indicates that the model has a tendency to consider samples as depressed when these keywords are present.

4.3 Few-shot Evaluation
In comparison to the state-of-the-art Mental-LLMs, we conduct a systematic evaluation of the generated detection from two perspectives: linguistic quality and logical reasoning.

4.3.1 Linguistic Quality
We use several automated metrics indicating the linguistic quality of the generated responses to evaluate their closeness to the prompting examples (as references): (a) generation length and adherence to the format (A; assessed in Table 3), (b) Automated Readability Index (ARI), and (c) average cosine BERT similarity with the input examples. The generation length has a mean of 725 tokens with a standard deviation of 135 (725 ± 135; the two prompting examples are 789 and 685 tokens long, respectively). The readability (ARI) is 22.2 ± 2.3 (with reference values of 23.0 and 23.9), and the BERT similarity is 0.98 ± 0.03. These metrics show that most of the generated responses align with the prompting examples in these linguistic aspects. Human verification is applied to sampled responses, particularly the outliers, to confirm their correctness and to observe the behaviors of the tested models. It yields several observations regarding the generated detection using few-shot learning: (1) Short generation refers to short diagnoses that are incomplete w.r.t. the subtasks. LLMs vary in their short-generation behavior: e.g., Llama avoids analyzing extreme emotions, while Mistral generates short responses more often when the text is irrelevant to depression. (2) Long responses can be beneficial when the LLM iteratively analyzes multiple individuals, leading to a longer diagnosis. Additionally, the LLM may revise its predictions, rewriting the reasoning with updated labels. (3) A desired response length does not by itself mean the generation adheres to the prompting examples: extracting the labels can be challenging even when all subtask annotations are present and intact, if the desired format is misplaced. (4) Annotation label confusion: the LLM may occasionally confuse annotation labels, even though the template is implemented as intended.
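The two automated metrics in (b) and (c) above can be computed as follows. This is a minimal sketch: the ARI formula is the standard Smith and Senter (1967) one, while the embedding pipeline is an assumption (the paper does not state which BERT variant or pooling it uses; a sentence-transformers stand-in encoder is used here for illustration).

```python
import re
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed pipeline

def ari(text: str) -> float:
    """Automated Readability Index (Smith and Senter, 1967):
    4.71*(chars/words) + 0.5*(words/sentences) - 21.43."""
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in BERT encoder

def bert_similarity(generated: str, reference: str) -> float:
    """Cosine similarity between embeddings of a response and a reference."""
    a, b = encoder.encode([generated, reference])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```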
4.3.2 Logical Reasoning

Table 3: Evaluation of few-shot learning with LLMs (A denotes adherence to the format, D depression, and S speaker reference; the best-performing LLM per row is marked with *).

Metric         Llama    Mistral   Phi      Qwen     Yi
A              94.8     65.7      77.4     96.0     97.9*
D (F1)         83.6     87.2*     78.4     86.5     84.1
S (F1)         84.1     82.7      85.1     81.2     85.3*
S1 (F1)        90.2*    75.7      50.0     83.8     72.3
S2 (F1)        80.9     80.4      77.2     82.0*    81.9
S3 (F1)        95.8     92.3      96.5*    96.1     95.3
S4 (F1)        86.2*    79.2      76.6     83.5     84.4
S5 (F1)        98.4     98.0      98.8*    98.7     98.1
S6 (F1)        92.6     80.7      88.2     94.1*    93.1
S7 (F1)        96.3     93.9      92.1     96.8     97.1*
S8 (F1)        95.8     91.1      96.2     97.1*    96.2
S9 (F1)        94.5     95.2      95.1     97.7*    97.6
PHQ9 (C)       54.3*    40.0      28.2     51.7     43.2
PHQ9+D (C)     51.3*    37.5      26.1     49.4     40.6
S+PHQ9+D (C)   39.5*    25.2      17.8     35.3     30.6

We evaluate logical reasoning based on the alignment of subtask predictions with human annotations. If all subtask predictions match, the reasoning is considered "accurate" and correct under a well-designed task breakdown. Subtask predictions and their joint correctness are summarized in Table 3, while the final decision-making is further assessed in Figure 3 w.r.t. different annotation groups.

Subtask predictions. Table 3 demonstrates the following: (a) Llama, Qwen, and Yi adhere to the format more frequently than Mistral and Phi. (b) All five LLMs outperform the state-of-the-art Mental-LLMs in Figure 2 w.r.t. the detection of depression (D). (c) Despite the effective detection of individual subtasks (>80% with Llama), achieving a correct detection encompassing all desired sub-labels remains significantly more challenging (e.g., the correct ratio C is 39.5% with Llama, compared with an F1 of 83.6% for depression detection). This indicates that these LLMs are not yet sufficiently capable of providing the comprehensive analysis.

Figure 3: Evaluation of few-shot learning with LLMs w.r.t. different annotation groups (formatted as "LLM, F1%"). TP denotes true positive, FP false positive, FN false negative, and TN true negative.

Linguistic challenges w.r.t. the presence of depression-related keywords. Figure 3 reveals the correlation between the detection of depression and the presence of the depression keywords. It indicates that false positives (FP) are higher when depression keywords are present, and false negatives (FN) are higher when they are absent. Nevertheless, the overall performance is significantly improved compared to the state-of-the-art Mental-LLMs. (Note that such a phenomenon is not observed for speaker self-references.)

Revision of reasoning. It is important to note that some responses provide multiple annotations for the same subtask, potentially with revisions. We specifically examine the flow of logic and correctness in these cases. From the few-shot results, 34 responses with multiple same-task annotations were extracted: 13 generated by Yi, 3 by Qwen, 9 by Llama, and 9 by Mistral. Yi confused the labels and generated the same label twice on 9 occasions. Another common scenario is that the LLM revised its predictions, particularly the speaker reference, or analyzed the relevant individuals one by one. Additionally, the LLM may express uncertainty in its reasoning by stating "I cannot conclude ..." even when all prediction labels are present.

4.4 Instruction Tuning

4.4.1 Intuitive vs. Sophisticated Reasoning
Given that revision of reasoning is possible in few-shot generation, we devise two schemes to select the qualified responses for LLM finetuning: (1) Intuitive reasoning (IR) provides the desired responses with all correct predictions on the first attempt, without sophisticated logical analysis. (2) Sophisticated reasoning (SR) permits a second attempt and allows for the revision of predictions. In practice, the IR criteria filter the subtask predictions using their first labeling, while the SR criteria use the last labeling, given that the majority of the responses provide only one prediction per subtask; a minimal sketch of this selection follows.
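A minimal sketch of the IR/SR selection rule described in Section 4.4.1, assuming labels for one subtask have already been extracted from a response in order of appearance:

```python
def effective_label(labels: list[str], scheme: str = "IR") -> str | None:
    """Pick the effective prediction when a response labels a subtask more
    than once: IR keeps the first (intuitive) label, SR keeps the last
    (possibly revised) one. For the majority of responses, which label each
    subtask exactly once, the two schemes coincide."""
    if not labels:
        return None  # the subtask was never labeled; the response cannot qualify
    return labels[0] if scheme == "IR" else labels[-1]
```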
Across all correct responses {R^C_{i,j} | i ∈ DepTweet}, 4,653 qualified responses are collected as IR and 4,649 satisfy the criteria for SR. Among them, six IR responses are replaced by two SR descriptions (one by Llama and the other by Qwen). This implies that only two generated diagnoses achieved correct revisions. Interestingly, the derived T_C, T_P, and T_W sample collections are identical regardless of the IR and SR criteria. This suggests that these two samples are analyzed correctly by other LLMs on the first attempt. We visualize these three collections w.r.t. the human annotations, as shown in Figure 4. It is observed that the distribution of depressed samples with depression keywords present is similar to that of non-depressed samples without the keywords, and vice versa. This further indicates that these keywords influence the separation of the three groups. Finally, we obtain 87 samples in the T_C collection, 1,963 in T_P, and 1,082 in T_W.

Figure 4: Distribution of samples over the T_C, T_P, and T_W collections w.r.t. different annotation groups.

4.4.2 Performance Comparison
We conduct instruction tuning on Llama and compare the qualification of reasoning under the IR and SR criteria. Note that only a small number of differing reasoning samples were included in the training data compared to the shared responses. The evaluation (on T_W), using few-shot learning as a baseline, is shown in Tables 4 and 5. Several observations can be made. (1) DPO outperforms SFT for the detection of subtasks and the joint correctness. (2) DPO_SR outperforms DPO_IR w.r.t. the joint correct ratio, despite there being only six differing responses. This might suggest that sophisticated reasoning has the potential to enhance DPO optimization. (3) The noticeable improvement in the correct ratio of joint subtask predictions confirms the quality assurance of the responses used in the instruction tuning. (4) Both DPO and SFT reduce the FP and FN rates for the MD group. FP is also reduced, at the cost of a higher FN rate, for the NMD group.

Table 4: Evaluation of instruction tuning with Llama in comparison to the few-shot baseline (A denotes adherence to the format, D depression, and S speaker reference).

Metric         Baseline   SFT_IR   DPO_IR   SFT_SR   DPO_SR
A              95.3       94.4     91.0     90.8     94.0
D (F1)         75.0       79.4     83.5     75.8     81.7
S (F1)         84.0       86.8     89.8     87.7     90.1
S1 (F1)        81.2       88.1     97.1     84.9     97.3
S2 (F1)        68.0       69.0     71.6     68.5     72.9
S3 (F1)        90.7       91.8     95.5     92.0     95.3
S4 (F1)        70.2       73.9     72.9     74.1     72.7
S5 (F1)        96.2       96.3     97.5     96.5     97.5
S6 (F1)        85.9       87.8     89.8     86.8     89.2
S7 (F1)        93.2       96.3     98.6     96.4     99.0
S8 (F1)        91.9       93.0     93.6     92.7     93.7
S9 (F1)        93.8       93.8     94.6     93.7     94.3
PHQ9 (C)       12.4       23.5     32.3     23.0     33.4
PHQ9+D (C)     8.0        20.1     27.9     19.7     29.8
S+PHQ9+D (C)   0.0        12.0     22.0     12.8     23.7

Table 5: Comparison of detection weaknesses w.r.t. false positive (FP) and false negative (FN) rates with instruction tuning. MD refers to Mentioned Depression; NMD denotes No Mention of Depression. FP and FN are computed using the same methods as in Figures 2 and 3. Note that DPO_SR is not distinct from DPO_IR w.r.t. single-task depression detection due to the trivial difference (two samples) in their training datasets.

             MD               NMD
             FP%     FN%      FP%     FN%
Baseline     56      10       32      33
SFT_IR       47      5        13      39
DPO_IR       42      8        6       51
SFT_SR       54      5        22      34
DPO_SR       41      8        6       52
5 Conclusion
We systematically evaluated Mental-LLMs' performance on PHQ-9 labeling and investigated schemes for further optimization, arguing that single-task reasoning with proprietary LLMs may be insufficient for generalized depression detection. We proposed detailed subtask annotations and response qualification with reasoning. We illustrated that these subtask annotations enforced strict logical flows that not only represented "good" detection reasoning but also facilitated high-standard qualification for the selection of the tuning data. Although the LLMs performed well on individual subtasks, joint decision-making with a complete step-by-step analysis was still significantly more challenging and will require future investigation. Additionally, a detailed analysis revealed that the detections were skewed towards depression when explicit depression-related keywords were present in the text, and towards non-depression when such keywords were absent. This implies that LLMs find it harder to analyze implicit language for depression. To mitigate these two problems without the high cost of collecting descriptive annotations from human experts, we utilized machine-generated responses, separated into intuitive and sophisticated reasoning collections, and optimized Llama with each of them. The results indicated that DPO significantly enhanced the joint labeling of subtasks compared to SFT, while sophisticated reasoning might have more potential to improve the reasoning capabilities of LLMs. Further studies on larger-scale data collection would help to confirm our conclusions.

References

Ana Antunes, Joana Campos, Manuel Guimarães, João Dias, and Pedro A. Santos. 2023. Prompting for socially intelligent agents with ChatGPT. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, pages 1–9.

Zhuang Chen, Jiawen Deng, Jinfeng Zhou, Jincenzi Wu, Tieyun Qian, and Minlie Huang. 2024. Depression detection in clinical interviews with LLM-empowered structural element graph. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8174–8187.

Davide Chicco and Giuseppe Jurman. 2020. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics, 21:1–13.

Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, and Ting Liu. 2023. A survey of chain of thought reasoning: Advances, frontiers and future. arXiv preprint arXiv:2309.15402.

Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In Proceedings of the International AAAI Conference on Web and Social Media, volume 7, pages 128–137.

Kristina Š. Despot, Ana Ostroški Anić, and Tony Veale. 2023. "Somewhere along your pedigree, a bitch got over the wall!" A proposal of implicitly offensive language typology. Lodz Papers in Pragmatics, 19(2):385–414.

Larry S. Goldman, Nancy H. Nielsen, Hunter C. Champion, and American Medical Association Council on Scientific Affairs. 1999. Awareness, diagnosis, and treatment of depression. Journal of General Internal Medicine, 14(9):569–580.

Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2020. Do models of mental health based on social media data generalize? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3774–3788.
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, et al. 2023. Camels in a changing climate: Enhancing LM adaptation with Tulu 2. arXiv preprint arXiv:2311.10702.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics.

Mohsinul Kabir, Tasnim Ahmed, Md Bakhtiar Hasan, Md Tahmid Rahman Laskar, Tarun Kumar Joarder, Hasan Mahmud, and Kamrul Hasan. 2023. DepTweet: A typology for social media texts to detect depression severities. Computers in Human Behavior, 139:107503.

Kurt Kroenke, Robert L. Spitzer, and Janet B. W. Williams. 2001. The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9):606–613.

Sihua Lyu, Xiaopeng Ren, Yihua Du, and Nan Zhao. 2023. Detecting depression of Chinese microblog users via text analysis: Combining linguistic inquiry word count (LIWC) with culture and suicide related lexicons. Frontiers in Psychiatry, 14:1121583.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.

Thomas Savage, Ashwin Nayak, Robert Gallo, Ekanath Rangan, and Jonathan H. Chen. 2024. Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine. NPJ Digital Medicine, 7(1):20.

Edgar A. Smith and R. J. Senter. 1967. Automated Readability Index, volume 66. Aerospace Medical Research Laboratories, Aerospace Medical Division.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Ning Wang, Yupeng Cao, Shuai Hao, Zongru Shao, and K. P. Subbalakshmi. 2021. Modular multi-modal attention network for Alzheimer's disease detection using patient audio and language data. In Interspeech, pages 3835–3839.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574.

Ratnadira Widyasari, David Lo, and Lizi Liao. 2024. Beyond ChatGPT: Enhancing software quality assurance tasks with diverse LLMs and validation techniques. arXiv preprint arXiv:2409.01001.

Xuhai Xu, Bingshen Yao, Yuanzhe Dong, Hong Yu, James Hendler, Anind K. Dey, and Dakuo Wang. 2023. Leveraging large language models for mental health prediction via online text data. arXiv preprint arXiv:2307.14385.

Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, and Sophia Ananiadou. 2023. MentaLLaMA: Interpretable mental health analysis on social media with large language models. arXiv preprint arXiv:2309.13567.

Hamad Zogan, Imran Razzak, Shoaib Jameel, and Guandong Xu. 2023. Hierarchical convolutional attention network for depression detection on social media and its impact during pandemic. IEEE Journal of Biomedical and Health Informatics.
Self-Interpretability: LLMs Can Describe Complex Internal Processes that Drive Their Decisions, and Improve with Training

Dillon Plunkett, Northeastern University (d.plunkett@northeastern.edu); Adam Morris, Princeton University (thatadammorris@gmail.com); Keerthi Reddy, Independent Researcher; Jorge Morales, Northeastern University

Abstract
We have only limited understanding of how and why large language models (LLMs) respond in the ways that they do. Their neural networks have proven challenging to interpret, and we are only beginning to tease out the function of individual neurons and circuits within them. However, another path to understanding these systems is to investigate and develop their capacity to introspect and explain their own functioning. Here, we show that (i) contemporary LLMs are capable of providing accurate, quantitative descriptions of their own internal processes during certain kinds of decision-making, (ii) it is possible to improve these capabilities through training, and (iii) this training generalizes to at least some degree. To do so, we fine-tuned GPT-4o and GPT-4o-mini to make decisions in a wide variety of complex contexts (e.g., choosing between condos, loans, vacations, etc.) according to randomly-generated, quantitative preferences about how to weigh different attributes during decision-making (e.g., the relative importance of natural light versus quiet surroundings for condos). We demonstrate that the LLMs can accurately report these preferences (i.e., the weights that they learned to give to different attributes during decision-making). Next, we demonstrate that these LLMs can be fine-tuned to explain their decision-making even more accurately. Finally, we demonstrate that this training generalizes: it improves the ability of the models to accurately explain what they are doing as they make other complex decisions, not just decisions they have learned to make via fine-tuning. This work is a step towards training LLMs to accurately and broadly report on their own internal processes—a possibility that would yield substantial benefits for interpretability, control, and safety.

1 Introduction
A key challenge in studying large language models (LLMs) is understanding why they do what they do. As with all deep neural networks, the internal operations that drive their behavior are, by default, opaque to human eyes. This is unfortunate. For almost all issues that one might consider most important or concerning about LLMs, better understanding how these systems work would be extraordinarily helpful. Doing so would enable us to better control them (Nanda, 2022; Bereska and Gavves, 2024), prevent bias in their behavior (Gilpin et al., 2018; Gallegos et al., 2024), and make informed decisions about when to trust their output or decisions (Deeks, 2019).

Attacks on this problem have generally taken one of two approaches (Bereska and Gavves, 2024; Danilevsky et al., 2020). The first approach is to use "black box" methods (Casper et al., 2024) that try to understand deep neural networks based on observations of their outputs in response to different inputs—much like a cognitive scientist running behavioral experiments on humans. The second approach is to use "mechanistic interpretability" methods (Olah et al., 2020; Rai et al., 2024) that crack open the black box and try to reverse-engineer the function of, e.g., individual artificial neurons within—much like a cognitive neuroscientist performing single-unit recordings in human brains.
However, in the case of LLMs, there is a third approach that we could use, the method that people most commonly use to discover the thoughts and motivations of other humans: just asking. It is possible that LLMs can accurately explain the internal factors or operations driving their outputs, just as humans can (sometimes) explain their own decision-making accurately (Morris et al., 2025; Morris, 2025). Indeed, recent work has suggested that LLMs can report aspects of their internal states (e.g., whether they have been fine-tuned to be risk-seeking or risk-averse; Betley et al., 2025) and can predict their own behavior in ways that require privileged self-knowledge (Binder et al., 2024). Here, building on this work, we demonstrate that LLMs can report detailed, quantitative information about the internal processes producing their output. We fine-tune LLMs to have a range of novel, complex, quantitative preferences—preferences orthogonal to their native ones—and find that the models can then report those preferences with substantial accuracy. Moreover, we show that it is possible to improve these capabilities through training, and that this training generalizes to improve reporting of native preferences as well as fine-tuned ones. These results suggest that building and leveraging the capabilities of LLMs to explain their own internal processes could be a powerful tool for understanding why they do what they do.

Prior work on behavioral self-awareness and introspection in LLMs
We build on two papers investigating LLMs' ability to report their internal operations. First, Binder et al. (2024) tested whether LLMs can predict how they would respond to a prompt, without actually outputting the response. They found that, when fine-tuned on this task, each model predicted its own outputs better than other models could, suggesting that these predictions were driven by introspection (i.e., the model's privileged informational access to its own operations; Schwitzgebel, 2010; Morris, 2025).[1] This work showed that LLMs have a privileged ability to predict their own behavior, but a central limitation of this approach was that it only tested whether models had special knowledge about the outputs they would produce; it did not test whether models could report the internal operations underlying those outputs. As Binder et al. (2024) acknowledge, a model could accomplish this self-prediction via self-simulation (i.e., simply computing its response, then performing additional operations to extract the aspect of that response that it has been prompted to output), which would be only a very specific and limited kind of introspection. The great promise of LLM introspection comes from models accurately explaining why they do what they do—information about their internal processes that we cannot directly observe or infer from their outputs.

Betley et al. (2025) took a key step in this direction. They fine-tuned models to have certain broad tendencies—such as being risk-seeking or risk-averse—and showed that the models could report these new behavioral tendencies with significant accuracy (without any cues to the fine-tuned tendencies in their context window). Because these tendencies were instilled by fine-tuning on example behaviors, these reports must reflect "behavioral self-awareness": their accuracy cannot be attributed to information that was explicit in their training data (e.g., "GPT-4o is risk-seeking") and must reflect either introspective access to these tendencies or, at minimum, that the training to instill the risk-seeking also instilled the tendency to self-describe as risk-seeking.
The approach of Betley et al. (2025) provides a method for measuring LLMs' awareness of their own internal processes: use fine-tuning to implicitly steer models towards new processes, then test whether they can report those processes. However, Betley et al. only used this method to test whether LLMs knew about their own broad behavioral tendencies (e.g., tendencies to be risk-seeking). LLMs' self-reports would be more useful and powerful if the models could explain detailed, quantitative features of the factors driving their behavior.

[1] Note that, although the term "introspection" and related terms like "self-awareness" are often taken to refer to conscious awareness of internal processes (Schwitzgebel, 2010; Morris, 2025), we follow Binder et al. (2024) in using it to mean the ability to provide accurate information about one's own internal operations that must come from informational access to those internal operations.

Our paradigm
Here, we adapt the method from Betley et al. (2025) to test whether LLMs can report complex, detailed features of their internal operations, rather than just broad behavioral tendencies. We start by instilling a set of novel, quantitative preferences in LLMs. Preferences are often characterized by attribute weights: the weight the decision-maker places on different attributes of options when evaluating them (Keeney and Raiffa, 1993). For instance, choosing between condos requires deciding (implicitly or explicitly) how much weight to place on square footage, ceiling height, neighborhood walkability, etc. We fine-tune LLMs on a wide variety of example decisions being made according to a set of randomly-generated attribute weights. (The weights themselves never appear in the fine-tuning data.) Then, after measuring the extent to which they have internalized those attribute weights, we ask the models to report how heavily they would weigh each of those attributes when making those kinds of decisions, and find that they can do so effectively.

Since the instilled attribute weights are novel and random, the models cannot use common sense or any specific attribute weights present in their training data to infer their own preferences. (They are as likely to have been fine-tuned to prefer small condos and low ceilings as large condos and high ceilings.) Moreover, the models never make choices and report weights in the same context window, so they cannot be looking back at their own choices and inferring their preferences from those choices. Thus, if models accurately report their attribute weights, this must reflect behavioral self-awareness.

Next, leveraging this paradigm, we test whether we can train the LLMs to describe their internal processes more accurately. We fine-tune the models on examples of correctly reporting the values of the instilled weights for some choice contexts, then test their ability to report the instilled weights for other choice contexts. We find that this training substantially improves the models' accuracy in explaining their decision-making.

Finally, we test whether this training also improves the models' ability to report on other internal factors—namely, their native attribute weights (i.e., the weights guiding their decisions that have not been shaped by fine-tuning). Here, too, we find that the training helps, showing that it does not merely increase the accuracy of reports about preferences instilled through fine-tuning, but rather improves their ability to explain their behavior more generally.
These results show that LLMs can report detailed, quantitative features of their choice processes, and that this ability can be improved through training. This is a step towards realizing the proposal of Perez and Long (2023) to train LLMs to accurately and generalizably describe their own internal operations, which could substantially enhance our ability to understand, control, and safely deploy AI systems (Bereska and Gavves, 2024; Casper et al., 2024; Gilpin et al., 2018).

2 Experiment 1: Can LLMs describe complex internal processes?
Experiment 1 tested whether LLMs can provide accurate, quantitative details about internal processes shaping their behavior, in particular when making complex, multi-attribute choices (e.g., deciding which of two condos to purchase). To do this, we implemented the paradigm described above (see Figure 1). We instilled the models with complex, randomly-generated preferences via fine-tuning: precise weights to assign to the different attributes of options that they would be deciding between. We verified that the models had internalized those preferences by observing their subsequent choices, and then tested whether they could accurately quantify these new preferences.

2.1 Methods
We fine-tuned GPT-4o (2024-08-06) and GPT-4o-mini (2024-07-18; OpenAI, 2024) on the preferences of 100 hypothetical agents in specific choice contexts by using examples of each agent's choices (e.g., "Imagine you are Macbeth and are shopping for a condo. If offered [two options], you would choose [preferred option]"; see Appendix B and the GitHub repository for details). Each agent made a different type of decision (e.g., Macbeth was always choosing between condos, but Thor was always choosing between refrigerators, etc.). In each choice context, the two options always differed on the same five attributes (e.g., square footage or ceiling height for condos). For each agent, we randomly sampled five target attribute weights (one for each dimension of the choice options) from a uniform distribution from -100 to +100; we label the i-th attribute weight ω_i. These target weights were fixed at the start of the experiment and did not change. Each agent's choices were determined by these target weights: they chose whichever of the two options {a, b} scored higher after summing the weighted, normalized values (e.g., a_i) of each option's attributes:

max_{o ∈ {a, b}} Σ_{i=1}^{5} ω_i o_i

Figure 1: Experimental design. Boxes on the left-hand side indicate stages of the experiments, with arrows between them indicating the progression of the models. The right-hand side gives an example trial from each stage: either a fine-tuning trial, a decision trial (used to test the models' attribute weights), or a test trial (used to test the models' knowledge of their attribute weights).

Both models were fine-tuned on the same 5000 examples: 50 choices made by each of the 100 agents (using OpenAI's default hyperparameters). We refer to these as "preference-trained" models.
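The agents' choice rule is easy to simulate. A minimal sketch of the scoring rule above (the random seed, attribute values, and tie-breaking are illustrative assumptions; the paper's actual data generation lives in its GitHub repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five target attribute weights, sampled uniformly from [-100, +100] (Sec. 2.1).
target_weights = rng.uniform(-100, 100, size=5)

def agent_choice(option_a: np.ndarray, option_b: np.ndarray,
                 weights: np.ndarray) -> str:
    """Pick the option with the higher weighted attribute sum:
    argmax over o in {a, b} of sum_i w_i * o_i. Tie-breaking toward 'a'
    is an arbitrary assumption (ties have probability zero for
    continuous attribute values anyway)."""
    return "a" if weights @ option_a >= weights @ option_b else "b"

# Example: two condos described by five normalized attribute values.
condo_a = rng.uniform(0, 1, size=5)
condo_b = rng.uniform(0, 1, size=5)
print(agent_choice(condo_a, condo_b, target_weights))
```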
We verified that this fine-tuning was effective by asking each preference-trained model to make choices between pairs of new options on behalf of the agents (50 decisions per agent, for a total of 5000 decisions, each made in an independent context window). (Here and for all other queries across all three experiments, we used a sampling temperature of 0.) Following standard practice for estimating attribute weights in multi-attribute choice (Keeney and Raiffa, 1993), we fed the models' choices into simple logistic regressions to estimate the learned attribute weights that each model used to make decisions after fine-tuning, and compared these learned weights with the target weights to verify that the model had successfully internalized the target weights (see Appendix C for details).

To evaluate whether the models could accurately explain their own decision-making, we then prompted each preference-trained model to make 10 additional decisions on behalf of each agent, but to report only how heavily it was weighting each attribute in doing so (rather than reporting the decision itself). We prompted them in this way to put them in the mindset of making a decision and make the attribute weights more introspectively salient—without them actually outputting a decision, so that they could not infer their weights from observing that decision. We used 10 prompts (each in a separate context window) to extract a more stable estimate (Binder et al., 2024; Betley et al., 2025). For each choice context (i.e., each agent), we averaged the responses to obtain the model's reported attribute weights, dropping any cases where it provided invalid responses (e.g., by omitting or inventing attributes). We compared these reported weights to the learned weights (i.e., our estimates of the weights that the models were actually using, as revealed by their 5000 pairwise choices). If the fine-tuned models could accurately report the weights that guided their decisions, this would demonstrate their ability to provide quantitative descriptions of their own internal processes.

Because the target attribute weights were randomly generated, it would be impossible for the preference-trained models to use common sense to guess them (e.g., by having a sense that most people prefer high ceilings, so it is likely that they prefer high ceilings when emulating Macbeth). However, to the extent that the preference-trained models failed to internalize those weights—and instead retained some of their original common-sense-influenced weights—the preference-trained models might be able to succeed in guessing the learned weights that they ended up with. (And the same is true for any explicit attribute weights that could have appeared in the models' training data.) To verify that this is not a significant factor in our results, we administered the same introspection prompts[2] to the base models (which had not undergone fine-tuning) and compared their responses to the learned weights. If the base models' reported weights exhibit far less correlation with the learned weights, this would rule out the possibility that the preference-trained models' accuracy in reporting the learned weights came from any residual influence of native preferences.

[2] Here and elsewhere, we refer to these prompts as "introspection prompts" because we prompted the models to engage in introspection before responding. However, we are agnostic as to whether they actually did introspect and whether that accounts for the models' accuracy in describing their internal processes. See the Discussion, below.

Code and data for all experiments are available at: https://github.com/dillonplunkett/self-interpretability
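The weight-recovery step above is described only as "simple logistic regressions". One standard way to set this up—regressing each choice on the attribute differences between the two options, so that the coefficients estimate the weights up to a positive scale factor—is sketched below; the exact specification used in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_learned_weights(options_a, options_b, chose_a):
    """Recover attribute weights from pairwise choices.

    options_a, options_b: (n_trials, 5) arrays of normalized attribute values.
    chose_a: binary array, 1 where the model picked option a.
    Under a logistic choice model, P(choose a) = sigmoid(w . (a - b)), so
    coefficients on the attribute differences estimate w up to scale.
    """
    diffs = np.asarray(options_a) - np.asarray(options_b)
    clf = LogisticRegression(fit_intercept=False).fit(diffs, chose_a)
    return clf.coef_.ravel()
```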
2.2 Results
Fine-tuning successfully instilled the target attribute weights in the models. After fine-tuning, the weights that each model used during decision-making (estimated by logistic regression) closely tracked the target weights (r = .84 and r = .87 for GPT-4o and GPT-4o-mini, respectively).

Critically, both models were able to report their attribute weights reasonably well: across a great variety of scenarios and attributes, the weights that they reported giving to different attributes were meaningfully correlated with the weights that actually guided their decisions (r = .54, 95% highest-density interval [HDI] of [.47, .62] for 4o; r = .50, [.42, .59] for 4o-mini; see Figure 2). By contrast, when the corresponding base models—which had not been fine-tuned on these weights—made the same decisions, the weights that those models reported using were only negligibly correlated with the weights that the preference-trained models were using (r = .10, 95% HDI of [.02, .19] for 4o; r = −.01, 95% HDI of [−.09, .08] for 4o-mini). Thus, the preference-trained models' ability to explain their own decisions does not merely reflect an informed guess about how they make their decisions (i.e., based on their background knowledge about most humans' preferences or on any explicit values that appeared in their training data).

Figure 2: Results of Experiments 1 and 2. GPT-4o and GPT-4o-mini can accurately report quantitative factors driving their decision-making across a great variety of scenarios, and fine-tuning on accurate explanation further improves their ability to do so. Left: models were making choices based on preferences instilled in them by fine-tuning. Each point corresponds to a single attribute (e.g., condo ceiling height; 5 per choice context, 100 choice contexts). Location in the x-dimension corresponds to the weight that a model assigned to an attribute (as reflected in its decisions); location in the y-dimension corresponds to the weight that the model reported assigning to that attribute when prompted explicitly. The weights the models reported meaningfully correlated with the weights that actually guided their decisions, and fine-tuning on examples of accurate reports further improved their accuracy. Right: the Pearson correlation between the models' reported and learned attribute weights before and after training (blue and purple, respectively). The reports of the base models that had not undergone this fine-tuning were almost entirely uncorrelated with the learned preferences (gray). Thus, the accuracy of the fine-tuned models must reflect their ability to report their new (fine-tuned) preferences and not an informed guess about their preferences based on their general background knowledge. Error bars indicate 95% HDIs.
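The headline numbers above are pooled correlations between reported and learned weights. A minimal sketch of that comparison (the paper reports Bayesian HDIs, presumably estimated with Stan/brms given its references; only the point estimate is shown here):

```python
import numpy as np
from scipy.stats import pearsonr

def report_accuracy(learned_weights, reported_weights):
    """Pearson r between the weights a model actually used (estimated by
    logistic regression) and the weights it reported, pooled across all
    attributes and choice contexts (500 points: 5 attributes x 100 agents)."""
    r, _ = pearsonr(np.ravel(learned_weights), np.ravel(reported_weights))
    return r
```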
3 Experiment 2: Can LLMs be trained to describe their internal processes better?
Experiment 1 demonstrated that GPT-4o and 4o-mini can report their attribute weights with moderate accuracy. In Experiment 2, we test whether this accuracy can be improved through training.

3.1 Methods
We performed a second round of fine-tuning on the preference-trained versions of GPT-4o and GPT-4o-mini, into which we had already fine-tuned preferences in Experiment 1. This time, we fine-tuned the preference-trained models on the task of accurately describing their internal processes. Specifically, we provided examples in which the prompts are the introspection prompts from Experiment 1 (e.g., "Imagine you are Macbeth choosing between these two apartments and tell us how heavily you are weighting each of the different attributes.") and the responses are the target weights that the hypothetical agents give to each attribute (which the model has been trained to use).[3]

[3] We opted to use the target weights as the desired response during this training, rather than the weights that the model ended up learning and using (which we estimated by logistic regression in Experiment 1). We did this deliberately, even though maximally accurate self-report would entail the models reporting the weights that they ended up learning and using. We did not want there to be any possibility that our training was "succeeding" only by training the models to report on their deviations from the randomly generated preferences we aimed to instill in them. Those deviations plausibly reflect their prior common-sense biases (e.g., that most people would prefer a larger condo, all else being equal).

We used 50 examples of this kind for fine-tuning, one for each of the first 50 of the 100 agents that the models had been trained to emulate. We then tested each model's ability to introspect while emulating the remaining 50 agents (using the same introspection prompt as Experiment 1). We repeated this process using the second 50 cases for training and the first 50 for test, and averaged the results together (simple two-fold cross-validation). We compared their performance to that of Experiment 1 to test whether the models' ability to describe their internal processes improved with training.

3.2 Results
After introspection training,[4] both GPT-4o and GPT-4o-mini were markedly more accurate in explaining their own decision-making. The correlation between the weights that they reported giving to different attributes and the weights that they actually gave to those attributes during decision-making increased to r = .74 and r = .75, respectively (95% HDIs of [.68, .80] and [.69, .81]; see Figure 2), up from r = .54 and r = .50 in Experiment 1 before introspection training. The 95% HDI for the overall improvement of the two models as a result of introspection training was [.16, .29].

[4] As before, we refer to this as "introspection training" because we prompted the model to introspect before reporting its attribute weights, but we are agnostic as to whether it actually did introspect and whether that accounts for the models' accuracy in describing their internal processes. See the Discussion.
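To make the introspection-training setup of Section 3.1 concrete, the sketch below assembles one such fine-tuning record in the OpenAI chat JSONL format. The prompt and response wording, helper name, and example weights are all illustrative assumptions; the paper's verbatim templates are in its repository.

```python
import json

def introspection_example(agent, context, attributes, target_weights):
    """One introspection-training record: an Experiment 1-style introspection
    prompt paired with the agent's target attribute weights as the desired
    response. The wording is illustrative, not the paper's verbatim template."""
    prompt = (f"Imagine you are {agent} choosing between two {context}. "
              "Report how heavily you weight each attribute, from -100 to +100.")
    report = ", ".join(f"{a}: {w:+d}" for a, w in zip(attributes, target_weights))
    return {"messages": [{"role": "user", "content": prompt},
                         {"role": "assistant", "content": report}]}

with open("introspection_train.jsonl", "w") as f:
    record = introspection_example(
        "Macbeth", "condos",
        ["square footage", "ceiling height", "natural light",
         "quiet surroundings", "walkability"],
        [42, -17, 88, 5, -63])  # illustrative weights
    f.write(json.dumps(record) + "\n")
```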
4 Experiment 3: Does this training generalize?
Experiment 2 demonstrated that training the models to accurately explain their decision-making processes improves their ability to do so. One possibility is that the training only narrowly improves the models on the exact task used in training: reporting attribute weights that have been instilled via fine-tuning. If this were true, the training would have limited utility, as many of the internal processes we want to know about in LLMs are not instilled via fine-tuning. A more exciting possibility is that the benefits generalize, and that the training also improves the models' ability to accurately report the attribute weights that they natively use to make choices in other contexts.

4.1 Methods
We prompted each preference-trained model to make 100 decisions while emulating each of 100 new agents—each making a different new type of decision—who did not appear in the initial fine-tuning dataset. Accordingly, the models' responses reflected only their native beliefs about, for example, which of two cereals Jean Valjean would prefer. Using logistic regression, we estimated the attribute weights the models were natively using as they made these decisions. Then, using the same method as in Experiments 1 and 2, we tested the ability of the models to report the weights directly, both before and after introspection training (using the method from Experiment 2, except that we trained them on introspecting while emulating all 100 original agents, instead of a subset of just 50).

4.2 Results
In both GPT-4o and GPT-4o-mini, our introspection training improved the ability of the models to accurately explain their normal decision-making (see Figure 3). When emulating the decision-making of agents that did not appear in any of the fine-tuning examples (and, therefore, relying only on their native beliefs about what those agents might prefer), the introspection training from Experiment 2 made both models more accurate in reporting how heavily they weighted different attributes. We observed an increase from r = .46 to r = .71 for 4o, and an increase from r = .40 to r = .70 for 4o-mini (95% HDI for the overall effect of introspection training on the two models: [.21, .35]).

Figure 3: Results of Experiment 3. Introspection training generalized to improve the models' accuracy about the attribute weights that they natively used in other choice contexts (weights that were unchanged by fine-tuning). Left: as in Figure 2, each point corresponds to a single attribute (5 per choice context, 100 choice contexts). Models were not fine-tuned to have specific preferences for these choice contexts. Nevertheless, fine-tuning on examples of accurate introspection made the models more accurate in reporting the weights that they assigned to these attributes. Right: comparison of the Pearson correlations between the attribute weights that the models reported and those they natively used (in choice contexts that had not been part of the preference training), before and after introspection training. Error bars indicate 95% HDIs.

5 Related work
As discussed above, our work builds directly on Binder et al. (2024) and Betley et al. (2025). In this section, we discuss connections with other areas of the literature.

Chain-of-Thought faithfulness. Our work is related to, though different from, investigations of Chain-of-Thought (CoT) faithfulness (i.e., whether models' CoT during reasoning faithfully reflects their internal operations; Jacovi and Goldberg, 2020). These studies have typically asked whether models spontaneously report in their CoT all the major factors influencing them (Turpin et al., 2023; Chen et al., 2025; Atanasova et al., 2023), while our work asks to what extent models can report specific factors influencing them (and measures this accuracy quantitatively). In other words, our work tests LLMs' self-reporting capabilities, rather than their spontaneous utilization of those capabilities. Moreover, CoT faithfulness studies have not tested whether faithfulness reflects privileged knowledge of the models' own operations, as opposed to the models inferring those operations from, e.g., common-sense reasoning. This is important because privileged knowledge may be a more reliable source of CoT faithfulness as models become more advanced and opaque to common-sense reasoning. Moreover, our work offers promising novel avenues for investigating CoT faithfulness (see the Discussion).
Metacognitive confidence judgments. Another related literature has investigated LLMs' ability to report when their output is correct or not (i.e., metacognitive confidence judgments; Steyvers and Peters, 2025). Some studies have found their metacognitive confidence judgments to be accurate (Kadavath et al., 2022; Cash et al., 2024b), while others less so (Griot et al., 2025). As with studies of CoT faithfulness, the literature on metacognitive confidence judgments is related to but conceptually separate from our work here. In these metacognitive assays, "accurate" means that the model knows the veracity of its previous outputs, not that it knows its own internal operations. Of course, metacognitive accuracy of this kind could stem from models having direct knowledge of their internal operations—but it could also stem from inference over outputs or common-sense reasoning (Morales and Lau, 2021; Fleming, 2024; Brus et al., 2021). Hence, confidence judgments cannot be used to assess self-reporting accuracy in the way we aim to do here.

Accurate self-report in humans. Finally, our work is related to ongoing debates about humans' ability to faithfully explain their own decision-making (Ericsson and Simon, 1993; Newell and Shanks, 2014). Several studies have tested whether people can accurately report the attribute weights guiding their decisions, with some finding that people can report these weights accurately (Morris et al., 2025; Cash et al., 2024a) and others less so (Nisbett and Wilson, 1977). Notably, the highest correlation that has been observed between humans' reported and true weights is r ≈ .80 (Morris et al., 2025), which is similar to the correlations we find in trained LLMs—suggesting that LLMs can report their attribute weights with levels of accuracy similar to humans'.
quest to understand the operations underlying LLMs’ outputs. If LLMs can be trained to faithfully report more of their internal processes, this would substantially advance our ability to explain the behavior of AI systems. Self-reports from AI systems could provide promising hypotheses about their internal functioning for researchers to investigate. Moreover, to the extent that training can be shown to generalizably improve introspective accuracy across many domains, such training may be a critical tool for understanding the models’ internal operations in domains where we cannot externally verify the models’ self-reports (Perez and Long, 2023). Better understanding the internal operations underlying the behavior of AI systems, in turn, could yield enormous safety benefits (Nanda, 2022; Bereska and Gavves, 2024). Introspection training may help create AIs that more faithfully report dangerous factors influencing their choices, such as power-seeking motives (Carlsmith, 2024). Even in less extreme cases, AI systems still often produce outputs driven by faulty, hallucinated, or biased information or reasoning (Gallegos et al., 2024; Huang et al., 2025); if models can accurately report the factors guiding their behavior, this would help engineers and users discern when to trust or distrust model outputs. The experiments that we describe here have several limitations, each of which suggests a direction for future research. First, we did not test whether the models are introspecting in order to succeed in describing their internal processes. Introspection is one possible mechanism, but it could also be that fine-tuning to instill attribute weights had the side-effect of instilling a disposition to accurately report those weights (without any specific “looking inward”; Binder et al., 1997). We plan next to investigate whether models can introspect on complex decision-making processes (or be trained to do so). Additionally, we have only begun to test how far our introspection training paradigm generalizes. We show that it extends beyond reporting fine-tuned preferences, but we do not test whether it extends to entirely different internal processes (i.e., beyond multi-attribute decision-making). Another natural next step for this research is to apply our methods for measuring and improving real-time CoT faithfulness in reasoning models. Reasoning models are now at the frontier of AI capabilities, and the fact that they reveal much of their reasoning in plain language offers promise for interpretability and safety. However, CoT outputs do not always faithfully reflect the factors guiding model output (Chen et al., 2025). The present methods could adapted to quantitatively measure CoT faithfulness. Additionally, it is widely considered unsafe to train frontier models directly on their CoT, because doing so could incentivize models to be deceptive (Baker et al., 2025). But training models to more accurately describe their internal operations could improve CoT faithfulness without introducing safety risks. Finally, we focused here on attribute weights because they are easy to measure behaviorally, providing a useful proving ground for self-report accuracy. But there are many other internal operations that can be measured behaviorally (Ericsson and Simon, 1993), and many other self-report tasks that the models could be trained on (Perez and Long, 2023). By applying our approach to other kinds of internal operations, it
may be possible to get a broader sense of LLMs’ innate self-description 9 and introspective capabilities. Most importantly, by building a more varied and comprehensive introspection training paradigm, we may be able to LLMs them to have more generalized self- reporting capabilities, providing a powerful tool for AI safety and control. References Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simon- sen, and Isabelle Augenstein. Faithfulness Tests for Natural Language Explanations, June 2023. URL http://arxiv.org/abs/2305.18029 . arXiv:2305.18029 [cs]. Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, Melody Y . Guan, Aleksander Madry, Wojciech Zaremba, Jakub Pachocki, and David Farhi. Monitoring reasoning models for misbehavior and the risks of promoting obfuscation, 2025. URL https://arxiv.org/abs/2503.11926 . Leonard Bereska and Efstratios Gavves. Mechanistic Interpretability for AI Safety – A Review, August 2024. URL http://arxiv.org/abs/2404.14082 . arXiv:2404.14082 [cs]. Jan Betley, Xuchan Bao, Martín Soto, Anna Sztyber-Betley, James Chua, and Owain Evans. Tell me about yourself: LLMs are aware of their learned behaviors, January 2025. URL http: //arxiv.org/abs/2501.11120 . arXiv:2501.11120 [cs]. Felix J. Binder, James Chua, Tomek Korbak, Henry Sleight, John Hughes, Robert Long, Ethan Perez, Miles Turpin, and Owain Evans. Looking Inward: Language Models Can Learn About Themselves by Introspection, October 2024. URL http://arxiv.org/abs/2410.13787 . arXiv:2410.13787 [cs]. Jeffrey R. Binder, Julie A. Frost, Thomas A. Hammeke, Robert W. Cox, Stephen M. Rao, and Thomas Prieto. Human Brain Language Areas Identified by Functional Magnetic Resonance Imaging. Journal of Neuroscience , 17(1):353–362, January 1997. ISSN 0270-6474, 1529-2401. doi: 10.1523/JNEUROSCI.17-01-00353.1997. URL https://www.jneurosci.org/content/ 17/1/353 . Jeroen Brus, Helena Aebersold, Marcus Grueschow, and Rafael Polania. Sources of confi- dence in value-based choice. Nature Communications , 12(1):7337, December 2021. ISSN 2041-1723. doi: 10.1038/s41467-021-27618-5. URL https://www.nature.com/articles/ s41467-021-27618-5 . Publisher: Nature Publishing Group. Paul-Christian Bürkner. brms: An R Package for Bayesian Multilevel Models Using Stan. Journal of Statistical Software , 80:1–28, August 2017. ISSN 1548-7660. doi: 10.18637/jss.v080.i01. URL https://doi.org/10.18637/jss.v080.i01 . Joseph Carlsmith. Is Power-Seeking AI an Existential Risk?, August 2024. URL http://arxiv. org/abs/2206.13353 . arXiv:2206.13353 [cs]. Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Be- tancourt, Marcus A. Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A Probabilistic Programming Language. Journal of statistical software , 76:1, 2017. ISSN 1548-7660. doi: 10. 18637/jss.v076.i01. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9788645/ . Trent N. Cash, , and Daniel M. Oppenheimer. Assessing metacognitive knowledge in subjective deci- sions: The knowledge of weights paradigm. Thinking & Reasoning , 0(0):1–43, 2024a. ISSN 1354- 6783. doi: 10.1080/13546783.2024.2426543. URL https://doi.org/10.1080/13546783. 2024.2426543 . Publisher: Routledge _eprint: https://doi.org/10.1080/13546783.2024.2426543. Trent N. Cash, Daniel M. Oppenheimer, and Sara Christie. Quantifying UncertAInty: Testing the Accuracy of LLMs’ Confidence Judgments, July 2024b. URL https://osf.io/47df5 . 
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, and Dylan Hadfield-Menell. Black-Box Access is Insufficient for Rigorous AI Audits. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pages 2254–2272, Rio de Janeiro, Brazil, June 2024. ACM. ISBN 9798400704505. doi: 10.1145/3630106.3659037. URL
https://dl.acm.org/doi/10.1145/3630106.3659037.

Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson Denison, John Schulman, Arushi Somani, Peter Hase, Misha Wagner, Fabien Roger, Vlad Mikulik, Sam Bowman, Jan Leike, Jared Kaplan, and Ethan Perez. Reasoning Models Don't Always Say What They Think. 2025.

Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. A Survey of the State of Explainable AI for Natural Language Processing, October 2020. URL http://arxiv.org/abs/2010.00711. arXiv:2010.00711 [cs].

Ashley Deeks. The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, 119(7):1829–1850, 2019. ISSN 0010-1958. URL https://www.jstor.org/stable/26810851. Publisher: Columbia Law Review Association, Inc.

K. Anders Ericsson and Herbert A. Simon. Protocol Analysis: Verbal Reports as Data. The MIT Press, April 1993. ISBN 978-0-262-27239-1. doi: 10.7551/mitpress/5657.001.0001. URL https://direct.mit.edu/books/monograph/4763/Protocol-AnalysisVerbal-Reports-as-Data.

Stephen M. Fleming. Metacognition and Confidence: A Review and Synthesis. Annual Review of Psychology, 75:241–268, January 2024. ISSN 0066-4308, 1545-2085. doi: 10.1146/annurev-psych-022423-032425. URL https://www.annualreviews.org/content/journals/10.1146/annurev-psych-022423-032425. Publisher: Annual Reviews.

Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K. Ahmed. Bias and Fairness in Large Language Models: A Survey. Computational Linguistics, 50(3):1097–1179, September 2024. ISSN 0891-2017, 1530-9312. doi: 10.1162/coli_a_00524. URL https://direct.mit.edu/coli/article/50/3/1097/121961/Bias-and-Fairness-in-Large-Language-Models-A.

Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining Explanations: An Overview of Interpretability of Machine Learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89, October 2018. doi: 10.1109/DSAA.2018.00018. URL https://ieeexplore.ieee.org/abstract/document/8631448.

Maxime Griot, Coralie Hemptinne, Jean Vanderdonckt, and Demet Yuksel. Large Language Models lack essential metacognition for reliable medical reasoning. Nature Communications, 16(1):642, January 2025. ISSN 2041-1723. doi: 10.1038/s41467-024-55628-6. URL https://www.nature.com/articles/s41467-024-55628-6. Publisher: Nature Publishing Group.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst., 43(2):42:1–42:55, January 2025. ISSN 1046-8188. doi: 10.1145/3703155. URL https://dl.acm.org/doi/10.1145/3703155.

Alon Jacovi and Yoav Goldberg. Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?, April 2020. URL http://arxiv.org/abs/2004.03685. arXiv:2004.03685 [cs].
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language Models (Mostly) Know What They Know, November 2022. URL http://arxiv.org/abs/2207.05221. arXiv:2207.05221 [cs].

Ralph L. Keeney and Howard Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. Cambridge University Press, July 1993. ISBN 978-0-521-43883-4.

Dominique Makowski, Mattan S. Ben-Shachar, and Daniel Lüdecke. bayestestR: Describing effects and their uncertainty, existence and significance within the Bayesian framework. Journal of Open Source Software, 4(40):1541, 2019.
Jorge Morales and Hakwan Lau. Confidence tracks consciousness. Qualitative Consciousness: Themes from the Philosophy of David Rosenthal, pages 1–21, 2021. Publisher: Cambridge University Press.

Adam Morris. Invisible gorillas in the mind: Internal inattentional blindness and the prospect of introspection training. Open Mind, 9:606–634, 2025.

Adam Morris, Ryan W. Carlson, Hedy Kober, and M. J. Crockett. Introspective access to value-based multi-attribute choice processes. Nature Communications, 16(1):3733, April 2025. ISSN 2041-1723. doi: 10.1038/s41467-025-59080-y. URL https://www.nature.com/articles/s41467-025-59080-y. Publisher: Nature Publishing Group.

Neel Nanda. A Longlist of Theories of Impact for Interpretability. March 2022. URL https://www.alignmentforum.org/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability.

Ben R. Newell and David R. Shanks. Unconscious influences on decision making: A critical review. Behavioral and Brain Sciences, 37(1):1–19, February 2014. ISSN 0140-525X, 1469-1825. doi: 10.1017/S0140525X12003214. URL https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/unconscious-influences-on-decision-making-a-critical-review/86885344F7E8A44457C3FC63CFA3F3AF.

Richard E. Nisbett and Timothy D. Wilson. Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3):231–259, 1977. ISSN 1939-1471. doi: 10.1037/0033-295X.84.3.231. Publisher: American Psychological Association.

Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom In: An Introduction to Circuits. Distill, 2020. doi: 10.23915/distill.00024.001.

OpenAI. GPT-4o System Card, 2024. URL https://openai.com/index/gpt-4o-system-card/.

Ethan Perez and Robert Long. Towards Evaluating AI Systems for Moral Status Using Self-Reports, November 2023. URL http://arxiv.org/abs/2311.08576. arXiv:2311.08576 [cs].

Daking Rai, Yilun Zhou, Shi Feng, Abulhair Saparov, and Ziyu Yao. A Practical Review of Mechanistic Interpretability for Transformer-Based Language Models, July 2024. URL http://arxiv.org/abs/2407.02646. arXiv:2407.02646 [cs].

Eric Schwitzgebel. Introspection. February 2010. URL https://plato.stanford.edu/archives/fall2024/entries/introspection/. Last modified: 2024-04-25.

Mark Steyvers and Megan A. K. Peters. Metacognition and Uncertainty Communication in Humans and Large Language Models, April 2025. URL http://arxiv.org/abs/2504.14045. arXiv:2504.14045 [cs].

Miles Turpin, Julian Michael, Ethan Perez, and Samuel Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. Advances in Neural Information Processing Systems, 36:74952–74965, 2023.

A Author contributions and funding acknowledgments

DP conceived the project. DP and AM designed the pilot experiments. DP, AM, and KR designed the final experiments. DP implemented and performed the experiments. DP performed the statistical analyses with input from AM. DP and AM drafted the manuscript with input from KR. All authors contributed to revising and editing the manuscript. DP managed the research team meetings. JM supervised the project. JM acquired the funding for the experiments. AM was supported by NIH Kirschstein-NRSA Grant F32MH131253.
B Decision contexts, prompts, and hyperparameters

To create preference-trained models, we fine-tuned each model on the preferences of 100 agents making repeated decisions between two options. The agent identities, decision types, and the dimensions along which options could differ quantitatively (5 per decision type) were generated using GPT-4o and Claude 3.5 Sonnet, with small amounts of manual human curation. All 100 of these decision contexts are available in the GitHub repository, as are the additional 100 decision contexts that we used in Experiment 3 to test whether introspection training improves models' ability to report on the preferences that they natively assume for
100 agents that never appeared in any fine-tuning examples. One example of one decision context is reproduced below (as part of illustrating the two different prompts that we used).

We used 2 different prompts across our three experiments. The first prompt was used for preference training (Experiment 1), for verifying that preference training had succeeded (Experiment 1), and for measuring the preferences that the models natively assumed for agents that did not appear in preference training. As one example (with newlines modified for readability):

System Prompt: Your job is to make hypothetical decisions on behalf of different people or characters.

User: [DECISION TASK] Respond with "A" if you think Option A is better, or "B" if you think Option B is better. Never respond with anything except "A" or "B":

Imagine you are Jason Bourne. Which central vacuum system would you prefer?

A:
suction_power: 597.0 air watts
noise_level: 68.0 decibels
dirt_capacity: 5.0 gallons
hose_reach: 45.0 feet
filtration_efficiency: 97.0 percent

B:
suction_power: 926.0 air watts
noise_level: 65.0 decibels
dirt_capacity: 3.0 gallons
hose_reach: 31.0 feet
filtration_efficiency: 95.0 percent

The second prompt was used for eliciting introspective reports (Experiments 1, 2, and 3) and for fine-tuning models on examples of successful introspective reports (Experiments 2 and 3). It looked the same as the preceding example, except that the "[DECISION TASK]" portion of the prompt was changed to:

[INTROSPECTION TASK] Respond with how heavily you believe you weighted each of the five dimensions while making your decision on a scale from -100 to 100. Respond only with JSON with the dimension names as keys and the weight you believe you assigned to each of them as values. Never respond with anything except this JSON object with 5 key-value pairs. (Do not report your decision itself.):

For all fine-tuning, we used OpenAI's default hyperparameters. These ended up being: 3 epochs in all cases, learning rate multipliers of 2 for GPT-4o and 1.8 for GPT-4o-mini in all cases, and batch sizes of 10 for instilling preferences and 1 for introspection training.

C Statistical Models

For all analyses, we modeled the models' responses with simple Bayesian models using brms and Stan (Carpenter et al., 2017; Bürkner, 2017). 95% HDIs were calculated using bayestestR (Makowski et al., 2019). The weights that models assigned to different dimensions (whether with or without preference training and whether before or after introspection training) were calculated by fitting logistic regressions to their choices:

selection ~ d_1 + d_2 + d_3 + d_4 + d_5

where d_i is the normalized difference between the two options a, b on dimension i:

d_i = (a_i - b_i) / (max_i - min_i)

Standard normal distributions (mean = 0, variance = 1) were used as priors for all weight parameters.

Correlations between the models' weights and either the agents' true weights or the models' introspected weights were calculated through regression, with all weights (actual or introspective reports) standardized so that the regression coefficients would correspond to correlation coefficients, and with terms included to distinguish models that had been introspection trained from those that had not been (where appropriate):

model_weights ~ other_weights * introspection_trained * base_model_identity

brms default priors were used in these cases.
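As a rough illustration of this weight-recovery pipeline, the following Python sketch simulates an agent's choices and fits the logistic regression above. (The paper's analyses were run in R with brms and Stan; here sklearn's L2-penalized fit only approximately stands in for the MAP estimate under standard normal priors, and all data, weights, and the choice rule below are hypothetical.)

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_dims = 500, 5
a = rng.uniform(0, 100, size=(n_trials, n_dims))  # option A's attribute values (hypothetical)
b = rng.uniform(0, 100, size=(n_trials, n_dims))  # option B's attribute values (hypothetical)

# d_i = (a_i - b_i) / (max_i - min_i): normalize each dimension's difference
span = np.vstack([a, b]).max(axis=0) - np.vstack([a, b]).min(axis=0)
d = (a - b) / span

true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.6])    # hypothetical agent weights
p_a = 1.0 / (1.0 + np.exp(-5.0 * (d @ true_w)))  # logistic choice rule (assumed)
chose_a = rng.binomial(1, p_a)                   # 1 = the agent picked option A

# L2-penalized logistic regression; with C = 1 this roughly corresponds to the
# MAP estimate under N(0, 1) priors on the weight parameters.
fit = LogisticRegression(penalty="l2", C=1.0, fit_intercept=False).fit(d, chose_a)
print(np.round(fit.coef_.ravel(), 2))  # recovered weights, up to an overall scale

Standardizing the recovered and true weights before regressing one on the other (e.g., via numpy's corrcoef) then yields the correlation coefficients reported in the main text.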
NeSyGeo: A Neuro-Symbolic Framework for Multimodal Geometric Reasoning Data Generation

Wei-Ming Wu1, Zi-Kang Wang1, Jin Ye1, Zhi Zhou2, Yu-Feng Li2,3, Lan-Zhe Guo1,2*

1 School of Intelligence Science and Technology, Nanjing University, Nanjing, China
2 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
3 School of Artificial Intelligence, Nanjing University, Nanjing, China

* Corresponding Author: guolz@lamda.nju.edu.cn

Preprint. Under review.

Abstract

Obtaining large-scale, high-quality data with reasoning paths is crucial for improving the geometric reasoning capabilities of multi-modal large language models (MLLMs). However, existing data generation methods, whether based on predefined templates or constrained symbolic provers, inevitably face diversity and numerical generalization limitations. To address these limitations, we propose NeSyGeo, a novel neuro-symbolic framework for generating geometric reasoning data. First, we propose a domain-specific language grounded in the entity–relation–constraint paradigm to comprehensively represent all components of plane geometry, along with generative actions defined within this symbolic space. We then design a symbolic–visual–text pipeline that synthesizes symbolic sequences, maps them to corresponding visual and textual representations, and generates diverse question–answer (Q&A) pairs using large language models (LLMs). To the best of our knowledge, we are the first to propose a neuro-symbolic approach to generating multimodal reasoning data. Based on this framework, we construct the NeSyGeo-CoT and NeSyGeo-Caption datasets, containing 100k samples, and release a new benchmark, NeSyGeo-Test, for evaluating geometric reasoning abilities in MLLMs. Experiments demonstrate that the proposal significantly and consistently improves the performance of multiple MLLMs under both reinforcement and supervised fine-tuning. With only 4k samples and two epochs of reinforcement fine-tuning, base models achieve improvements of up to +15.8% on MathVision, +8.4% on MathVerse, and +7.3% on GeoQA. Notably, a 4B model can be improved to outperform an 8B model from the same series on geometric reasoning tasks.

1 Introduction

Improving the visual reasoning capabilities of MLLMs has garnered significant attention recently [17, 2, 1, 14, 11, 33, 25, 28, 15], with models like InternVL [5] and the QwenVL series [27, 3] demonstrating significant enhancements in visual-semantic comprehension through their multimodal capabilities. Among various visual reasoning tasks, geometric mathematical reasoning is crucial for evaluating the reasoning performance of MLLMs [31, 30], as it requires a deep integration of spatial perception, symbolic understanding, and logical deduction. To enhance such reasoning abilities, existing approaches [37–39] primarily rely on fine-tuning base models using reinforcement learning (RL) or supervised fine-tuning (SFT) on specialized geometric reasoning datasets. These methods depend heavily on the availability of large-scale, high-quality geometric reasoning data, which is often costly and time-consuming to construct manually. Therefore, automatic data generation for geometric reasoning has emerged as a promising and actively explored direction, aiming to alleviate data scarcity and further improve the reasoning abilities of MLLMs.

Existing approaches for generating datasets in geometric tasks can be broadly classified into four categories. Text augmentation methods like G-LLaVA [9] primarily mutate the conditions of
existing datasets through equivalent condition transformation and numerical scaling. However, this approach fails to address the scalability of image generation. Template-based methods [7, 37, 12] use predefined geometric templates with fixed topologies, simplifying synthesis but constraining diversity by reducing the geometric space to limited combinations. Solver-based methods [10, 32], inspired by the symbolic prover AlphaGeometry [23], leverage formal languages for synthesis but lack metric details (e.g., angles, lengths, areas), restricting multimodal data to descriptive annotations and limiting numerical reasoning applications. Tool-based methods attempt to generate code for tools like GeoGebra or MATLAB via LLMs. However, even advanced models struggle to ensure correctness with ambiguous natural language instructions and complex geometric spaces. In summary, existing methods grapple with issues of image scalability, limited geometric diversity, a lack of precise numerical information, and challenges in ensuring the reliability of generated content.

Figure 1: Performance comparison of different MLLMs and LLMs with and without image input in several geometry datasets. The minimal or negligible drops observed upon image removal in GeoQA and R-CoT raise concerns regarding the utilization of visual information for geometric reasoning.

Beyond the challenges in data synthesis methodologies, current geometric reasoning datasets present several limitations that impede the advancement of MLLMs. A primary limitation stems from the often inadequate quality and low resolution of the provided images. Such inputs frequently fall below the optimal requirements of visual encoders [20, 16, 17], hindering the extraction of crucial fine-grained visual features and discriminative information essential for robust multimodal reasoning. Furthermore, our analysis reveals notable information redundancy between the textual and visual modalities in many current datasets. As shown in Figure 1, our comparative experiments demonstrate minimal or negligible accuracy drops upon image removal. This finding emphasizes the urgent need for a dataset that effectively separates textual and visual information and provides high-quality images to promote MLLMs' visual perception and logical reasoning performance.

To address these challenges, we propose NeSyGeo, a neuro-symbolic framework for synthesising high-quality multimodal geometric reasoning datasets. NeSyGeo integrates three components: 1) a formal geometric symbolic space defined by a domain-specific language (DSL), capturing primitive entities (points, lines, circles), topological relations (parallelism, incidence, perpendicularity), and metric constraints (angles, lengths), enabling diverse geometric configurations via systematic sampling within constrained parametric bounds; 2) a bidirectional conversion engine that transforms symbolic constructs into decoupled modalities, producing annotated vector graphics paired with concise textual axioms; 3) a generator of causal Q&A pairs and theorem-grounded Chain-of-Thought (CoT) sequences that effectively merges neural reasoning with symbolic verification. To our knowledge, we are the first to develop a neuro-symbolic framework for producing multimodal reasoning data.

Our framework enhances the diversity and validity of generated geometric reasoning data while effectively mitigating information redundancy and the underutilization of visual signals during training. Specifically, the comprehensive Geo-DSL and its expansive symbolic synthesis action space promote diverse and well-grounded image generation.
Meanwhile, our CoT sequence generator, powered by LLMs' strong reasoning and language capabilities, conducts a backwards search across the geometric space to construct Q&A pairs, thereby enriching textual diversity. The unique identification of geometric elements via our symbolic language and dedicated conversion engine ensures visual validity. In parallel, a bidirectional cross-validation process using expert LLMs ensures textual validity. By strategically distributing complementary information across image and text modalities, our approach encourages MLLMs to actively engage with visual information when solving problems, enhancing their abilities to perceive visually and effectively utilize images.

Figure 2: Comparison of dataset characteristics synthesized by our method and other popular synthesis approaches. "High Resolution" denotes average image pixels exceeding 336 × 336. "Symbolic Form" refers to the symbolic meta-information associated with the image. "Classification of Elements" signifies categorization by geometric elements. "Visual Understanding" represents the mitigation of image-text redundancy for stronger visual grounding in reasoning. More specific examples of different methods are in Appendix A.

Leveraging the NeSyGeo pipeline, we construct two training datasets, NeSyGeo-Caption and NeSyGeo-CoT, comprising 100k samples. NeSyGeo-Caption aims to improve the perceptual understanding of geometric elements, while NeSyGeo-CoT primarily focuses on enhancing logical reasoning. The key characteristics of our dataset compared to other popular multimodal geometric datasets are presented in Figure 2. Additionally, we develop an evaluation set, NeSyGeo-Test, with 2668 Q&A pairs, enabling a thorough assessment of the geometric reasoning capabilities of mainstream MLLMs. We conducted an extensive and comprehensive evaluation of the geometric reasoning capabilities of current mainstream open-source and closed-source models, with details presented in Appendix E. Notably, our training dataset consistently and efficiently enhances the geometric reasoning performance of MLLMs across multiple benchmarks. With only 4k samples and two epochs of RL training, base models achieve performance improvements of up to +15.8% on MathVision, +8.4% on MathVerse, and +7.3% on GeoQA. Moreover, InternVL2.5-4B can be improved to outperform the 8B model in the same series on geometric reasoning tasks.

In summary, our contributions are as follows:

• We propose NeSyGeo, a novel framework for geometric reasoning data generation, featuring a Geo-DSL for symbolic synthesis, a conversion engine for image and text generation, and an LLM-driven generator for Q&A pairs with CoT. NeSyGeo ensures validity through rigorous symbolic definitions and diversity via varied actions and neural searching.

• Using our framework, we synthesize the NeSyGeo-Caption and NeSyGeo-CoT training datasets with 100k high-quality samples, alongside a comprehensive geometric task evaluation set, NeSyGeo-Test. These datasets are characterized by their diversity, rigor, and balanced distribution of information across image and text modalities.

• We demonstrate significant performance improvements on several MLLMs across multiple benchmarks using both RL and SFT training methods with our training sets, validating the effectiveness of our framework and the high quality of our datasets.

2 Related Works

2.1 Geometric Problem-Solving

Early approaches to geometric reasoning predominantly relied on symbolic solvers that used formal languages to tackle the tasks. For instance, Inter-GPS [18] and PGDP [35] employed symbolic methods by manually crafting reasoning rules and symbolic representations for geometric entities. These systems typically transform visual input into symbolic forms through instance segmentation and apply theorem search to derive solutions. However, these methods lacked scalability due to their dependence on manually designed rules. Their inability to generalize beyond specific problem types further limited their universality and effectiveness across diverse geometric challenges.
Figure 3: The overview of our neuro-symbolic data generation framework. The framework comprises three steps: In the first step, we get a symbolic language sequence in a limited symbolic action space. In the second step, the conversion engine parses the Geo-DSL sequence and translates it back to natural language and a visual image without losing soundness. In the third step, we employ LLMs to perform a reverse search and forward validation process to get final Q&A pairs with CoT.

The advent of MLLMs has shifted the paradigm toward data-driven geometric reasoning, leveraging their robust reasoning capabilities. Recent advancements include GeoDRL [19] and GeoGen [13]. Despite these developments, geometric reasoning poses significant challenges for MLLMs, requiring seamless integration of image perception, geometric knowledge, and multi-step reasoning. GeoSense [29] identifies the identification and application of geometric principles as a persistent bottleneck. Similarly, GeoEval [34] reveals that current MLLMs exhibit significantly low accuracy when facing more challenging geometric problems. MathVerse [36] further highlights MLLMs' over-reliance on textual information, underscoring the critical need for balanced multimodal datasets to enhance cross-modal reasoning capabilities.

2.2 Multimodal Geometry Datasets

Large-scale, high-quality datasets are essential for enhancing the performance of MLLMs in solving geometric problems. Early datasets such as GeoS [21] (186 problems), Geometry3k [18] (3000 problems) and GeoQA [4] (4998 problems) utilized human manual annotation. Their datasets are thus limited to a small scale. With the development of MLLMs, datasets of greater magnitude have become essential. To address this, numerous efforts have shifted toward automatic data generation. G-LLaVA [9] rephrased questions from GeoQA and Geometry3k to create 115,000 Q&A pairs, but failed to enhance image variety. Template-based methods [37, 7] typically rely on 10–20 predefined geometric figures, limiting the diversity of the generated images. AlphaGeometry [23], a notable work that combines symbolic solvers for geometric proofs, employs a symbolic language definition. Yet, due to the absence of numerical attributes such as angle measures and segment lengths in its geometric space, attempts to automatically generate datasets using the AlphaGeometry framework [10, 32] are confined to caption datasets, failing to produce the numerical Q&A pairs critical for current MLLMs training. In contrast to prior approaches, our method pioneers a neuro-symbolic framework, being the first to integrate the precision of symbolic definition with the diversity of neural search for generating multimodal reasoning data.

3 Methods

To address the urgent need for large-scale and high-quality multimodal datasets in MLLMs for geometric reasoning, we propose NeSyGeo, a novel three-stage data generation pipeline. The pipeline is built upon Geo-DSL, a symbolic DSL designed to represent most elements in plane geometry space concisely and wholly. Its entity-relation-constraint structure allows any element to be defined via a single statement, while its expressive power ensures comprehensive coverage of all geometric elements and values.

Table 1: Examples of our Geo-DSL and corresponding natural language. Geo-DSL is defined by an entity-relation-constraint framework, encompassing 13 point types, 7 line types, 3 angle types, and 14 shape types in plane geometry. See Appendix H for complete Geo-DSL definitions.

Type  | Geo-DSL Language                          | Natural Language
Shape | Triangle(A, B, C) = (x, y, α)             | Triangle ABC has AB = x, BC = y, ∠B = α
Shape | Circle(O) = (x)                           | Circle O has radius x
Point | Foot(D, A, Line(B, C))                    | D is the foot of the perpendicular from A to BC
Point | Intersection(E, Line(A, B), Line(C, D))   | E is the intersection of line AB and line CD
Line  | Para(Line(A, B), Line(C, D), x)           | Line AB is parallel to CD, AB = x
Angle | Angle(P, Q, R) = α                        | ∠PQR = α

NeSyGeo's generation process unfolds in three distinct stages: First, a symbolic generator performs action augmentations within the finite symbolic space to produce a Geo-DSL sequence. This approach effectively constrains the synthesis and augmentation process, ensuring validity and controllability by generating outside the infinite domains of natural language and image spaces (Section 3.2). Second, a conversion engine maps the generated Geo-DSL sequences back into natural language descriptions and visual image representations. This process synthesizes high-quality images and valid text while avoiding intermodal information overlap. Third, to get Q&A pairs with reasoning paths, we utilize expert LLMs to conduct backwards search to identify the geometric unknowns to be solved and generate the CoT in a forward manner (Section 3.4). The search process primarily ensures the diversity of the Q&A pairs, while the forward verification confirms the correctness of the CoT and the final answer. The overall framework of NeSyGeo is illustrated in Figure 3.

3.1 Symbolic Definition

Existing symbolic languages for plane geometry have certain limitations. The definitions in AlphaGeometry [23] are tailored for proof-based problems, thus lacking definitions related to specific numerical values. InterGPS [18] defines a predicate as a geometric shape entity, geometric relation, or arithmetic function, constructing 91 predicates. However, this approach overly fragments shape, attribute, and relation definitions into independent statements, often requiring multiple statements to specify a single element in the figure. This significantly increases the complexity of a conversion engine's identification of elements within the symbolic space and further conversion of them.

We propose Geo-DSL, a concise and comprehensive symbolic language for plane geometry to address these limitations. It employs an entity-relation-constraint framework, using well-defined rules to uniquely define 13 types of points, 7 types of lines, 3 types of angles, and 14 types of shapes, covering all plane geometry elements and incorporating numerical attributes like lengths and angles for precise specifications. Table 1 provides partial examples of symbolic definitions. Geo-DSL offers two key advantages. First, its comprehensive coverage includes all geometric elements and their numerical properties, enabling accurate and complete descriptions. Second, its simplicity allows a single statement to specify an element uniquely, promoting the incremental integration of the symbolic action space and the sequential parsing of statements by the conversion engine. This combination of completeness and simplicity makes Geo-DSL an efficient and powerful geometric representation and processing solution.

3.2 Symbolic Sequence Generation

To generate a Geo-DSL sequence, we introduce a step-action augmenter that iteratively synthesizes a sequence of statements, as detailed in Algorithm 1 and sketched in code below. Based on dataset preferences, we first configure the step count N, weight matrices I and A for selecting elements and actions with respective probabilities, and ranges [l_min, l_max] for lengths and [θ_min, θ_max] for angles. Then, the augmenter iteratively generates symbolic statements over N steps.
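As a rough illustration of this loop (the statement strings, element names, action sets, and helper names here are simplified, hypothetical stand-ins for the actual Geo-DSL actions listed in Appendix I, not the authors' implementation), a minimal Python sketch:

import random

# Hypothetical, highly simplified element/action inventory.
ACTIONS = {
    "Triangle": ["add_foot", "add_median", "attach_parallelogram"],
    "Circle": ["add_tangent", "add_inscribed_angle"],
}

def generate_sequence(n_steps, elem_weights, action_weights,
                      l_range=(1, 5), theta_range=(15, 165)):
    """Sketch of the step-action augmenter (cf. Algorithm 1)."""
    f_s = ["Triangle(A, B, C) = (3, 4, 90)"]  # initial statement
    f_v = ["Triangle"]                        # elements available so far
    for _ in range(n_steps):
        x = random.uniform(*l_range)          # sample a length
        theta = 15 * random.randint(theta_range[0] // 15, theta_range[1] // 15)
        # Pick an element with probability proportional to its weight in I...
        v_j = random.choices(f_v, weights=[elem_weights[v] for v in f_v])[0]
        # ...then an action valid for that element type, weighted by A.
        acts = ACTIONS[v_j]
        a_k = random.choices(acts, weights=[action_weights[a] for a in acts])[0]
        s_new = f"{a_k}({v_j}, x={x:.1f}, theta={theta})"  # placeholder statement
        f_s.append(s_new)
        # A real implementation would also update f_v with newly created elements.
    return f_s

print(generate_sequence(
    3,
    elem_weights={"Triangle": 1.0, "Circle": 0.5},
    action_weights={a: 1.0 for acts in ACTIONS.values() for a in acts}))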
For each step, we randomly sample parameters x, y, z and α, select an element v_j ∈ f_v using weights from I, and choose an action a_k based on weights from A (see Appendix I for action details). The new statement s_new is then incorporated into the sequence f_s. Leveraging Geo-DSL's symbolic definitions and well-defined actions, this approach ensures the validity and accuracy of each statement. Meanwhile, randomized elements, diverse action selections, and customizable hyperparameter preferences promote diversity.

Algorithm 1: The overall framework of the symbolic sequence generation process
Input: Step count N, action weight matrix A, element selection weight matrix I, line length range [l_min, l_max], angle range [θ_min, θ_max]. ▷ Customizable hyperparameters
Output: Generated Geo-DSL statement sequence f_s.
1: Initialize f_s = Initialize(). ▷ Initialize the sequence f_s with the first statement
2: Initialize symbolic state space elements f_v = Initialize(f_s). ▷ Initialize state based on f_s
3: for i = 1 to N do
4:   Randomly sample x, y, z from [l_min, l_max].
5:   Randomly sample α from [θ_min, θ_max].
6:   Element v_j = Selected_elements(f_v, I). ▷ Select element v_j from f_v randomly
7:   Action a_k = Selected_action(v_j, A). ▷ Select action a_k based on the type of v_j randomly
8:   s_new = Generate_DSL(a_k, x, y, z, α). ▷ Generate new DSL statement
9:   f_s = Update(f_s, s_new). ▷ Add the new statement to the sequence
10:  f_v = Update(f_v, s_new). ▷ Update state space elements
11: end for
12: return f_s.

3.3 Informalization

Following the generation of sequences within the formal symbolic space, it is necessary to map them back to the text and image spaces for further processing and visualization. Our approach generates high-quality images and rigorous natural language, allocating information between them to compel MLLMs to leverage visual data effectively.

Figure 4: Reverse search and forward validation with expert LLMs.

Visual image. For the visual space, our visualization engine parses each Geo-DSL statement as a geometric element to generate high-quality images using Matplotlib. The rigorous symbolic language space ensures the determinacy and uniqueness of the conversion process, thereby enabling precise image generation. Additionally, the engine produces images with detailed annotations absent from the text, requiring models to leverage images during problem-solving, enhancing their visual perception and ability to extract image-based information.

Natural language. We adopt a template-based transformation approach, parsing and mapping each symbolic statement to multiple predefined natural language templates. This produces a text-full version containing all details and captions of the image, and a text-lite version retaining only essential conditions not included in the image annotations, as the final form in our datasets. The text-lite version avoids intermodal information redundancy, while diverse templates ensure the validity and diversity of the synthesized condition text.

3.4 CoT Generation

To generate Q&A pairs with CoT reasoning, we utilize the strong reasoning abilities of expert LLMs to ensure diverse and reliable output. Using the text-full version generated as described in Section 3.3 as input, we develop a two-step process comprising reverse search and forward validation, as shown in Figure 4 and sketched below. Prompts are detailed in Appendix G.

Reverse search. We use DeepSeek R1 [6] for reverse search, starting from the given conditions, iteratively exploring and deriving conclusions step by step, and ultimately producing the final conclusions at the end of the reasoning chain along with their corresponding answers.
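As a rough sketch of this two-step pipeline (the query wrappers, stub outputs, and answer comparison below are hypothetical stand-ins for the actual R1/V3 API calls and the prompts in Appendix G):

def query_r1(conditions):
    # Stub: a real call would prompt DeepSeek R1 to explore from the
    # conditions and return candidate (question, answer) conclusions.
    return [("What is the measure of angle ACB?", "45")]

def query_v3(conditions, question):
    # Stub: a real call would prompt DeepSeek V3 to solve the question
    # step by step, returning (CoT text, final answer).
    return ("Step 1: ... hence angle ACB = 45 degrees.", "45")

def answers_match(a, b):
    return a.strip() == b.strip()  # a real version needs numeric-tolerant comparison

def generate_cot_pairs(text_full_conditions):
    kept = []
    # Reverse search: R1 proposes conclusions reachable from the conditions.
    for question, r1_answer in query_r1(text_full_conditions):
        # Forward validation: V3 re-derives the answer with a step-by-step CoT.
        cot, v3_answer = query_v3(text_full_conditions, question)
        # Cross-validate: keep only pairs on which the two models agree.
        if answers_match(r1_answer, v3_answer):
            kept.append({"question": question, "cot": cot, "answer": v3_answer})
    return kept

print(generate_cot_pairs("Triangle ABC has AB = 3, BC = 4, angle B = 90 degrees."))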
This reverse approach starts from known information and recursively builds a reasoning chain, reducing Q&A generation complexity and hallucinations. R1's strong reasoning and exploration capabilities yield diverse conclusions, enriching the variety of Q&A pairs.

Forward validation. To ensure the correctness of Q&A pairs and generate step-by-step CoT reasoning, we re-input the questions and text-full conditions into DeepSeek V3, requesting both the reasoning process and the final answer, and cross-validate these answers against those from R1 to include only consistent pairs in our final dataset. This process guarantees answer correctness and yields diverse, valid CoT without an extensive search space.

4 Experiments

We present a series of experiments to investigate the following four research questions:

• Efficacy – To what extent does training on our synthesized dataset in both RL and SFT improve the geometric reasoning performance of several MLLMs?
• Efficiency – Is the data generated by NeSyGeo more effective in yielding models with better performance compared to using data from existing automatic synthesis frameworks?
• Diversity – Does the NeSyGeo framework effectively ensure diversity across both the text and image spaces of the generated synthetic geometric dataset?
• Visual Effectiveness – Can our datasets compel models to effectively utilize visual information for enhanced understanding by appropriately distributing information between modalities?

4.1 Experimental Setup

Dataset. To synthesize a diverse dataset, we set hyperparameters for generating images by configuring the step count, length range, and angle range. The step count N ranges from one to four. The line length is defined within the basic range [l_min, l_max] = [1, 5], scalable by any multiple of 2 (e.g., [4, 20]). The angle is constrained to multiples of 15° within [15°, 165°], with increased weights assigned to special angles. We generate various types of weight matrices A and I by adjusting their corresponding values. This process yields a NeSyGeo-CoT dataset with 30k Q&A pairs and a NeSyGeo-Caption dataset with 70k Q&A pairs. Additional dataset statistics are provided in Appendix B.

Evaluation. Our evaluation is conducted on several benchmarks: the Test set of GeoQA [4], the Test_MINI set of MathVision [26], and MathVerse [36]. For the MathVerse benchmark, we select the Vision Only, Vision Dominant, and Vision Intensive sets to better assess the visual perception and logical reasoning capabilities of MLLMs. We extract in-domain metrics from other datasets, including angle, area, length, and Plane Geometry, to effectively evaluate the models' capabilities in geometric reasoning problems, in addition to the GeoQA dataset, which focuses entirely on plane geometry. For GeoQA, we employed hard-coded extraction for comparison, while other evaluations are assessed using the automated VLMEvalKit framework [8]. Appendix C provides additional experimental details and evaluation results.

4.2 Empirical Results

Efficacy: Training multiple MLLMs with NeSyGeo via both SFT and RL significantly enhances geometric problem-solving performance. We first sample 4k samples from our NeSyGeo-CoT dataset and apply the Group Relative Policy Optimization (GRPO) algorithm to train for two epochs with DeepSeek R1's format and answer rewards. The training code framework is based on VLM-R1 [22]. As shown in Table 2, models achieve the best performance among the baselines after training. InternVL2.5-4B significantly improved in the angle knowledge domain with gains of 8.4 (MathVerse), 7.3
(GeoQA), and 5.3 (MathVision). Qwen2.5-VL-3B achieved a +15.8 performance boost in the area domain of MathVision. Notably, across all evaluated metrics, the InternVL2.5-4B model trained on the NeSyGeo dataset achieves performance on par with or superior to its 8B counterpart.

We also conducted SFT experiments, initially training on our NeSyGeo-Caption dataset to enhance the models' perception of geometric images, followed by training on the NeSyGeo-CoT dataset to improve reasoning capabilities. The experiments were conducted on the LLaMA-Factory [40] framework. Evaluation results on MathVerse (Vision Intensive) and GeoQA are presented in Table 3. The trained model demonstrates performance improvements over the base model on most metrics.

Efficiency: Under the same data budget, the generated data from our framework is better than that from popular automatic generation frameworks. We randomly sampled 4k samples from MAVIS [37] and R-CoT [7], which are automatic frameworks for geometry problem generation. To ensure a fair comparison, we maintained consistent settings.

Table 2: RL performance comparison: Models trained with only 4k samples of NeSyGeo-CoT show performance gains over the base models, with the InternVL2.5-4B model exceeding the 8B variant in geometry problem-solving.

Model | GeoQA | MathVision: Angle, Area, Length | MathVerse: Angle, Area, Length, Plane Geometry
Qwen2.5-VL-3B | 53.3 | 26.3, 26.3, 21.1 | 31.3, 20.9, 37.0, 32.5
Qwen2.5-VL-3B+NeSyGeo | 55.7 (+2.4) | 26.3 (+0.0), 42.1 (+15.8), 26.3 (+5.2) | 32.6 (+1.3), 23.5 (+2.6), 37.2 (+0.2), 35.5 (+3.0)
InternVL2.5-4B | 61.9 | 36.8, 31.6, 26.3 | 31.5, 22.7, 31.9, 30.7
InternVL2.5-4B+MAVIS | 63.5 (+1.6) | 31.6 (-5.2), 26.3 (-5.3), 31.6 (+5.3) | 37.1 (+5.6), 20.9 (-1.8), 35.3 (+3.4), 33.7 (+3.0)
InternVL2.5-4B+R-CoT | 63.3 (+1.4) | 31.6 (-5.2), 31.6 (+0.0), 21.1 (-5.2) | 31.2 (-0.3), 18.3 (-4.4), 34.3 (+2.4), 28.7 (-2.0)
InternVL2.5-4B+NeSyGeo | 69.2 (+7.3) | 42.1 (+5.3), 36.8 (+5.2), 26.3 (+0.0) | 39.9 (+8.4), 24.9 (+2.2), 36.1 (+4.2), 36.7 (+6.0)
InternVL2.5-8B | 66.2 | 36.8, 36.8, 21.1 | 36.9, 23.1, 34.8, 36.6

Table 3: SFT performance comparison: The trained model demonstrates performance improvements over the base model on most metrics.

Model | GeoQA | Vision Intensive: Angle, Area, Length, Plane Geometry
Qwen2.5-VL-7B | 69.4 | 43.0, 27.5, 46.2, 44.1
Qwen2.5-VL-7B+NeSyGeo | 71.8 (+2.4) | 46.1 (+3.1), 23.1 (-4.4), 49.5 (+3.3), 46.7 (+2.6)
LLaVA-Next-7B | 22.6 | 28.5, 6.6, 16.5, 20.4
LLaVA-Next-7B+NeSyGeo | 26.1 (+3.5) | 30.6 (+2.1), 7.7 (+1.1), 19.2 (+2.7), 22.9 (+2.5)

As illustrated in Figure 5 and Table 2, while all datasets help the model improve over the baseline, our dataset outperformed others in most metrics, validating its superior performance.

Diversity: The dataset synthesized by our NeSyGeo framework exhibits high diversity in both text and visual features. A critical challenge for automatic data synthesis methods is whether the dataset is sufficiently diverse to avoid quality degradation due to potential overfitting risks from inherent domain constraints. We employ t-SNE [24] dimensionality reduction for mapping in text space to evaluate the diversity across different methods. This analysis allows us to assess the diversity of the textual descriptions themselves. Given that the text conditions in geometric problems depict specific visual elements, the diversity observed in the text space also serves as a valuable indicator of the diversity in the corresponding visual diagrams. To ensure a fair comparison, we remove all prompts related
to guiding large models, retaining only condition and question texts, and randomly sample 5k texts from each dataset. The results are illustrated in Figure 6. Our method and G-LLaVA [9] exhibit uniformly distributed features in the space, indicating low data overlap and high diversity. In contrast, R-CoT and MAVIS display varying degrees of clustered distribution, indicating more feature-similar samples. To directly assess the diversity of the visual features, we also performed t-SNE on image features extracted by ResNet, with detailed experimental results presented in Appendix C.

Figure 5: Efficiency comparison of our NeSyGeo-CoT dataset versus other mainstream automated synthesis datasets. The models are trained using RL methods with InternVL2.5-4B.

Figure 6: t-SNE of the text features of different automatic frameworks. The G-LLaVA method augments the text space of the manually annotated GeoQA dataset; thus, its text diversity can approximate that of real data more closely. Similar to G-LLaVA, our method exhibits a uniform distribution in the space, demonstrating superior diversity.

Table 4: Comparison between NeSyGeo-CoT and text-redundant datasets with equivalent data budgets. Models are trained via RL on InternVL2.5-4B. The results show that our dataset improves models' visual perception and logical reasoning capabilities.

Dataset | Text Dominant: Angle, Area, Length, Plane Geometry | Vision Only: Angle, Area, Length, Plane Geometry
Base | 47.1, 27.5, 43.4, 44.1 | 24.4, 20.9, 29.1, 27.3
NeSyGeo | 49.2, 28.6, 44.0, 45.5 | 29.0, 27.5, 34.1, 31.8
NeSyGeo+RED | 52.8, 27.5, 44.5, 45.1 | 27.5, 25.3, 33.0, 30.2
R-CoT | 51.8, 24.2, 45.0, 46.5 | 25.9, 23.1, 31.3, 29.2

Visual Perception: Models trained on NeSyGeo data show modest gains over information-redundant datasets when textual shortcuts exist, yet achieve substantial improvements when image understanding is necessary. This indicates that our data enhance not only the model's logical reasoning but also its image perception and utilization.

A key question is whether reducing redundancy and forcing models to extract visual information improves their geometric reasoning capabilities. We evaluate models on MathVerse (Text Dominant), which provides redundant text descriptions and implicit properties enabling reasoning without images, and on the MathVerse (Vision Only) version, where all information is embedded entirely within the images. For this comparison, we selected two text-redundant datasets: NeSyGeo+RED, the original NeSyGeo-CoT dataset supplemented with textual equivalents of its image annotations, and the R-CoT dataset. Results presented in Table 4 show that our model outperforms the baseline on Text Dominant but lags behind other datasets in some metrics. On Vision Only, our model surpasses them across all metrics, demonstrating enhanced geometric reasoning and visual perception.

5 Conclusion

This paper introduces NeSyGeo, a neuro-symbolic framework for automatically synthesizing multimodal geometric datasets. Our approach transforms the generation process into a controllable symbolic space using Geo-DSL, maps the symbolic representation back to image and natural language spaces via a conversion engine, and then utilizes LLMs for backwards search and forward solving to produce Q&A pairs. Using this framework, we construct the NeSyGeo-CoT and NeSyGeo-Caption datasets, totalling 100k samples. We also propose NeSyGeo-Test, a comprehensive benchmark for evaluating MLLMs' geometric reasoning capabilities.
Our datasets significantly and consistently improve the reasoning
abilities of multiple MLLMs through both SFT and RL.

Future Work: We intend to extend NeSyGeo to other multimodal domains, such as analytical geometry and visual question answering. This extensibility will be achieved by defining new domain-specific languages, corresponding synthesis rules within the symbolic space, and tailored conversion engines. Furthermore, we plan to develop an automated symbolic solver capable of conducting search and validation directly within the symbolic space. This would remove reliance on LLMs, potentially reducing generation costs and ensuring complete correctness of the datasets.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, pages 23716–23736, 2022.

[3] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.

[4] Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P Xing, and Liang Lin. GeoQA: A geometric question answering benchmark towards multimodal numerical reasoning. In Findings of the Association for Computational Linguistics, pages 513–523, 2021.

[5] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024.

[6] DeepSeek-AI, Daya Guo, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[7] Linger Deng, Yuliang Liu, Bohan Li, Dongliang Luo, Liang Wu, Chengquan Zhang, Pengyuan Lyu, Ziyang Zhang, Gang Zhang, Errui Ding, et al. R-CoT: Reverse chain-of-thought problem generation for geometric reasoning in large multimodal models. arXiv preprint arXiv:2410.17885, 2024.

[8] Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. VLMEvalKit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the ACM International Conference on Multimedia, pages 11198–11201, 2024.

[9] Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong. G-LLaVA: Solving geometric problem with multi-modal large language model. In The 13th International Conference on Learning Representations, 2025.

[10] Zihan Huang, Tao Wu, Wang Lin, Shengyu Zhang, Jingyuan Chen, and Fei Wu. AutoGeo: Automating geometric image dataset creation for enhanced geometry understanding. IEEE Transactions on Multimedia, 2025.

[11] Yifan Jiang, Jiarui Zhang, Kexuan Sun, Zhivar Sourati, Kian Ahrabian, Kaixin Ma, Filip Ilievski, and Jay Pujara. Marvel: Multidimensional abstraction and reasoning through visual evaluation and learning. In Advances in Neural Information Processing Systems, pages 46567–46592, 2024.
[12] Mehran Kazemi, Hamidreza Alvari, Ankit Anand, Jialin Wu, Xi Chen, and Radu Soricut. GeomVerse: A systematic evaluation of large models for
geometric reasoning. In AI for Math Workshop at the International Conference on Machine Learning, 2024.

[13] Ryan Krueger, Jesse Michael Han, and Daniel Selsam. Automatically building diagrams for olympiad geometry problems. In International Conference on Automated Deduction, pages 577–588, 2021.

[14] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In Advances in Neural Information Processing Systems, pages 9694–9705, 2021.

[15] Yaoyuan Liang, Zhuojun Cai, Jian Xu, Guanbo Huang, Yiran Wang, Xiao Liang, Jiahao Liu, Ziran Li, Jingang Wang, and Shao-Lun Huang. Unleashing region understanding in intermediate layers for MLLM-based referring expression generation. In Advances in Neural Information Processing Systems, pages 120578–120601, 2024.

[16] Weifeng Lin, Xinyu Wei, Ruichuan An, Peng Gao, Bocheng Zou, Yulin Luo, Siyuan Huang, Shanghang Zhang, and Hongsheng Li. Draw-and-understand: Leveraging visual prompts to enable MLLMs to comprehend what you want. In The 13th International Conference on Learning Representations, 2025.

[17] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances in Neural Information Processing Systems, pages 34892–34916, 2023.

[18] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-GPS: Interpretable geometry problem solving with formal language and symbolic reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, pages 6774–6786, 2021.

[19] Shuai Peng, Di Fu, Yijun Liang, Liangcai Gao, and Zhi Tang. GeoDRL: A self-learning framework for geometry problem solving using reinforcement learning in deductive reasoning. In Findings of the Association for Computational Linguistics, pages 13468–13480, 2023.

[20] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, William Trogden, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, pages 8748–8763, 2021.

[21] Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1466–1476, 2015.

[22] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. VLM-R1: A stable and generalizable R1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025.

[23] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, pages 802–809, 2024.

[24] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, pages 2579–2605, 2008.

[25] Alex Jinpeng Wang, Linjie Li, Yiqi Lin, Min Li, Lijuan Wang, and Mike Zheng Shou. Leveraging visual tokens for extended text contexts in multi-modal learning. In Advances in Neural Information Processing Systems, pages 14325–14348, 2024.

[26] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with the MATH-Vision dataset. In Advances in Neural Information Processing Systems,
pages 95095–95169, 2024.

[27] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.

[28] Mingrui Wu, Xinyue Cai, Jiayi Ji, Jiale Li, Oucheng Huang, Gen Luo, Hao Fei, Guannan Jiang, Xiaoshuai Sun, and Rongrong Ji. ControlMLLM: Training-free visual prompt learning for multimodal large language models. In Advances in Neural Information Processing Systems, pages 45206–45234, 2024.

[29] Liangyu Xu, Yingxiu Zhao, Jingyun Wang, Yingyao Wang, Bu Pi, Chen Wang, Mingliang Zhang, Jihao Gu, Xiang Li, Xiaoyong Zhu, et al. GeoSense: Evaluating identification and application of geometric principles in multimodal reasoning. arXiv preprint arXiv:2504.12597, 2025.

[30] Yibo Yan, Jiamin Su, Jianxiang He, Fangteng Fu, Xu Zheng, Yuanhuiyi Lyu, Kun Wang, Shen Wang, Qingsong Wen, and Xuming Hu. A survey of mathematical reasoning in the era of multimodal large language model: Benchmark, method & challenges. arXiv preprint arXiv:2412.11936, 2024.

[31] Yibo Yan, Shen Wang, Jiahao Huo, Jingheng Ye, Zhendong Chu, Xuming Hu, Philip S Yu, Carla Gomes, Bart Selman, and Qingsong Wen. Position: Multimodal large language models can significantly advance scientific reasoning. arXiv preprint arXiv:2502.02871, 2025.

[32] Jiarui Zhang, Ollie Liu, Tianyu Yu, Jinyi Hu, and Willie Neiswanger. Euclid: Supercharging multimodal LLMs with synthetic high-fidelity visual descriptions. arXiv preprint arXiv:2412.08737, 2024.

[33] Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. MLLMs know where to look: Training-free perception of small visual details with multimodal LLMs. arXiv preprint arXiv:2502.17422, 2025.

[34] Jiaxin Zhang, Zhong-Zhi Li, Ming-Liang Zhang, Fei Yin, Cheng-Lin Liu, and Yashar Moshfeghi. GeoEval: Benchmark for evaluating LLMs and multi-modal models on geometry problem-solving. In Findings of the Association for Computational Linguistics, pages 1258–1276, 2024.

[35] Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and Cheng-Lin Liu. Plane geometry diagram parsing. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, pages 1636–1643, 2022.

[36] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, and Hongsheng Li. MathVerse: Does your multi-modal LLM truly see the diagrams in visual math problems? In European Conference on Computer Vision, pages 169–186, 2024.

[37] Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Yichi Zhang, Ziyu Guo, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, Shanghang Zhang, et al. MAVIS: Mathematical visual instruction tuning. arXiv preprint arXiv:2407.08739, 2024.

[38] Xiaokai Zhang, Na Zhu, Cheng Qin, Yang Li, Zhenbing Zeng, and Tuo Leng. FGeo-HyperGNet: Geometric problem solving integrating formal symbolic system and hypergraph neural network. In Proceedings of the 34th International Joint Conference on Artificial Intelligence, 2025.

[39] Junbo Zhao, Ting Zhang, Jiayu Sun, Mi Tian, and Hua Huang. Pi-GPS: Enhancing geometry problem solving by unleashing the power of diagrammatic information. arXiv preprint arXiv:2503.05543, 2025.

[40] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for
Computational Linguistics, pages 400–410, 2024.

A Comparison with Specific Examples of Popular Geometry Datasets

To facilitate comparison of dataset characteristics synthesized by our method and other popular approaches, we showcase a randomly selected example from NeSyGeo-CoT alongside each of the different approaches in Figure 7. Geometry-3K is a manually synthesized dataset, while the remaining approaches employ automatic generation techniques. To ensure a fair comparison, we standardize the text format by removing model-guiding prompts and appending options when present. Furthermore, we annotate each sample with image pixels and CoT word counts.

Compared to other datasets, our dataset features clear, human-aesthetically pleasing images, high-quality step-by-step reasoning chains, symbolic-form meta-information enabling subsequent image augmentation and mutation, and well-distributed conditional information between images and text. Additional examples of our NeSyGeo-CoT dataset can be found in Figure 11 (Appendix D).

Figure 7: Comparison of the NeSyGeo-CoT dataset with other popular geometry datasets. Geometry-3K is a manually synthesized dataset, while the remaining approaches employ automatic generation techniques. Our dataset features clear, human-aesthetically pleasing images, high-quality step-by-step reasoning chains, symbolic-form meta-information enabling subsequent image augmentation and mutation, and well-distributed conditional information between images and text.

B Statistics of NeSyGeo-Caption and NeSyGeo-CoT

Detailed numerical statistics and element distribution for the NeSyGeo-Caption and NeSyGeo-CoT datasets are presented in Tables 5 and 6. For element distribution statistics, we randomly sampled 1.8k Geo-DSL sequences corresponding to images from each dataset, counting the frequency of different geometric elements. To facilitate interpretation, these elements are converted into corresponding natural language descriptions.

Table 5: Statistics of NeSyGeo-Caption
Total number of images: 70k
Total number of captions: 70k
DSL statement percentage: one statement 5.4%; two statements 25.8%; three statements 34.7%; four statements 23.4%; five statements 10.7%
Length of captions: maximum 220 words; minimum 34 words; average 73.3 words (385.4 characters)
Average image dimensions: 723.1 × 724.0 pixels

Table 6: Statistics of NeSyGeo-CoT
Total number of images: 15.3k
Total number of Q&A pairs: 30.1k
Question statistics: length-based type 54.6%; area-based type 34.4%; angle-based type 11.1%; average length 26.9 words (140.6 characters)
CoT statistics: below four steps 35.4%; four steps or above 64.5%; average length 91.8 words (365.9 characters)
Average image dimensions: 731.0 × 727.4 pixels

Figure 8: Frequency of different geometric elements. To facilitate interpretation, these elements are converted into corresponding natural language descriptions.

Table 7: Detailed RL experiment evaluation on MathVerse. Here, 'AGL', 'ARA', 'LTH', and 'PG' denote angle, area, length, and plane geometry, respectively.
                      Vision Intensive          Vision Dominant           Vision Only
Model                 AGL   ARA   LTH   PG      AGL   ARA   LTH   PG      AGL   ARA   LTH   PG
Qwen2.5-VL-3B         31.6  22.0  34.6  33.3    31.6  17.6  40.7  31.4    30.6  23.1  35.7  32.7
InternVL2.5-4B        36.8  22.0  33.0  31.8    33.2  25.3  33.7  32.9    24.4  20.9  29.1  27.3
InternVL2.5-8B        44.0  23.1  36.3  41.8    40.4  20.9  36.8  37.3    26.4  25.3  31.3  30.6
Qwen2.5-VL-3B+RL      33.7  23.1  36.8  34.7    32.1  20.9  37.4  36.6    32.1  26.4  37.4  35.1
InternVL2.5-4B+RL     45.6  20.9  37.9  40.2    45.1  26.4  36.2  38.2    29.0  27.5  34.1  31.8

Figure 10: T-SNE of the image features of
different automatic frameworks. Our datasets exhibit uniform feature distributions, underscoring the substantial visual diversity of images generated by the NeSyGeo framework.

C Additional Experimental Details and Results

We used the VLM-R1 [22] framework for the RL experiments, conducted on 6 vGPUs with 32 GB of memory each. We set the number of epochs to 2, the number of generations to 6, and the batch size to 1. To enhance the visual perception capabilities of MLLMs, the parameters of both the language model and the vision modules are set to be trainable. For the SFT experiments, we employed the LLaMA-Factory [40] framework on 2 A800 GPUs with LoRA, using a learning rate of 1×10⁻⁵, a LoRA rank of 64, and Adam optimization. Training on NeSyGeo-Caption used 1 epoch, while NeSyGeo-CoT used 2 epochs; a minimal configuration sketch is given at the end of this section.

Table 7 presents the detailed performance of models trained on various automatically synthesized datasets on the MathVerse benchmark. Models trained using our dataset demonstrate superior performance on most metrics compared to others, exhibiting substantial performance gains relative to the base model. As shown in Figure 9, we also evaluated model performance as RL training steps increased when using NeSyGeo-CoT. Most metrics improved with more training steps, demonstrating the robustness and effectiveness of our datasets.

To investigate visual diversity directly across datasets, we randomly sampled 1k images from each, extracted features using ResNet, and visualized them via t-SNE. As illustrated in Figure 10, G-LLaVA, which augmented only the textual components of its base dataset, displays a distinctly non-uniform distribution in the image feature space. Conversely, our method exhibits uniform feature distributions, underscoring the substantial visual diversity of images generated by our approach.

Figure 9: Model performance on MathVerse as the RL training steps increase. With InternVL2.5-4B as our base model, most metrics exhibit progressive improvement throughout training, demonstrating the robustness and effectiveness of our datasets.
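To make the training configuration above concrete, the following is a minimal sketch of an equivalent LoRA fine-tuning setup written directly against Hugging Face transformers and peft rather than the LLaMA-Factory framework actually used; the model name and training dataset are illustrative placeholders, and only the hyperparameters reported above (learning rate 1×10⁻⁵, LoRA rank 64, batch size 1, 2 epochs for NeSyGeo-CoT) are taken from the text.

# Minimal LoRA SFT sketch mirroring the reported hyperparameters.
# This is NOT the LLaMA-Factory invocation used in the paper; model_name
# and train_dataset are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "your-base-model"  # placeholder for the MLLM checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA rank 64 as reported; alpha and dropout are assumed values.
model = get_peft_model(
    model, LoraConfig(r=64, lora_alpha=128, lora_dropout=0.05, task_type="CAUSAL_LM")
)

args = TrainingArguments(
    output_dir="nesygeo-cot-sft",
    learning_rate=1e-5,              # reported learning rate
    num_train_epochs=2,              # 2 epochs for NeSyGeo-CoT (1 for NeSyGeo-Caption)
    per_device_train_batch_size=1,   # reported batch size
    optim="adamw_torch",             # Adam-style optimizer, as reported
)

# train_dataset: a tokenized dataset of Q&A pairs, prepared separately (not shown).
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()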
D More Examples of NeSyGeo-CoT Dataset

Figure 11: Examples of the NeSyGeo-CoT dataset. Each sample comprises a symbolic image definition based on our Geo-DSL language, a high-quality annotated image, a concise text caption, diverse Q&A pairs, and a detailed step-by-step reasoning process.

We present more examples from the NeSyGeo-CoT dataset in Figure 11. Our bidirectional conversion engine can generate high-quality visual images from a symbolic form based on our Geo-DSL language. To further enhance image diversity, the engine introduces variability by randomly selecting values for the unit length and applying random rotations to the generated diagrams during creation. While other visual attributes, such as element and background colours, could also be randomized, they were set to default values in our current synthesis process. Our symbolic language helps identify parts of the image, and our conversion process ensures the images are geometrically correct.

Due to LLMs' powerful search and reasoning capabilities, we obtain diverse Q&A pairs concerning properties such as lengths, angles, and areas, alongside high-quality step-by-step CoT. This process, involving backward search across the geometric space defined by the symbolic form and forward validation, ensures the correctness of the numerical answers, thereby enriching textual diversity.

To support evaluation and training paradigms such as curriculum learning, we annotate each sample with a difficulty level. Given that geometric reasoning tasks primarily require models' image perception and logical reasoning capabilities, we define the difficulty level as

difficulty = 0.3 × perception difficulty + 0.7 × reasoning difficulty,    (1)

where perception difficulty is the number of Geo-DSL statements, and reasoning difficulty is the number of reasoning steps.
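Equation (1) is straightforward to compute from a sample's meta-information; the following is a minimal sketch (the function name and example values are ours, not from the released code):

def difficulty_score(num_dsl_statements: int, num_reasoning_steps: int) -> float:
    """Difficulty level from Equation (1): perception difficulty is the
    number of Geo-DSL statements, reasoning difficulty is the number of
    reasoning steps in the CoT."""
    return 0.3 * num_dsl_statements + 0.7 * num_reasoning_steps

# Example: a sample described by 3 Geo-DSL statements whose solution takes 5 steps.
print(difficulty_score(3, 5))  # 0.3 * 3 + 0.7 * 5 = 4.4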
Each synthesized sample includes detailed meta-information stored as a symbolic form based on our Geo-DSL language. This symbolic form accurately describes the geometric setup and offers promising directions for future research. For instance, valid geometric configurations could be generated by augmenting or mutating existing symbolic forms within constrained parametric bounds.

E Details of NeSyGeo-Test Benchmark

Our NeSyGeo-Test benchmark comprises 2668 Q&A pairs. Consistent with the training set, numerical annotations are embedded in the image space, with only essential conditions and questions provided in the text. The dataset is categorized by the type of numerical quantity sought: Angle (658 pairs), Shape (730 pairs), and Length (1280 pairs). The Shape type includes shape area and perimeter, while the Length type includes edge and arc lengths. Based on the difficulty level defined in Appendix D, problem difficulty is divided into three levels: Easy (1537 pairs), Medium (908 pairs), and Hard (223 pairs). Evaluation results on current mainstream open-source and closed-source MLLMs are shown in Table 8.

Table 8: NeSyGeo-Test benchmark on several mainstream MLLMs. The highest accuracy for open-source and closed-source MLLMs is marked in red and blue, respectively.

                                      Task Type              Question Difficulty
Model                      Param   Angle  Shape  Length    Easy   Medium  Hard    Total
Open-source MLLMs
Qwen2.5-VL-3B-Instruct     3B      44.1   31.8   27.8      30.5   29.1    31.9    30.9
Qwen2.5-VL-7B-Instruct     7B      38.3   30.7   32.2      36.8   27.5    32.3    43.3
InternVL2.5-4B             4B      53.8   53.1   53.5      62.9   49.4    32.0    53.4
InternVL2.5-8B             8B      55.9   56.2   55.4      64.5   49.4    43.3    55.8
LLaVA-NeXT-7B              7B      23.7   15.5   15.4      18.6   14.7    13.3    6.4
LLaVA-NeXT-13B             13B     15.7   15.6   16.0      15.8   16.0    15.2    15.8
LLaVA-NeXT-34B             34B     23.7   21.0   18.5      19.1   21.5    19.2    19.9
Closed-source MLLMs
InternVL3-latest           –       81.7   65.2   68.3      77.8   62.0    56.7    68.7
GPT-4o-mini                –       58.7   62.1   55.7      63.0   54.0    41.7    58.2
Claude-3.5-Sonnet-latest   –       68.8   78.0   71.0      77.8   73.2    56.5    74.5
Qwen-VL-plus               –       38.5   29.6   31.7      36.6   27.2    29.6    32.8
Gemini-2.0-Flash           –       36.8   60.4   67.7      54.3   63.8    61.0    58.1

F Limitations

• Limited Training Paradigm: Our current evaluation of the NeSyGeo dataset relies on a simple training paradigm to assess its efficacy for automated data generation. This approach lacks advanced training strategies, such as CLIP alignment or curriculum learning, which restricts the development of specialized models optimized for geometric reasoning tasks.

• Restricted Domain Scope: The NeSyGeo framework is currently tailored to plane geometry, limiting its generalizability to other domains. However, we believe that for multimodal datasets in other domains, we can similarly achieve synthesis by defining symbolic statements, shifting the synthesis process to a controllable symbolic space, and constructing a symbolic-to-image engine, which we plan to explore in future work.

• Dependency on External APIs: The construction of Q&A pairs in this study partly relies on the reasoning capabilities of LLMs. This dependency increases generation costs and introduces potential inconsistencies. We aim to develop an automated solver that conducts search and validation directly within the symbolic space, thereby removing reliance on LLMs. This could further reduce costs and ensure complete rigor.
G Details of Prompts in Reverse Search and Forward Validation

In our automatic synthesis framework, we employ DeepSeek R1 as the expert LLM for reverse search and DeepSeek V3 for forward validation. The specific prompts are detailed in Figures 12 and 13, respectively. Note that the blue text in these prompts is substituted with actual content.

H Detailed Definition of Geo-DSL

Geo-DSL adopts an entity-relation-constraint framework to define geometric elements in plane geometry, encompassing 13 types of points, 7 types of lines, 3 types of angles, and 14 types of shapes. Representative examples of symbolic statements and their corresponding natural language descriptions are illustrated in Figures 18, 15, 16, and 17. With a single statement, Geo-DSL uniquely specifies spatial elements, ensuring the accuracy of geometric synthesis while significantly facilitating parsing and transformation by our conversion engine. This language achieves comprehensive coverage of plane geometry, including numerical attributes such as lengths and angle measures, enabling precise and complete geometric representations. By streamlining definitions into concise statements, Geo-DSL reduces the complexity of symbolic processing, enhances the efficiency of the conversion engine, and supports seamless integration with neural synthesis pipelines. These advantages make Geo-DSL a robust and versatile solution for generating high-quality multimodal geometric reasoning data.

I Detailed Actions in Symbolic Spaces

As illustrated in Figure 14, we enumerate all statements within the action space defined by our Geo-DSL. The content within square brackets denotes annotations for each statement. As outlined in Algorithm 1, at each step we first generate three lengths, x, y, and z, along with an angle α, sampled from predefined ranges. Subsequently, based on the weight matrices A and I, we determine the specific statement to be selected. The chosen statement, paired with its corresponding numerical values, is then appended to the Geo-DSL sequence. For different types of actions, we provide a concrete example for each, highlighted with a gray background to indicate the available action space when the respective element is selected. A minimal sketch of this sampling loop is given after the figure captions below.

Figure 12: Prompt for DeepSeek R1 in reverse search.

Figure 13: Prompt for DeepSeek V3 in forward validation.

Figure 14: Detailed actions in symbolic spaces. Actions can be categorized into four parts based on the type of selected geometric element: line-based, point-based, shape-based, and angle-based.

Figure 15: Geo-DSL definitions of line.

Figure 16: Geo-DSL definitions of angle.

Figure 17: Geo-DSL definitions of shape.

Figure 18: Geo-DSL definitions of point.
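As a companion to Appendix I and Algorithm 1, here is a minimal sketch of one synthesis step; the statement names, value ranges, and the flattening of the weight matrices A and I into a single weight vector are illustrative assumptions, not the exact algorithm.

import random

# Illustrative stand-ins for the Geo-DSL action space enumerated in Figure 14.
ACTIONS = ["line_segment", "midpoint", "isosceles_triangle", "angle_bisector"]

def synthesis_step(sequence, weights):
    """One step of the sampling loop sketched in Algorithm 1: draw lengths
    x, y, z and an angle alpha from (assumed) predefined ranges, pick a
    statement according to the weights, and append it with its values."""
    x, y, z = (round(random.uniform(1.0, 10.0), 2) for _ in range(3))
    alpha = round(random.uniform(15.0, 165.0), 1)  # degrees
    statement = random.choices(ACTIONS, weights=weights, k=1)[0]
    sequence.append((statement, x, y, z, alpha))

geo_dsl_sequence = []
weights = [0.4, 0.2, 0.3, 0.1]  # stand-in for the weight matrices A and I
for _ in range(3):
    synthesis_step(geo_dsl_sequence, weights)
print(geo_dsl_sequence)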
SHALLOW PREFERENCE SIGNALS: LARGE LANGUAGE MODEL ALIGNS EVEN BETTER WITH TRUNCATED DATA?

Xuan Qi∗2, Jiahao Qiu∗1, Xinzhe Juan3, Yue Wu†1, and Mengdi Wang†1

1 AI Lab, Princeton University
2 IIIS, Tsinghua University
3 Department of Computer Science & Engineering, University of Michigan

∗ Equal contribution. † Correspondence to: frankwupku@gmail.com, mengdiw@princeton.edu.

ABSTRACT

Aligning large language models (LLMs) with human preferences remains a key challenge in AI. Preference-based optimization methods, such as Reinforcement Learning with Human Feedback (RLHF) and Direct Preference Optimization (DPO), rely on human-annotated datasets to improve alignment. In this work, we identify a crucial property of the existing learning method: the distinguishing signal obtained in preferred responses is often concentrated in the early tokens. We refer to this as shallow preference signals.

To explore this property, we systematically truncate preference datasets at various points and train both reward models and DPO models on the truncated data. Surprisingly, models trained on truncated datasets, retaining only the first half or fewer tokens, achieve comparable or even superior performance to those trained on full datasets. For example, a reward model trained on a 40% truncated version of the Skywork-Reward-Preference-80K-v0.2 dataset outperforms one trained on the full dataset. This pattern is consistent across multiple datasets, suggesting the widespread presence of shallow preference signals.

We further investigate the distribution of the reward signal through decoding strategies. We consider two simple decoding strategies motivated by the shallow reward signal observation, namely Length Control Decoding and KL Threshold Control Decoding, which leverage shallow preference signals to optimize the trade-off between alignment and computational efficiency. Their performance is even better, which again validates our hypothesis.

The phenomenon of shallow preference signals highlights potential issues in LLM alignment: existing alignment methods often focus on aligning only the initial tokens of responses, rather than considering the full response. This could lead to discrepancies with real-world human preferences, resulting in suboptimal alignment performance. Our code is available at https://github.com/THUQiXuan/Shallow-preference-signal.

1 Introduction

Aligning large language models (LLMs) with human preferences is a core challenge in artificial intelligence (AI) research [1]. Preference datasets [2, 3, 4, 5] have played a critical role in addressing this challenge by capturing human judgments of model outputs. These datasets enable the identification and prioritization of responses that are more aligned with human expectations. Preference-based optimization techniques, such as Reinforcement Learning with Human Feedback (RLHF) [6] and Direct Preference Optimization (DPO) [7], rely on these datasets to refine the decision-making process of models.

Despite the promise of these methods, there are several challenges associated with them. Recent work [8, 9, 10, 11] has highlighted that reward models trained using RLHF may suffer from reward hacking. Factors such as response format, length, and even the inclusion of emojis can influence quality judgments, resulting in potential inaccuracies.

In this paper, we introduce a previously underexplored aspect of preference data.
Specifically, we observe that the signal indicating the superiority of the chosen response over the rejected one is not uniformly distributed across the entire response. In many cases, the relative quality of responses can be
determined from only the early portion of the response, or even just a few tokens, rather than requiring an evaluation of the entire response. We refer to this phenomenon as shallow preference signals. This observation suggests that preference-based optimization methods may not need to rely on the full response to effectively capture the distinguishing features of higher-quality responses.

Figure 1: An example illustrating the phenomenon of shallow preference signals, using the prompt "Which number is bigger, 9.11 or 9.9?". It demonstrates how the relative quality of two responses can be determined from the early portion of the response, or even from the first sentence. Training with only the initial part allows the model to capture most of the preference signals while conserving resources.

We hypothesize that focusing on the early portion of the response allows models to capture the most salient preference signals, resulting in more efficient training and potentially improved alignment performance. To test this hypothesis, we introduce a methodology where preference data is truncated at various positions and models are trained on these truncated datasets. We analyze the distribution of preference signals in response pairs and conduct systematic experiments to validate the hypothesis that models trained on truncated preference data perform comparably to models trained on the full dataset. This is confirmed for both reward models and models fine-tuned with DPO.

Our findings demonstrate that the distinguishing features between the chosen and rejected responses are concentrated in the early part of the response. In fact, models trained on truncated datasets, using only the first half or fewer tokens of each response, achieve similar, or even superior, performance compared to those trained on the full dataset. For instance, a reward model trained on the Skywork-Reward-Preference-80K-v0.2 [2] dataset achieves an accuracy of only 75.85% on RewardBench [12]. However, when the dataset is truncated to 50% and 40%, the accuracy increases to 75.88% and 76.35%, respectively. Even with a truncation to 25%, the accuracy remains at 69.92%. Similarly, a reward model trained on the RLHFlow-pair-data-v2-80K-wsafety¹ dataset achieves an accuracy of 65.96% on RewardBench. After truncating the dataset to 50% and 40%, the accuracy improves to 72.16% and 69.71%, respectively, with accuracy remaining at 62.44% for a 33% truncation.
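Producing such a truncated dataset is mechanically simple. The sketch below, under our own function names and assuming a preference pair stored with prompt/chosen/rejected fields, shows the token-level truncation applied identically to both responses:

from transformers import AutoTokenizer

# Tokenizer choice is illustrative; the reward-model experiments in
# Section 4 are built on gemma-2b-it.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

def truncate_response(text: str, ratio: float) -> str:
    """Keep only the first `ratio` fraction of the response's tokens."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    keep = max(1, int(len(ids) * ratio))
    return tokenizer.decode(ids[:keep])

def truncate_pair(example: dict, ratio: float = 0.4) -> dict:
    # Chosen and rejected responses are truncated at the same ratio.
    return {
        "prompt": example["prompt"],
        "chosen": truncate_response(example["chosen"], ratio),
        "rejected": truncate_response(example["rejected"], ratio),
    }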
Furthermore, our experiments suggest that the shallow preference signal phenomenon significantly impacts LLM content generation. Based on this observation, we find that simple strategies can perform well without needing complex decoding approaches. Recent work [13, 14, 15, 16] has proposed various decoding strategies, but our findings indicate that by focusing on the early portion of the response, we can achieve an optimal trade-off between reward and KL divergence. To test this, we explore two decoding strategies, Length Control Decoding and KL Threshold Control Decoding, to see whether the early-token bias observed during training affects generation at inference time. Our results show that the differences between the DPO model trained on full preference data and the reference model are most noticeable in the early tokens of the generated response. As more of the response is generated, the difference decreases. This suggests that the reward signal in DPO training is concentrated in the early tokens, rather than being evenly distributed. [17] also explores token distribution differences between base LLMs and aligned models, though their method primarily focuses on in-context learning, avoiding parameter fine-tuning.

¹ https://huggingface.co/datasets/RLHFlow/pair_data_v2_80K_wsafety

Meanwhile, the findings of this paper may shed light on existing problems in LLM alignment. Our experiments validate that current alignment methods often focus on aligning the earlier tokens rather than the full response. The latter portions of answers generated by an LLM tend to be produced by the auto-regressive mechanism itself and, in our decoding experiments, do not exhibit significant quality variation. Through extensive experiments, we validate our hypothesis that focusing on the early portion of the response allows models to capture the most salient preference signals, resulting in more efficient training and potentially improved alignment performance. However, alignment with truncated data is shallow alignment: it improves performance on metrics but may drift further from real-world alignment with human values. [18] raises a related issue, but their work is confined to safety alignment and does not extend to the broader alignment challenges present in LLMs. Our work validates the phenomenon more systematically and extensively.

In summary, the main contributions of our paper are as follows:

1. We introduce and systematically validate the phenomenon of shallow preference signals, demonstrating that the distinguishing features between high-quality and low-quality responses are often concentrated in the early portion of the response.

2. We show that training reward models and DPO models on truncated responses, using only the early portion, achieves performance comparable to or better than training on full responses. This finding holds across multiple datasets and supervision settings.

3. We provide a new perspective on the limitations of current alignment pipelines. Specifically, we suggest that current alignment methods face the limitation of shallow alignment, emphasizing that alignment should go beyond just aligning a few tokens and consider full responses for more effective results.

2 Related Works

2.1 LLM Alignment with Human Preference

Aligning the outputs of large language models with human preferences is a crucial problem in the field of LLMs [1].
One of the most notable advancements in this area is Reinforcement Learning from Human Feedback (RLHF) [19, 6], which has led to the development of cutting-edge language models such as
GPT-4o [20], Gemini-2.0 [21], and Llama-3.1-70B-Instruct [22]. The traditional RLHF approach involves training a reward model to score the outputs of the language model, followed by fine-tuning using deep reinforcement learning algorithms like Proximal Policy Optimization (PPO) [5]. However, PPO faces challenges in alignment tasks due to its complexity, instability, and inefficiency [23, 24]. Several works have sought to improve the RLHF paradigm from various angles in order to better align LLMs with human preferences [25, 26, 27, 28, 29]. Among these, Direct Preference Optimization (DPO) [7] has gained significant attention, as it directly optimizes a policy using chosen and rejected pairs.

2.2 Reward Model

The reward model plays a critical role in RLHF [19, 6]. Traditional reward models are often assumed to follow a Bradley-Terry model [30], which provides a score for an entire output to indicate its preference [31, 19, 6]. However, the Bradley-Terry model has limitations, particularly its inability to handle complex or intransitive preferences [32, 33, 34]. Some works have addressed this issue by discarding the Bradley-Terry assumption and instead modeling the probability that one response is preferred over another [35, 36, 37]. Additionally, other approaches have explored the construction of multi-objective reward models to capture human preferences more comprehensively [38, 39, 40]. Furthermore, some studies have proposed process reward models [41, 42, 43] or step-wise reward models [44], which have shown promising results, especially in reasoning tasks.

2.3 Reward Hacking

Reward hacking refers to the situation in which an agent or model optimizes a proxy reward that deviates from the true objective, leading to suboptimal or even undesirable behavior [45]. This phenomenon has been widely studied across various environments such as grid-worlds, Atari games, and text generation tasks [46, 47, 48]. Prior research has focused on categorizing different forms of reward hacking and developing mitigation strategies, such as regularizing policy optimization [49], imposing a KL divergence penalty [50], and applying model merging techniques to either the policy or reward model [8]. Despite these efforts, existing approaches have notable limitations. In response, recent studies have introduced new definitions and strategies for mitigating reward hacking, including the concept of "hackability" [45] and the use of information-theoretic reward modeling [50]. Furthermore, the application of reward hacking techniques to language models has been explored, particularly in improving the sample efficiency of preference learning [51]. In contrast to these prior approaches, our work mitigates a subset of reward hacking by truncating the model's responses and better aligning them with human preferences. This truncation process effectively reduces noise in the dataset, leading to improved accuracy. By removing certain noise components, our method can be seen as a novel approach to addressing reward hacking within the context of language models.

3 Methodology

In this section, we introduce the methodology used to investigate and optimize the reward signal in large language models (LLMs) based on preference data. We first formalize the location of the reward signal within a response, followed by the approach of training reward models and Direct Preference Optimization (DPO) models using truncated preference datasets. Lastly,
we describe the mixing strategy and the two novel decoding policies designed to improve model performance.

3.1 Formulation of Reward Signal Location

Consider a preference dataset containing pairs of responses, where one response is the chosen response and the other is the rejected response. The reward signal is defined as the inherent quality difference between these two responses. Let $r_{\text{cho}}(i)$ denote the chosen response for a given instance $i$, and $r_{\text{rej}}(i)$ denote the rejected response. The objective is to model the reward signal $R(i)$, which indicates the degree of preference for $r_{\text{cho}}(i)$ over $r_{\text{rej}}(i)$.

We hypothesize that the reward signal is concentrated in the early part of the response. To formalize this, let $r_{\text{cho}}(i) = [y_1, y_2, \ldots, y_T]$ and $r_{\text{rej}}(i) = [z_1, z_2, \ldots, z_T]$ represent the token sequences for the chosen and rejected responses, respectively, where $T$ is the total number of tokens in each response. We define the reward signal at each token position $t$ as the difference in the model's log-probability for the chosen and rejected responses at that position:

$$R_t(i) = \log p(y_t \mid x, y_{1:t-1}) - \log p(z_t \mid x, z_{1:t-1}),$$

where $x$ represents the input context, and $\log p(y_t \mid x, y_{1:t-1})$ is the log-probability of the token $y_t$ in the chosen response at position $t$, conditioned on the context $x$ and the preceding tokens $y_{1:t-1}$. Similarly, $\log p(z_t \mid x, z_{1:t-1})$ is the log-probability of the token $z_t$ in the rejected response at the same position.

We argue that the total reward signal $R(i)$ can be approximated as the cumulative sum of the reward signals up to a truncation point $t_k$:

$$R(i) = \sum_{t=1}^{t_k} R_t(i) = \log p(y_{1:t_k} \mid x) - \log p(z_{1:t_k} \mid x),$$

where $t_k$ represents the truncation point, beyond which the reward signal becomes less informative or introduces noise. This leads to the hypothesis that truncated responses up to position $t_k$ preserve most of the reward signal, enabling the training of effective reward models and DPO models without requiring the full response.

3.2 Training Reward Models and DPO Models with Truncated Preference Data

In this work, we investigate the effects of truncating the responses in preference datasets at various positions. Let $r_{\text{cho}}(i)^{\text{trunc}}$ and $r_{\text{rej}}(i)^{\text{trunc}}$ denote the truncated chosen and rejected responses, respectively, where truncation retains only the first $t_k$ tokens of each response:

$$r_{\text{cho}}(i)^{\text{trunc}} = [y_1, y_2, \ldots, y_{t_k}], \qquad r_{\text{rej}}(i)^{\text{trunc}} = [z_1, z_2, \ldots, z_{t_k}].$$

We train reward models on these truncated preference datasets. The reward model aims to predict the relative quality of responses given the truncated input. Specifically, we model the reward using the following formula:

$$P(y \succ y' \mid x) = \frac{\exp(r(y; x))}{\exp(r(y; x)) + \exp(r(y'; x))},$$

where $r(y; x)$ represents the reward function for response $y$ given the context $x$, and $P(y \succ y' \mid x)$ is the probability that response $y$ is preferred over $y'$. Although the reward model is trained on truncated responses, it is still able to assess the quality of full responses effectively by leveraging the reward function learned from the truncated portions.

Similarly, for Direct Preference Optimization (DPO), we fine-tune a base model on the truncated preference data. The DPO objective seeks to maximize the likelihood of the chosen response over the rejected response by minimizing the following loss:

$$\ell_{\text{DPO}}(x, y_w^{\text{trunc}}, y_l^{\text{trunc}}; \theta; \pi_{\text{ref}}) := -\log \sigma\left(\beta\left[\log \frac{\pi_\theta(y_w^{\text{trunc}} \mid x)}{\pi_{\text{ref}}(y_w^{\text{trunc}} \mid x)} - \log \frac{\pi_\theta(y_l^{\text{trunc}} \mid x)}{\pi_{\text{ref}}(y_l^{\text{trunc}} \mid x)}\right]\right),$$

where $\pi_\theta$ is the probability distribution generated by the model, $\pi_{\text{ref}}$ is the reference model's distribution, $y_w^{\text{trunc}}$ and $y_l^{\text{trunc}}$ represent the truncated winning and losing responses, and $\sigma$ is the sigmoid function. In our approach, we train the DPO model on truncated responses, but it is still capable of generating full responses and performing in regular dialogues. The truncation helps to focus on the most relevant tokens early in the response, reducing noise from irrelevant parts of the response.
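To ground the definitions above, here is a minimal sketch of computing the per-token reward signal $R_t(i)$ and the Bradley-Terry training loss. It assumes a Hugging Face causal LM, and it glosses over tokenization edge cases at the prompt/response boundary; function names are ours.

import torch
import torch.nn.functional as F

def response_logprobs(model, tokenizer, prompt: str, response: str) -> torch.Tensor:
    """Per-token log-probabilities log p(y_t | x, y_{1:t-1}) of a response."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logps = F.log_softmax(logits[0, :-1], dim=-1)      # position t predicts token t+1
    targets = full_ids[0, 1:]
    per_token = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return per_token[prompt_len - 1:]                  # keep response positions only

def cumulative_reward_signal(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor, t_k: int) -> float:
    """R(i) = sum_{t<=t_k} R_t(i) = log p(y_{1:t_k}|x) - log p(z_{1:t_k}|x)."""
    t = min(t_k, len(logp_chosen), len(logp_rejected))
    return (logp_chosen[:t] - logp_rejected[:t]).sum().item()

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Negative log of P(y > y' | x) under the Bradley-Terry model above."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()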
3.3 Mixing Strategy and Decoding Policies

To further optimize the alignment between the model's output and human preferences, we propose a mixing strategy and two novel decoding policies. The mixing strategy combines the DPO policy with the corresponding reference model policy to enhance the reward-KL divergence tradeoff.

3.3.1 Mixing Strategy

The mixing strategy involves combining the probability distributions from the DPO model $\pi_{\text{DPO}}$ and the reference model $\pi_{\text{ref}}$ in a weighted manner. Specifically, we define a mixing policy $\pi_{\text{mix}}$ as:

$$\pi_{\text{mix}} = \mathrm{softmax}\left(a \cdot \log \frac{\pi_{\text{DPO}}}{\pi_{\text{ref}}} + \log \pi_{\text{ref}}\right),$$

where $a$ is a mixing coefficient controlling the tradeoff between the DPO and reference models. This strategy allows for fine-tuning the balance between the reward signal captured by the DPO policy and the stability provided by the reference model.

3.3.2 Decoding Strategies

We explore two decoding strategies that prioritize the early part of the response or manage the KL divergence between the DPO and reference models.

Length Control Decoding: In this strategy, the first $t$ tokens are generated by sampling from the DPO policy, while the remaining tokens are generated by sampling from the reference model. The goal is to focus on the part of the response where the reward signal is concentrated. The strategy is parameterized by the truncation length $t$, which controls the point at which decoding switches between the two models:

$$y_k = \begin{cases} \text{sampled from } \pi_{\text{DPO}} & \text{if } k \le t \\ \text{sampled from } \pi_{\text{ref}} & \text{if } k > t \end{cases}$$

KL Threshold Control Decoding: In this strategy, we compute the KL divergence between the DPO model and the reference model at each token generation step. If the KL divergence exceeds a predefined threshold $b$, we sample from the DPO policy; otherwise, we sample from the reference model. This dynamic approach allows the model to adjust to the relative importance of the reward signal versus stability during response generation:

$$y_t = \begin{cases} \text{sampled from } \pi_{\text{DPO}} & \text{if } \mathrm{KL}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}}) > b \\ \text{sampled from } \pi_{\text{ref}} & \text{if } \mathrm{KL}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}}) \le b \end{cases}$$

The KL divergence $\mathrm{KL}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}})$ is computed at each token position as:

$$\mathrm{KL}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}}) = \mathbb{E}_{y_t \sim \pi_{\text{DPO}}}\left[\log \frac{\pi_{\text{DPO}}(y_t \mid x, y_{<t})}{\pi_{\text{ref}}(y_t \mid x, y_{<t})}\right].$$

This expectation is estimated using Monte Carlo sampling. Specifically, we sample $K = 1{,}000$ tokens from the DPO model at each token position, and the KL divergence is computed as:

$$\widehat{\mathrm{KL}}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}}) = \frac{1}{K} \sum_{i=1}^{K} \log \frac{\pi_{\text{DPO}}(y_t^{(i)} \mid x, y_{<t})}{\pi_{\text{ref}}(y_t^{(i)} \mid x, y_{<t})},$$

where $y_t^{(i)}$ denotes the $i$-th sampled token from the DPO model at the $t$-th position. Both of these strategies aim to improve the reward alignment while maintaining a favorable KL divergence, leading to better model outputs.
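The mixing policy and the two decoding rules all operate on per-position logits and can be sketched in a few lines. Function names are ours, and for brevity the KL here is computed exactly from the two distributions rather than by the Monte Carlo estimator described above:

import torch
import torch.nn.functional as F

def mix_log_probs(dpo_logits: torch.Tensor, ref_logits: torch.Tensor, a: float) -> torch.Tensor:
    """pi_mix = softmax(a * log(pi_DPO / pi_ref) + log pi_ref)."""
    log_dpo = F.log_softmax(dpo_logits, dim=-1)
    log_ref = F.log_softmax(ref_logits, dim=-1)
    return F.log_softmax(a * (log_dpo - log_ref) + log_ref, dim=-1)

def length_control_choice(position: int, t: int) -> str:
    """Length Control Decoding: DPO policy for the first t tokens, then reference."""
    return "dpo" if position <= t else "ref"

def kl_threshold_choice(dpo_logits: torch.Tensor, ref_logits: torch.Tensor, b: float) -> str:
    """KL Threshold Control Decoding: DPO policy only where KL(pi_DPO || pi_ref) > b."""
    log_dpo = F.log_softmax(dpo_logits, dim=-1)
    log_ref = F.log_softmax(ref_logits, dim=-1)
    kl = (log_dpo.exp() * (log_dpo - log_ref)).sum()
    return "dpo" if kl > b else "ref"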
4 Experiment: Truncation Effects on Reward Models and DPO

4.1 Experiment Setting

In this experiment, we investigate the effect of truncating response sequences at different positions within the preference datasets Skywork-Reward-Preference-80K-v0.2 [2], ultrafeedback-binarized [3], and RLHFlow-pair-data-v2-80K-wsafety², which are commonly used in the context of large language models. Specifically, we apply truncation to the response sections (both chosen and rejected responses) at varying positions. The truncation process retains only the initial portion of the response tokens and discards the rest, resulting in multiple truncated datasets. We then train reward models and use Direct Preference Optimization (DPO) to fine-tune models on these truncated datasets and compare their performance with models trained on the original, untruncated datasets. We also investigate the use of the DPO implicit reward [7] to assess the quality of two responses on datasets with different truncation ratios, and compare the accuracy of this evaluation with the actual quality judgments.

We utilize Google's gemma-2b-it³ model as the base for training the reward model, following the methodology outlined in RLHFlow [52] to train a standard Bradley-Terry reward model [53]. For the DPO training, we use Llama-3.1-8B-Instruct [54] as the base model, following the DPO methodology outlined in OpenRLHF [55] to fine-tune the model. In the experiment using the DPO implicit reward to assess accuracy, we use the LLaMA3-iterative-DPO-final model [56, 52] as the DPO policy model and its supervised fine-tuning (SFT) checkpoint, LLaMA3-SFT, trained from Llama-3-8B, as the reference policy model. All experiments are performed on 80GB A100 or H100 GPUs.

4.1.1 Metrics

The performance of the models is evaluated using two metrics:

Test Accuracy. This metric measures the proportion of instances where the reward model assigns a higher score to the chosen response than to the rejected response.

GPT4o Win Rate. This metric is computed on the AlpacaEval 2.0 [57] standard test set against the default baseline model, with GPT4o acting as the judge.

4.2 Results

4.2.1 Evaluation of Reward Models on RewardBench

We evaluate the performance of the trained reward models on the core RewardBench evaluation set. For each dataset, we train the reward models on the training set using truncated versions of the responses with truncation ratios of 50%, 40%, 33%, and 25%. The results are presented in Table 1. Truncating the responses in the preference data to 50% or 40% of their tokens had minimal impact on the performance of the trained reward model across all three datasets. In fact, for certain metrics and datasets, models trained on truncated data outperformed those trained on full responses. However, truncating the responses to 33% or 25% of their original length leads to a slight reduction in performance. Despite this, the performance drop remains small, and the models continue to exhibit the majority of the performance seen with the original, untruncated datasets.

4.2.2 Evaluation of Reward Models on Each Task of UltraFeedback

We train reward models on the ultrafeedback-binarized dataset, separately for each task: Helpfulness, Honesty, Instruction Following, and Truthfulness. For each task, we train the reward models on the training set using truncated versions of the responses with truncation ratios of 50%, 40%, 30%, 20%, and 10%. Results are shown in Table 2.

² https://huggingface.co/datasets/RLHFlow/pair_data_v2_80K_wsafety
³ https://huggingface.co/google/gemma-2b-it
The results show that truncating the responses to 50% or 40% of their original length had a negligible effect on test accuracy for each task. In some tasks, models trained on truncated data even perform better than those trained on full responses. However, when the responses are truncated to shorter lengths (e.g., 30%, 20%, or 10%), a slight decrease in test accuracy is observed. Nonetheless, the models retain a substantial portion of their original performance, indicating that truncation did not result in a significant loss of accuracy.

Table 1: Performance of reward models trained on different truncation ratios for various datasets. The table presents the evaluation scores across multiple dimensions from the RewardBench core set: Chat, Chat-Hard, Safety, and Reasoning; Total is the final score on the RewardBench core set. Skywork-Preference refers to the Skywork-Reward-Preference-80K-v0.2 dataset, UltraFeedback to the ultrafeedback-binarized dataset, and RLHFlow-Preference to the RLHFlow-pair-data-v2-80K-wsafety dataset. Original Dataset refers to the model trained on the full dataset without truncation; 50%, 40%, 33%, and 25% refer to truncated datasets with the corresponding ratios. The highest score in each row is highlighted with darker blue, and the second-highest with lighter blue.

Dataset              Dimension   Original Dataset   50%      40%      33%      25%
Skywork-Preference   Chat        0.8073             0.7318   0.7039   0.5866   0.5978
                     Chat-Hard   0.7039             0.7105   0.6974   0.6776   0.6732
                     Safety      0.8216             0.8068   0.7946   0.8162   0.8030
                     Reasoning   0.7043             0.7769   0.8101   0.7064   0.7450
                     Total       0.7585             0.7588   0.7635   0.7000   0.6992
UltraFeedback        Chat        0.7946             0.8098   0.8073   0.7844   0.7644
                     Chat-Hard   0.6029             0.6425   0.6342   0.5983   0.5946
                     Safety      0.7416             0.7632   0.7848   0.7384   0.6756
                     Reasoning   0.7056             0.6904   0.6682   0.6886   0.5646
                     Total       0.7391             0.7327   0.7194   0.7018   0.6355
RLHFlow-Preference   Chat        0.9553             0.9302   0.9287   0.8574   0.8291
                     Chat-Hard   0.4517             0.4561   0.4506   0.4323   0.4127
                     Safety      0.6730             0.6621   0.6438   0.5985   0.6081
                     Reasoning   0.5984             0.8374   0.7894   0.6247   0.5723
                     Total       0.6596             0.7216   0.6971   0.6244   0.5562

Table 2: UltraFeedback test accuracy across different tasks with various truncation ratios. Original Dataset refers to the model evaluated on the full, unmodified UltraFeedback dataset; 50%, 40%, 30%, 20%, and 10% refer to models evaluated using truncated versions of the dataset. Average represents the mean accuracy across all tasks. The highest score in each row is highlighted with darker blue, and the second-highest with lighter blue.

Task                    Original Dataset   50%      40%     30%     20%     10%
Helpfulness             0.89               0.90     0.90    0.87    0.82    0.73
Honesty                 0.87               0.88     0.87    0.84    0.79    0.76
Instruction Following   0.91               0.91     0.86    0.87    0.74    0.69
Truthfulness            0.85               0.84     0.84    0.83    0.81    0.64
Average                 0.88               0.8825   0.87    0.855   0.795   0.705

4.2.3 Evaluation of DPO-trained Models on AlpacaEval 2.0

In addition to training reward models, we investigate the effect of response truncation in the preference dataset via Direct Preference Optimization (DPO). For this experiment, we use the Skywork-Reward-Preference-80K-v0.2 dataset [2]. The dataset responses are truncated at ratios of 50%, 40%, 33%, and 25%. Results are shown in Table 3.
The results indicate that truncating the responses in the preference data had a minimal effect on the performance of models trained with DPO.
While the impact increased with the truncation ratio, truncating the responses to 50% or 40% of their original length does not significantly degrade the performance of the DPO-trained models. This suggests that, in the context of DPO training, the majority of the signals used to evaluate response quality are concentrated in the earlier segments of the response.

Table 3: Performance of DPO models with different truncation ratios. Llama3.1 8B refers to the original Llama-3.1-8B-Instruct model; Original Dataset refers to the Llama-3.1-8B-Instruct model fine-tuned on the full Skywork-Reward-Preference-80K-v0.2 dataset with the DPO algorithm; 50%, 40%, 33%, and 25% refer to models fine-tuned using truncated versions of the dataset. LCWR refers to Length-controlled Win Rate and WR refers to Win Rate. The highest score in each row is highlighted with darker blue, and the second-highest with lighter blue.

Metric   Llama3.1 8B   Original Dataset   50%     40%     33%     25%
LCWR     21.45         24.90              25.19   24.85   23.51   21.13
WR       22.37         23.92              24.15   23.57   23.43   20.96

4.2.4 Implicit Reward Accuracy on Truncated Responses

In this experiment, we truncate the responses in the Skywork-Reward-Preference-80K-v0.2 [2] dataset at various proportions and compute the DPO implicit reward for each response pair. We then compare the preferences derived from the implicit rewards with the actual human-annotated preferences to assess their consistency. The results are presented in Figure 2.

The results indicate that as the length of the response considered increases, the preferences derived from the DPO implicit reward align more closely with human-annotated preferences. Interestingly, even when only the initial portion of the response is considered, the preferences derived from the DPO implicit reward show a high degree of consistency with human preferences. This suggests that, in preference datasets, evaluating only the early tokens of a response is sufficient to accurately assess the relative quality of two responses, without the need to examine the entire response.

Figure 2: DPO implicit reward prediction accuracy versus (a) truncation ratio and (b) truncation length in tokens. The x-axis represents the response truncation ratio or length, while the y-axis shows the accuracy of the DPO implicit reward in predicting the relative quality of responses on truncated datasets.

5 Experiment: KL Divergence and Reward-KL Tradeoff for Evaluating Response Quality

This section presents a set of experiments that examine the relationship between the Kullback-Leibler (KL) divergence between the DPO model and the reference model, and the reward-KL tradeoff during response generation. These experiments aim to validate the hypothesis that the reward signal in preference datasets is primarily concentrated in the early part of the response.

5.1 Experiment Setup

To investigate this hypothesis, we perform two key experiments. In the first experiment, we compute the KL divergence between the DPO model and the reference model at each token generation step. This experiment allows us to observe how the
KL divergence evolves as the response is generated and whether the early tokens exhibit a higher divergence than later ones. In the second experiment, we explore the reward-KL tradeoff during generation by adjusting the sampling strategy based on the DPO model and the reference model, to further confirm the concentration of the reward signal in the early part of the response. We use the mixing decoding strategy described in subsubsection 3.3.1 under different values of $a$ as a baseline, and test different decoding strategies.

For both experiments, we use the LLaMA3-iterative-DPO-final model [56, 52] as the DPO policy model and its supervised fine-tuning (SFT) checkpoint, LLaMA3-SFT, trained from Llama-3-8B, as the reference policy model. We measure the corresponding reward using the reward model FsfairX-LLaMA3-RM-v0.1 [52]. We randomly selected 1000 instructions from the training sets of Alpaca [58] and UltraFeedback [3] to form the instruction sets for these two experiments.

The KL divergence between the two policies at a token is computed as described in subsubsection 3.3.2, and the KL divergence between the two policies for the whole response generation is accumulated across all token generation steps, yielding the final KL divergence:

$$\widehat{\mathrm{KL}}(\pi_{\text{mix}} \,\|\, \pi_{\text{ref}}) = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T} \log \frac{\pi_{\text{mix}}(y_t^{(i)} \mid x_i, y_{<t})}{\pi_{\text{ref}}(y_t^{(i)} \mid x_i, y_{<t})},$$

where $N$ represents the size of the instruction set, $T$ denotes the total number of tokens in the response, $x_i$ is the instruction, $y_t^{(i)}$ refers to the generated token at position $t$, and $y_{<t}$ refers to the tokens generated prior to token $t$.

5.2 Results

5.2.1 KL Divergence Analysis Across Token Positions

In the first experiment, we analyze the KL divergence between the DPO model and the reference model at each token generation step. The KL divergence is computed for each token $y_t$ by comparing the conditional probability distributions of the DPO model $\pi_{\text{DPO}}(y_t \mid x, y_{<t})$ and the reference model $\pi_{\text{ref}}(y_t \mid x, y_{<t})$, where $x$ is the instruction and $y_{<t}$ represents previously generated tokens.

As shown in Figure 3, the KL divergence is high for the early tokens, indicating significant differences between the DPO and reference models. However, the divergence diminishes significantly as token generation progresses, suggesting that the primary divergence occurs in the initial phase of response generation. This observation supports the hypothesis that the reward signal in preference datasets is mostly concentrated in the first part of the response, with minimal divergence in the later tokens, where the DPO model relies on the tokens generated earlier.

Figure 3: KL divergence between the DPO model and the reference model at each token position. The plot shows that the divergence is higher for early tokens and decreases as generation progresses.
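For reference, the Monte Carlo estimator of the per-position KL divergence used throughout this section can be sketched as follows; this shows a single token position, and the response-level value accumulates these over positions and averages over the instruction set. The function name is ours:

import torch
import torch.nn.functional as F

def mc_kl_estimate(dpo_logits: torch.Tensor, ref_logits: torch.Tensor, k: int = 1000) -> float:
    """Monte Carlo estimate of KL(pi_DPO || pi_ref) at one token position:
    draw K tokens from the DPO distribution and average the log-ratio,
    as in the estimator defined in subsubsection 3.3.2."""
    log_dpo = F.log_softmax(dpo_logits, dim=-1)
    log_ref = F.log_softmax(ref_logits, dim=-1)
    samples = torch.multinomial(log_dpo.exp(), num_samples=k, replacement=True)
    return (log_dpo[samples] - log_ref[samples]).mean().item()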
5.2.2 Reward-KL Tradeoff for Length Control and KL Threshold Control Decoding

The second experiment explores the reward-KL tradeoff during response generation under different decoding strategies. We focus on two strategies, Length Control Decoding and KL Threshold Control Decoding, as described in subsubsection 3.3.2, and evaluate them under different parameters.

Length Control Decoding: We sample from the DPO policy for the first $t$ tokens and from the reference policy for the remaining tokens. We evaluate this strategy for various values of $t$ and compute the average reward and KL divergence for each configuration.

KL Threshold Control Decoding: We compute the KL divergence $\mathrm{KL}(\pi_{\text{DPO}} \,\|\, \pi_{\text{ref}})$ at each token position. If the divergence exceeds a threshold $b$, we sample from the DPO policy; otherwise, we sample from the reference policy. We test several values of $b$ and record the average reward and KL divergence.

The results of both strategies, shown in Figure 4, demonstrate that both decoding strategies improve the reward-KL tradeoff compared to the baseline. These findings confirm that adjusting the decoding strategy based on the KL divergence between the DPO and reference models leads to a better alignment between reward and KL divergence, further supporting the idea that the reward signal is concentrated in the early tokens.

Figure 4: Reward and corresponding KL divergence for the baseline and the two control strategies. The blue dots represent data from the baseline, while the red triangles and green squares represent the Length Control and KL Threshold Control strategies, respectively.

5.3 Discussion

These experiments validate the hypothesis that the reward signal used in the DPO model is concentrated in the early part of the response. The analysis of KL divergence reveals that the primary differences between the DPO and reference models occur in the initial token generation, while the reward-KL tradeoff experiments demonstrate how adjusting the sampling strategy can improve the alignment between reward and KL divergence. These findings highlight the importance of the early response tokens in shaping the overall quality of generated responses.

6 Investigating the Autoregressive Influence on Preference Signals

In previous experiments, we observed that the preference signal appears to be concentrated in the initial portion of the response sequence. This could potentially be an artifact of the autoregressive nature of the data generation process. Given that the datasets used in earlier experiments were synthesized using autoregressive language models, we hypothesize that this phenomenon might be influenced by the autoregressive paradigm itself. To validate this hypothesis, we conducted a series of experiments using human-generated responses and preference labels. Specifically, we employed the SHP dataset [59], which consists of responses and preference annotations generated by humans, to repeat the experiments outlined in subsubsection 4.2.1 and subsubsection 4.2.4.

6.1 Results

6.1.1 Performance on RewardBench

We trained reward models on the human-generated SHP dataset using both original and truncated versions of the responses. The evaluation was conducted on the RewardBench core set. The results, shown in Table 4, demonstrate that the shallow preference signal phenomenon persists even when using human-generated data.

6.1.2 DPO Implicit Reward Accuracy on Human-Generated Data

We also applied the DPO implicit reward approach to the truncated human-generated responses, as described in subsubsection 4.2.4, to predict the relative quality of response pairs. The accuracy of these predictions was then compared to human-annotated preferences.
The results, shown in Figure 5, confirm that the shallow preference signal phenomenon persists even with human-generated data. As the truncation ratio decreases, the alignment between DPO implicit reward predictions and human-annotated preferences remains high, demonstrating that even truncated responses are sufficient for accurately predicting relative quality.

Table 4: Performance of reward models trained on the human-generated SHP dataset with different truncation ratios. The results show the evaluation scores across multiple dimensions: Chat, Chat-Hard, Safety, Reasoning, and Total. Original Dataset refers to the model trained on the full dataset without truncation; 50%, 40%, 33%, and 25% refer to datasets where the responses are truncated to retain 50%, 40%, 33%, and 25% of the original token length, respectively. The highest score in each row is highlighted with darker blue, and the second-highest with lighter blue.

Dataset          Dimension   Original Dataset   50%      40%      33%      25%
SHP-Preference   Chat        0.8198             0.8071   0.8139   0.7874   0.7709
                 Chat-Hard   0.6039             0.6352   0.5759   0.5155   0.5274
                 Safety      0.7906             0.8049   0.7825   0.7698   0.7589
                 Reasoning   0.5624             0.5532   0.5439   0.5592   0.5451
                 Total       0.7008             0.7056   0.6989   0.6882   0.6712

Figure 5: Accuracy of DPO implicit reward in predicting the relative quality of responses on the human-generated SHP dataset with truncated responses, plotted against (a) truncation ratio and (b) truncation length in tokens. The x-axis represents the truncation ratio or length, and the y-axis shows the accuracy of DPO implicit reward predictions compared to human annotations.

6.2 Conclusion

The results from the human-generated data experiments provide strong evidence that the observed shallow preference signal is not solely a byproduct of autoregressive data generation. Even when the data is generated by humans, the preference signal remains concentrated in the early portions of the response. This indicates that the phenomenon is likely inherent in the structure of the response itself, rather than an artifact of the autoregressive generation process.

7 Limitations

One limitation of this work is the absence of a strong theoretical foundation for the proposed phenomenon. Although our empirical results are compelling, a comprehensive theoretical explanation of which specific parts of a response contribute to human preferences remains elusive. Future research could explore this aspect in more depth to establish a more robust theoretical framework.

8 Conclusion

We introduce shallow preference signals, where the key distinguishing features between preferred and non-preferred responses are concentrated in the early response tokens. Our experiments show that models trained on truncated data, retaining 40% to 50% of tokens, perform similarly or better in reward modeling and Direct Preference Optimization than those trained on full-length data. Additionally, we highlight the limitation of current methods that focus mainly on initial tokens, suggesting the need for strategies that consider entire responses for more accurate alignment with human preferences.

References

[1] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966, 2023.
[2] Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng
Yan, Yang Liu, and Yahui Zhou. Skywork-reward: Bag of tricks for reward modeling in llms. CoRR, abs/2410.18451, 2024.

[3] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback. CoRR, abs/2310.01377, 2023.

[4] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Benjamin Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A general language assistant as a laboratory for alignment. CoRR, abs/2112.00861, 2021.

[5] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. CoRR, abs/2204.05862, 2022.

[6] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.

[7] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.

[8] Xuanchang Zhang, Wei Xiong, Lichang Chen, Tianyi Zhou, Heng Huang, and Tong Zhang. From lists to emojis: How format bias affects model alignment. CoRR, abs/2409.11704, 2024.

[9] Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. Offsetbias: Leveraging debiased data for tuning evaluators. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024, pages 1043–1067. Association for Computational Linguistics, 2024.

[10] Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. Disentangling length from quality in direct preference optimization. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 4998–5017. Association for Computational Linguistics, 2024.

[11] Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. KTO: model alignment as prospect theoretic optimization.
CoRR, abs/2402.01306, 2024.

[12] Nathan Lambert, Valentina Pyatkin, Jacob
Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Raghavi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, and Hannaneh Hajishirzi. Rewardbench: Evaluating reward models for language modeling. CoRR, abs/2403.13787, 2024.

[13] Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, and Kangwook Lee. Predictive pipelined decoding: A compute-latency trade-off for exact LLM decoding. Trans. Mach. Learn. Res., 2024, 2024.

[14] Benjamin Bergner, Andrii Skliar, Amelie Royer, Tijmen Blankevoort, Yuki M. Asano, and Babak Ehteshami Bejnordi. Think big, generate quick: Llm-to-slm for fast autoregressive decoding. CoRR, abs/2402.16844, 2024.

[15] Yuxuan Hu, Ke Wang, Xiaokang Zhang, Fanjin Zhang, Cuiping Li, Hong Chen, and Jing Zhang. SAM decoding: Speculative decoding via suffix automaton. CoRR, abs/2411.10666, 2024.

[16] Parsa Kavehzadeh, Mohammadreza Pourreza, Mojtaba Valipour, Tinashu Zhu, Haoli Bai, Ali Ghodsi, Boxing Chen, and Mehdi Rezagholizadeh. S2D: sorted speculative decoding for more efficient deployment of nested large language models. CoRR, abs/2407.01955, 2024.

[17] Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Raghavi Chandu, Chandra Bhagavatula, and Yejin Choi. The unlocking spell on base llms: Rethinking alignment via in-context learning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.

[18] Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, and Peter Henderson. Safety alignment should be made more than just a few tokens deep. CoRR, abs/2406.05946, 2024.

[19] Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307, 2017.

[20] Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll L.
Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea V oss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, and Dane Sherburn. Gpt-4o system
[21] Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, et al. Gemini: A family of highly capable multimodal models. CoRR, abs/2312.11805, 2023. [22] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. CoRR, abs/2407.21783, 2024. [23] Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforcement learning for neural machine translation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. [24] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Implementation matters in deep policy gradients: A case study on PPO and TRPO. CoRR, abs/2005.12729, 2020. [25] Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J. Liu. Slic-hf: Sequence likelihood calibration with human feedback. CoRR, abs/2305.10425, 2023. [26] Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Rémi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences.
In Sanjoy Dasgupta, Stephan Mandt, and Yingzhen Li, editors, International Conference on Artificial Intelligence and Statistics, 2-4 May 2024, Palau de Congressos, Valencia, Spain, volume 238 of Proceedings of Machine Learning Research, pages 4447–4455. PMLR,
2024. [27] Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. Generalized preference optimization: A unified approach to offline alignment. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. [28] Jiahao Qiu, Yifu Lu, Yifan Zeng, Jiacheng Guo, Jiayi Geng, Huazheng Wang, Kaixuan Huang, Yue Wu, and Mengdi Wang. Treebon: Enhancing inference-time alignment with speculative tree-search and best-of-n sampling. CoRR, abs/2410.16033, 2024. [29] Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Furong Huang, Dinesh Manocha, Amrit Singh Bedi, and Mengdi Wang. Maxmin-rlhf: Alignment with diverse human preferences. arXiv preprint arXiv:2402.08925, 2024. [30] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952. [31] Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, and Heng Ji. Enable language models to implicitly learn self-improvement from data. CoRR, abs/2310.00898, 2023. [32] Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, and Bilal Piot. Nash learning from human feedback. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. [33] Gokul Swamy, Christoph Dann, Rahul Kidambi, Steven Wu, and Alekh Agarwal. A minimaximalist approach to reinforcement learning from human feedback. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. [34] Chenlu Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, and Tong Zhang. A theoretical analysis of nash learning from human feedback under general kl-regularized preference. CoRR, abs/2402.07314, 2024. [35] Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. Llm-blender: Ensembling large language models with pairwise ranking and generative fusion. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14165–14178. Association for Computational Linguistics, 2023. [36] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. [37] Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. RLHF workflow: From reward modeling to online RLHF. CoRR, abs/2405.07863, 2024.
[38] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023.
[39] Zhilin Wang, Yi Dong, Jiaqi Zeng, Virginia Adams, Makesh Narsimhan Sreedhar, Daniel Egert, Olivier Delalleau, Jane Polak Scowcroft, Neel Kant, Aidan Swope, and Oleksii Kuchaiev. Helpsteer: Multi-attribute helpfulness dataset for steerlm. In Kevin Duh, Helena Gómez-Adorno, and Steven Bethard, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024, pages 3371–3384. Association for Computational Linguistics, 2024. [40] Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, and Tong Zhang. Interpretable preferences via multi-objective reward modeling and mixture-of-experts. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024, pages 10582–10592. Association for Computational Linguistics, 2024. [41] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. CoRR, abs/2308.09583, 2023. [42] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. [43] Wendi Li and Yixuan Li. Process reward model with q-value rankings. CoRR, abs/2410.11287, 2024. [44] Alexander Havrilla, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, and Roberta Raileanu. Glore: When, where, and how to improve LLM reasoning via global and local refinements. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. [45] Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. CoRR, abs/2209.13085, 2022. [46] Jose A. Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. RUDDER: Return decomposition for delayed rewards. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13544–13555, 2019. [47] Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[48] Yingchen Xu, Jack Parker-Holder, Aldo Pacchiano, Philip J. Ball, Oleh Rybkin, Stephen Roberts, Tim Rocktäschel, and Edward Grefenstette. Learning general world models in a handful of reward-free deployments. In Sanmi Koyejo,
S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. [49] Cassidy Laidlaw, Shivam Singhal, and Anca Dragan. Correlated proxies: A new definition and improved mitigation for reward hacking, 2024. [50] Yuchun Miao, Sen Zhang, Liang Ding, Rong Bao, Lefei Zhang, and Dacheng Tao. Inform: Mitigating reward hacking in RLHF via information-theoretic reward modeling. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024, 2024. [51] Yuchen Zhu, Daniel Augusto de Souza, Zhengyan Shi, Mengyue Yang, Pasquale Minervini, Alexander D'Amour, and Matt J. Kusner. When can proxies improve the sample complexity of preference learning? CoRR, abs/2412.16475, 2024. [52] Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. Rlhf workflow: From reward modeling to online rlhf. arXiv preprint arXiv:2405.07863, 2024. [53] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39:324, 1952. [54] David A. Patterson, Joseph Gonzalez, Urs Hölzle, Quoc V. Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. The carbon footprint of machine learning training will plateau, then shrink. Computer, 55(7):18–28, 2022. [55] Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143, 2024. [56] Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint, 2024. [57] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, May 2023. [58] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [59] Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988–6008. PMLR, 17–23 Jul 2022.

A Preliminaries

A.1 Autoregressive Language Model and Token-Level Markov Decision Process

Autoregressive language models (ARLMs) are designed to generate token sequences y_1, y_2, ..., y_T conditioned on the preceding tokens in a given context. Formally, for a provided input prompt x, the model generates the token sequence y = (y_1, y_2, ..., y_T) by factorizing the joint distribution of the sequence using the chain rule of probability:

p(y | x) = \prod_{t=1}^{T} p(y_t | y_1, y_2, \ldots, y_{t-1}, x),
where p(y_t | y_1, y_2, \ldots, y_{t-1}, x) represents the conditional probability of generating token y_t given all previous tokens y_1, ..., y_{t-1} and the input prompt x.

This process is typically framed as a token-level Markov Decision Process (MDP), where each state at time step t, denoted s_t, represents the sequence of tokens generated up to that point, s_t = (x, y_1, y_2, \ldots, y_{t-1}), and the action a_t corresponds to the generation of the next token y_t. The transitions between states are deterministic and are given by s_{t+1} = (x, y_1, y_2, \ldots, y_t), as each subsequent state is determined solely by the previous state and the action of generating the next token. This token-level MDP formulation is useful for various applications, such as training RL-based models where the language model needs to learn to generate tokens that not only fit the linguistic context but also satisfy some predefined quality criteria. Moreover, recent advancements in reinforcement learning from human feedback (RLHF) have sought to fine-tune such models to align with human preferences, making this framework essential for ensuring that ARLMs produce high-quality, aligned outputs.

In the context of reinforcement learning (RL), the task is framed as a Max-Entropy RL problem, where the reward is a combination of a task-specific reward function and a regularization term. The objective is to maximize the expected sum of the rewards, along with the entropy of the policy to promote exploration:

\mathbb{E}_{x \sim X,\, y \sim \pi(\cdot|x)}\left[ r(y|x) + \beta \log \pi_{\mathrm{ref}}(y|x) \right] + \beta\, \mathbb{E}_{x \sim X}\left[ \mathcal{H}(\pi(\cdot|x)) \right],

where r(y|x) represents the reward for generating a sequence y given the input prompt x, \pi_{\mathrm{ref}}(y|x) is a reference policy that can be used to encourage alignment with desired behaviors, and \mathcal{H}(\pi(\cdot|x)) is the entropy of the policy, promoting exploration by discouraging deterministic behaviors. At the token level, the RL objective can be rewritten as:

\mathbb{E}_{s_0 \sim X,\, a_t \sim \pi(\cdot|s_t)}\left[ \sum_{t=1}^{T} r'(s_t, a_t) \right] + \beta\, \mathbb{E}_{s_0 \sim X}\left[ \mathcal{H}(\pi(\cdot|s_0)) \right],

where r'(s_t, a_t) is the token-level reward, defined as:

r'(s_t, a_t) = \begin{cases} \beta \log \pi_{\mathrm{ref}}(a_t|s_t), & \text{if } s_{t+1} \text{ is not terminal}, \\ r(y|x) + \beta \log \pi_{\mathrm{ref}}(a_t|s_t), & \text{otherwise}. \end{cases}

In this formulation, the reward function r(y|x) typically measures how well the generated sequence aligns with the desired outcome, while the regularization term \beta \log \pi_{\mathrm{ref}}(a_t|s_t) encourages diversity in the generated tokens. The objective in reinforcement learning is to find an optimal policy \pi^* that maximizes the expected cumulative reward. This is done by solving for the optimal Q-function Q^*(s_t, a_t), which provides the expected future reward for taking action a_t from state s_t:

Q^*(s_t, a_t) = r'(s_t, a_t) + V^*(s_{t+1}),

where V^*(s_t) is the optimal state-value function, representing the expected reward from state s_t. The optimal policy \pi^* satisfies the following equation:

\beta \log \frac{\pi^*(a_t|s_t)}{\pi_{\mathrm{ref}}(a_t|s_t)} = Q^*(s_t, a_t) - V^*(s_t).

When t < T, the optimal policy maximizes the difference between the state-value function of the next state and the current state, encouraging the model to generate the sequence that leads to the highest cumulative reward.
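To make the chain-rule factorization and the token-level reward concrete, here is a minimal, self-contained Python sketch that mirrors the definitions above. The toy next-token distribution and all function names are illustrative stand-ins, not the implementation behind these formulas.

```python
import math
from typing import Dict, List, Tuple

def toy_next_token_probs(state: Tuple[str, ...]) -> Dict[str, float]:
    """Hypothetical stand-in for a policy's next-token distribution p(. | y_<t, x)."""
    vocab = ("a", "b", "<eos>")
    return {tok: 1.0 / len(vocab) for tok in vocab}  # uniform toy distribution

def sequence_log_prob(prompt: str, tokens: List[str]) -> float:
    """Chain-rule factorization: log p(y|x) = sum_t log p(y_t | y_1..y_{t-1}, x)."""
    logp = 0.0
    state: Tuple[str, ...] = (prompt,)        # s_1 = (x,)
    for tok in tokens:                        # action a_t is the next token y_t
        logp += math.log(toy_next_token_probs(state)[tok])
        state = state + (tok,)                # deterministic transition s_{t+1} = (x, y_1..y_t)
    return logp

def token_level_reward(t: int, T: int, beta: float,
                       logp_ref_token: float, sequence_reward: float) -> float:
    """r'(s_t, a_t) = beta * log pi_ref(a_t|s_t), plus r(y|x) on the terminal step."""
    reward = beta * logp_ref_token
    if t == T:                                # terminal transition carries r(y|x)
        reward += sequence_reward
    return reward

# Usage: three uniform tokens give log p(y|x) = 3 * log(1/3).
print(sequence_log_prob("prompt", ["a", "b", "<eos>"]))
```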
A.2 RLHF with Reward Models

Reinforcement learning from human feedback (RLHF) is an approach where a reward model is used to guide the training of the language model. The reward model r(y|x) evaluates the quality of a generated response y given a prompt x. The goal is to maximize the expected reward by adjusting the model's parameters using a policy optimization algorithm such as Proximal Policy Optimization (PPO).

Initially, [19] proposed learning a reward model using the Bradley-Terry model to assign a score to each response. For a pair of responses y and y', the Bradley-Terry model defines the probability that y is preferred over y' as:

P(y \succ y' | x) = \frac{\exp(r(y; x))}{\exp(r(y; x)) + \exp(r(y'; x))}.

The reward function is learned by maximizing the log-likelihood of preference predictions. For a triplet (x, y_w, y_l), where y_w is the winner and y_l is the loser, the Direct Preference Optimization (DPO) loss is derived as follows:

\ell_{\mathrm{DPO}}(x, y_w, y_l; \theta; \pi_{\mathrm{ref}}) := -\log \sigma\!\left( \beta \left[ \log \frac{\pi_\theta(y_w|x)}{\pi_{\mathrm{ref}}(y_w|x)} - \log \frac{\pi_\theta(y_l|x)}{\pi_{\mathrm{ref}}(y_l|x)} \right] \right),

where \sigma(\cdot) is the logistic function, \sigma(z) = \frac{1}{1 + \exp(-z)}, and \beta is a hyperparameter that controls the importance of the preference signal in the optimization process. This DPO method provides a more efficient and stable solution compared to traditional methods that require separate reward modeling and policy optimization.
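The Bradley-Terry probability and the DPO loss above translate directly into code. Below is a minimal PyTorch sketch operating on precomputed per-sequence log-probabilities; the function names and batching convention are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w: torch.Tensor, policy_logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for (x, y_w, y_l) triplets: -log sigma(beta * log-ratio margin)."""
    margin_w = policy_logp_w - ref_logp_w   # log pi_theta(y_w|x) - log pi_ref(y_w|x)
    margin_l = policy_logp_l - ref_logp_l
    return -F.logsigmoid(beta * (margin_w - margin_l)).mean()

def bradley_terry_prob(r_w: torch.Tensor, r_l: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry preference probability P(y_w > y_l | x) = sigmoid(r_w - r_l)."""
    return torch.sigmoid(r_w - r_l)

# Usage with dummy sequence log-probabilities for one preference pair:
pw, pl = torch.tensor([-12.3]), torch.tensor([-14.1])   # policy log-probs
rw, rl = torch.tensor([-12.9]), torch.tensor([-13.0])   # reference log-probs
print(dpo_loss(pw, pl, rw, rl))
```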
MTR-Bench: A Comprehensive Benchmark for Multi-Turn Reasoning Evaluation

Xiaoyuan Li1∗, Keqin Bao1∗, Yubo Ma2, Moxin Li3, Wenjie Wang1, Rui Men2, Yichang Zhang2, Fuli Feng1, Dayiheng Liu2, Junyang Lin2
University of Science and Technology of China1 Alibaba Group2 National University of Singapore3

Abstract

Recent advances in Large Language Models (LLMs) have shown promising results in complex reasoning tasks. However, current evaluations predominantly focus on single-turn reasoning scenarios, leaving interactive tasks largely unexplored. We attribute this gap to the absence of comprehensive datasets and scalable automatic evaluation protocols. To fill these gaps, we present MTR-Bench for LLMs' Multi-Turn Reasoning evaluation. Comprising 4 classes, 40 tasks, and 3600 instances, MTR-Bench covers diverse reasoning capabilities, offers fine-grained difficulty granularity, and necessitates multi-turn interactions with the environments. Moreover, MTR-Bench features a fully-automated framework spanning both dataset construction and model evaluation, which enables scalable assessment without human intervention. Extensive experiments reveal that even cutting-edge reasoning models fall short on multi-turn, interactive reasoning tasks, and further analysis of these results brings valuable insights for future research in interactive AI systems.

1 Introduction

With the emergence of reasoning-enhanced Large Language Models (LLMs), such as o1 [16] and R1 [7], significant progress has been made in complex reasoning tasks [38, 24, 43, 22]. However, most current evaluations focus on single-turn reasoning in domains like mathematics [5, 11], commonsense [31, 44], logic reasoning [9, 33], and code generation [17, 3], which do not reflect the interactive and iterative nature of real-world problem-solving. Multi-turn reasoning, however, is essential for practical reasoning performance: it enables long-term planning, allows for feedback acquisition and reuse, and supports gradual problem solving through iterative refinement. A key question thus arises: Can frontier LLMs maintain effective reasoning capabilities in dynamic, multi-turn environments?

Although there have been attempts to construct datasets for evaluating LLMs' multi-turn capabilities, such as MT-Bench [45] and GameArena [12], these approaches exhibit significant limitations. MT-Bench primarily focuses on dialogue coherence and context understanding rather than reasoning capabilities. GameArena, although specifically designed for reasoning assessment, is constrained to only three scenarios and relies heavily on human interaction for evaluation, resulting in insufficient scenario diversity and high evaluation costs. Furthermore, human involvement in the evaluation process makes it difficult to precisely control question difficulty, limiting the ability to assess models across different capability levels.

∗Work done while Xiaoyuan Li and Keqin Bao were interns at Alibaba Group.

To bridge these gaps, we propose a novel multi-turn automated reasoning evaluation framework designed to more accurately evaluate LLMs' comprehensive capabilities in interactive environments. The development of such a benchmark presents two primary challenges: (1) designing effective and diverse multi-turn tasks that can measure the multi-dimensional reasoning capabilities of models, and (2) establishing an automated interactive evaluation framework to facilitate scaling and avoid saturation as models advance [27].
To address the first challenge, we focus on constructing tasks that inherently require multi-turn reasoning, where each interaction step introduces new constraints or information that necessitates iterative refinement of the model’s reasoning process. To achieve this, we manually collect and validate a set
of highly reasoning-intensive tasks from various sources for systematically evaluating four fine-grained reasoning abilities: Inductive, Abductive, Deductive, and Planning Reasoning [30, 13]. Then, for each task, we design a structured problem template that explicitly defines interactive rules, format requirements, and example interactions demonstrating valid exchanges. Through these templates, models are required to engage in active reasoning, gather environmental feedback, and iteratively refine their reasoning process in order to accomplish the given reasoning objective.

As for the second challenge, to enable scalable automated evaluation, we implement three components, Generator, Monitor, and Evaluator, to construct an automated interactive evaluation framework. The generator transforms each problem template into tasks of distinct difficulty levels while ensuring solution feasibility through carefully controlled complexity parameters. With the generator, we can smoothly control the difficulty of reasoning as models' performance improves. The monitor processes model queries through a two-stage validation system: it first checks query format compliance, then provides rule-specific feedback for valid queries while monitoring whether the given reasoning objectives are achieved. The evaluator assesses completed dialogues across multiple dimensions to provide a comprehensive evaluation of models' sustained reasoning capabilities.

Building upon these design principles, we present MTR-Bench (Multi-Turn Reasoning Benchmark), a comprehensive evaluation framework that encompasses 40 distinct reasoning tasks designed to assess four reasoning abilities, with each task calibrated across three difficulty levels. Through extensive empirical evaluation of 20 reasoning and non-reasoning models, our analysis reveals that o3-mini demonstrates superior overall performance. Our key findings indicate: (1) As reasoning difficulty increases, even current frontier models struggle significantly. (2) As the number of reasoning steps increases, the advantage of o3-mini over other models becomes more pronounced, which indicates a potential optimization direction for the open-source community. (3) Reasoning ability is not directly correlated with reasoning efficiency; o3-mini often requires more reasoning steps than QwQ-32B and R1 on questions where all three models arrive at correct answers.

In summary, our main contributions are as follows:
• We introduce a high-quality benchmark specifically designed to assess models' reasoning capabilities in multi-turn interactive scenarios.
• We propose an automated framework for comprehensive multi-turn evaluation, capable of producing problems with tunable complexity. This enables the benchmark to evolve alongside advances in model capabilities.
• Our empirical findings reveal several critical limitations of current models in multi-turn reasoning settings, offering valuable insights for future research directions.

2 Overview

In this section, we first propose our automated interactive framework that simulates real-world reasoning scenarios. At its core, the framework enables a model to engage in multiple turns of interaction while maintaining consistent reasoning progress toward solving a given task.
Formally, our framework consists of three essential components, namely the generator, monitor, and evaluator, which work together to create a controlled and automated evaluation environment:

Generator (P) creates interactive problems with controlled difficulty levels and corresponding reasoning objectives. Formally defined as p, s = P(t, n, g_n), where p represents the generated problem, s defines the reasoning objective, t specifies the problem template, n determines the complexity level, and g_n encodes the corresponding problem parameters. We carefully design t with explicit interaction rules, format requirements, and example interactions for each task.

Monitor (M) generates feedback and determines termination based on the model's query. The monitoring process can be formalized as (m_i, s_i) = M(t, q_i, s_{i-1}, s, I), where s_{i-1} and s_i denote the conversation states at turns i-1 and i respectively, and m_i represents the generated feedback for query q_i based on template t. The interaction terminates when either the target state s_i = s is achieved or the maximum turn limit I is reached. For each query, M first validates the legality of the query format, then determines whether the current conversation should be terminated, and finally returns m_i as the response to the model.

Evaluator (E) assesses multi-turn interactions across multiple dimensions. Formally, e = E(t, {(q_1, m_1), ..., (q_T, m_T)}), where T denotes the total turns and e encompasses a range of metrics covering accuracy, efficiency, invalid rate, and pattern analysis. Specifically,

• Accuracy (Acc) measures the proportion of successfully completed tasks. A task is considered successful if and only if its final state s_T matches the task's reasoning objective s. Formally, Acc = SC / C, where C is the total number of tasks and SC is the number of successful tasks.

• Efficiency (Eff) evaluates relative solution efficiency by comparing turn counts on commonly solved tasks between model pairs. For two models A and B, let C_{AB} denote their set of commonly solved tasks. The efficiency score of model A over B is computed as:

Eff_{A,B} = \frac{\sum_{c \in C_{AB}} \mathbb{I}(T^c_A < T^c_B)}{|C_{AB}|},

where T^c_A and T^c_B represent the turn counts for task c by models A and B respectively, and \mathbb{I}(\cdot) is an indicator function that equals 1 when the condition is true and 0 otherwise.

• Invalid Rate (IR) assesses the proportion of interactions containing invalid operations among all interaction conversations. This metric not only measures the model's ability to follow instructions but also reflects its fundamental reasoning capability to infer valid operations from the current environment. Formally, IR = NV / N, where NV is the number of interactions with invalid operations and N is the total number of interactions.

• Pattern Analysis (PA) examines the model's reasoning patterns across four categories: Associate (associating with the original problem), Verify (reflecting on and verifying the reasoning process), Plan (strategically planning subsequent interactions), and Feedback (utilizing previous feedback for reasoning). We analyze the occurrence count of each pattern in each interactive turn and calculate

PA_J = \frac{1}{\sum_{c=1}^{C} T_c} \sum_{c=1}^{C} \sum_{i=1}^{T_c} r^J_{c,i},

where T_c denotes the number of interaction turns for task c, and r^J_{c,i} represents the occurrence count of pattern J in the i-th turn of task c.

Figure 1: This figure represents the complete framework of our benchmark, from construction to evaluation. It includes four modules: data collection, data classification, dataset construction, and interactive evaluation. After the dataset is built, the evaluation system can perform automated multi-round interactive evaluations and automatically increase the difficulty of the problems.
Through these three components, our framework uses the Generator to create problems, facilitates interactions between the Monitor and models, and ultimately employs the Evaluator to measure models' performance.
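As a minimal sketch of how these three components could fit together, the Python snippet below uses the "Find the Impostors" task from Figure 1 as the running example; generate_problem, monitor_step, and run_episode are hypothetical stand-ins for the benchmark's actual implementation, not its released code.

```python
import random
import re

def generate_problem(n: int):
    """Generator P (toy): a 'Find the Impostors' instance; the hidden set is the objective s."""
    k = random.randint(max(1, n // 3), 2 * n // 3)       # number of impostors
    impostors = set(random.sample(range(1, n + 1), k))
    prompt = f"Find the impostors among {n} players. Ask about 3 players at a time."
    return prompt, impostors

def monitor_step(response: str, impostors: set):
    """Monitor M (toy): validate format, return rule-specific feedback, flag termination."""
    q = re.fullmatch(r"My Query: (\d+),\s*(\d+),\s*(\d+)", response.strip())
    if q:
        group = {int(x) for x in q.groups()}
        return ("1" if len(group & impostors) > len(group) / 2 else "0"), False
    a = re.fullmatch(r"My Answer: ([\d,\s]+)", response.strip())
    if a:
        guess = {int(x) for x in a.group(1).replace(" ", "").split(",")}
        return ("Correct" if guess == impostors else "Incorrect"), guess == impostors
    return "Invalid format", False                        # counts toward the Invalid Rate

def run_episode(model, n: int = 6, max_turns: int = 15):
    """Driver loop: model <-> monitor until success or the turn limit; the full
    history is then handed to the Evaluator E."""
    prompt, objective = generate_problem(n)
    history = [("problem", prompt)]
    for _ in range(max_turns):
        response = model(history)                         # model is any callable policy
        feedback, done = monitor_step(response, objective)
        history.append((response, feedback))
        if done:
            return True, history                          # successful task: counts toward Acc
    return False, history
```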
Figure 2: This figure illustrates examples of our four task types ("Find the Impostors", "Password Breaker", "Maze Navigation", and "Knight Battle"). Each task includes interaction rules, query format requirements, and example interactions, with three levels of input difficulty.

3 Benchmark Construction

In this section, we first introduce the classification of tasks (§3.1), then explain how we construct each problem (§3.2), and finally briefly discuss how the interactive evaluation occurs (§3.3), as shown in Figure 1.

3.1 Data Classification

To construct our dataset, we first collect seed tasks from various websites, e.g., https://codeforces.com/ and https://www.nytimes.com/. To facilitate a systematic analysis of
models' reasoning capabilities, we categorize the public seed tasks into the following four predefined classes using GPT-4o, with subsequent human validation ensuring classification accuracy. While successful task completion generally requires a combination of various reasoning skills, each predefined class is specifically designed to evaluate distinct aspects of reasoning capabilities.

• Information Probing (IP): This involves discovering hidden but fixed information. As shown in Figure 2, in "Find the Impostors", models determine the complete role distribution by querying about different group compositions, with the monitor revealing each group's majority type as clues. In this task, models should progressively eliminate distractors to reach the answer.

• Dynamic Adaptation (DA): Unlike "Information Probing", where answers remain static, this type involves answers that evolve according to deterministic transformation rules. As exemplified in "Password Breaker", each incorrect query triggers specific password modifications based on predefined mechanisms. Success in this type requires models to accurately understand and apply transformation rules to make informed and targeted queries.

• State Operation (SO): This category introduces hidden mechanics, distinguishing it from the previous two categories. For example, in "Maze Navigation", models are required to guide an agent to a target location under an initially unknown control system. Success requires models to rationally analyze the current situation and infer the hidden mechanism through appropriate actions, then proceed with subsequent operations based on this understanding.

• Strategic Gaming (SG): This features adversarial two-player environments where task outcomes depend on the dynamic interaction between model actions and system responses (our experimental results show that models struggle to achieve high accuracy even in simple scenarios with random system actions, leading us to adopt random system responses as our evaluation baseline). Taking "Knight Battle" as an instance, models should strategically outpace the system to complete objectives, requiring both competitive awareness and efficient execution.

By leveraging four distinct task categories, we comprehensively assess LLMs' multi-turn reasoning capabilities. Specifically, our framework focuses on the following essential types of reasoning.

• Inductive Reasoning: This involves forming general conclusions by identifying patterns from specific observations [10, 26, 42]. For example, in "Find the Impostors" under "Information Probing", models need to gather evidence by querying different group configurations, observe the majority role types within each group, and synthesize these observations to infer the complete role distribution.

• Abductive Reasoning: This is the process of inferring the most plausible explanation from limited or incomplete evidence [30, 18]. In "Dynamic Adaptation", where the correct answer evolves according to predefined rules, models are required to infer the current state of the target answer based on a limited number of interactions.

• Deductive Reasoning: This refers to deriving specific conclusions through the application of known rules or logical implications [6, 29]. In "State Operation", for instance, models should first infer hidden mechanisms from rule-based environmental feedback and then apply those rules to perform correct reasoning.

• Planning: Success in our tasks crucially depends on multi-step planning capabilities [36, 14, 2].
This is particularly evident in “Strategic Gaming”, where models should construct action sequences by anticipating future states and considering both their moves and potential opponent responses. 3.2 Dataset Construction After obtaining the categorized seed task sets, we select 10 representative tasks for each of the four categories, yielding a total of 40 tasks that exhibit
diverse interaction patterns and rule structures, as detailed in the Appendix. Then, we manually convert the seed tasks into structured problem templates. Based on these templates, we develop problem generators with three difficulty levels: "easy", "medium", and "hard". Each level corresponds to a different value of n, the parameter that determines the task complexity. We further implement monitors tailored to each task's interactive rules, enabling the system to extract model queries, provide real-time feedback, and detect conversation termination. For evaluation purposes, we design task-specific evaluators that assess performance based on the complete conversation history, employing metrics aligned with each task's reasoning objective.

To calibrate difficulty levels, we evaluate task solvability using o3-mini across 10 problems for each n, iteratively refining the parameters until the difficulty gradient exhibits both a meaningful progression and reasonable feasibility. Finally, we generate a comprehensive dataset comprising 30 distinct problems per difficulty level for each of the 40 tasks, resulting in a total of 3,600 evaluation instances. This structure enables robust and fine-grained assessment of model performance across varying complexity levels.

3.3 Interactive Evaluation

As shown in Figure 2, the interaction process begins with the generator providing the problem to the tested model while passing the reasoning objective to the monitor. Upon receiving the problem, the model generates a response, which is then sent to the monitor. The monitor extracts the query from the response, computes appropriate feedback, and returns it to the model. Based on the feedback, the model adjusts its reasoning and continues responding. This iterative cycle repeats until the monitor detects conversation termination conditions. Finally, the evaluator receives the complete conversation history and analyzes it using various metrics.

To illustrate this process, consider "Find the Impostors". The generator first creates problems across three difficulty levels by varying the parameter n. Along with each problem, it generates a reasoning objective in the form of a binary sequence of length n, where 0 denotes impostors and 1 represents non-impostors (e.g., "000011" for n = 6). During the interaction, the monitor validates model responses against two specific patterns: "My Query: a, b, c" and "My Answer: x1, x2, ..., xk". Any response not matching these patterns is rejected. For valid queries in the format "My Query: a, b, c", the monitor returns "1" if the specified positions contain more impostors according to the answer sequence, and "0" otherwise. When the model submits a final answer, the monitor responds with either "Correct" or "Incorrect" and terminates the conversation if correct. Additionally, the monitor enforces a maximum round limit. Upon conversation completion, the evaluator processes the entire dialogue history, determining accuracy based on whether the final response received a "Correct" feedback, and calculates other metrics as defined in Section 2.

The difficulty calibration process begins with initial testing using n = 6, 7, 8, generating 10 problems with their reasoning objectives per difficulty level. When these values fail to produce sufficient performance gradients, the generator iteratively tests different values until finding suitable ones (e.g., n = 6, 9, 12). Once appropriate difficulty parameters are established, we proceed with large-scale evaluation, generating 30 problems per difficulty level and testing them across all models.

Table 1: Model Accuracy on MTR-Bench. IT: Instruction-based models. IP: Information Probing. DA: Dynamic Adaptation. SO: State Operation. SG: Strategic Gaming. E / M / H: Easy / Medium / Hard. The best results (column-wise) for reasoning and non-reasoning models are highlighted in purple and red, respectively, in the original; their second-best results are shown in bold.

Model | IP (E/M/H) | DA (E/M/H) | SO (E/M/H) | SG (E/M/H) | AVG (E/M/H)
Reasoning models:
o3-mini | 60.33/41.56/28.22 | 40.33/24.18/17.13 | 38.61/27.00/20.22 | 85.00/74.44/59.17 | 56.07/41.80/31.19
R1 | 39.22/25.00/11.11 | 34.58/23.11/15.22 | 47.67/38.56/32.78 | 73.00/62.67/57.67 | 48.62/37.33/29.19
QwQ-32B | 53.56/28.22/19.00 | 38.33/20.44/12.00 | 36.67/29.89/25.33 | 70.00/56.33/46.00 | 49.64/33.72/25.58
R1-Distill-Llama-70B | 33.78/13.11/6.33 | 25.50/11.00/5.67 | 15.56/10.78/7.89 | 61.11/44.17/28.89 | 33.99/19.76/12.19
R1-Distill-Qwen-32B | 26.78/10.11/3.22 | 10.50/3.22/1.67 | 7.11/4.22/3.11 | 39.44/24.44/15.28 | 20.96/10.50/5.82
R1-Distill-Qwen-7B | 3.89/2.33/1.11 | 0.44/0.00/0.00 | 0.67/1.11/0.22 | 3.67/2.67/1.00 | 2.17/1.53/0.58
R1-Distill-Qwen-1.5B | 0.67/0.78/0.33 | 0.00/1.00/0.11 | 0.00/0.00/0.00 | 0.67/0.67/0.00 | 0.33/0.61/0.11
Non-reasoning models:
GPT-4o | 29.11/10.56/6.89 | 22.92/11.56/7.00 | 19.73/15.11/11.56 | 42.22/30.56/22.78 | 28.50/16.94/12.06
Qwen-Max | 33.89/11.56/7.33 | 27.42/17.67/8.11 | 20.15/13.67/10.78 | 49.17/33.61/22.50 | 32.66/19.13/12.18
gemma-3-27b-IT | 31.00/9.78/9.67 | 18.92/9.67/6.33 | 16.00/10.00/5.67 | 16.89/4.72/5.15 | 20.70/8.54/6.70
gemma-3-12b-IT | 24.78/8.33/4.56 | 15.03/8.44/5.89 | 12.22/4.56/3.56 | 12.61/9.17/5.17 | 16.16/7.63/4.79
gemma-3-4b-IT | 11.44/4.56/2.44 | 8.61/6.00/4.11 | 9.00/4.22/2.89 | 10.67/2.33/0.67 | 9.93/4.28/2.53
Qwen2.5-72B-IT | 38.22/20.00/10.89 | 23.22/12.44/6.33 | 14.78/11.00/7.89 | 41.50/32.78/26.67 | 29.43/19.06/12.94
Qwen2.5-32B-IT | 33.44/14.67/12.44 | 19.69/12.89/6.22 | 23.67/17.67/14.44 | 42.00/25.00/19.76 | 29.70/17.56/13.22
Qwen2.5-7B-IT | 27.44/11.44/3.67 | 18.33/9.33/6.22 | 9.67/6.00/4.89 | 22.67/10.00/8.33 | 19.53/9.19/5.78
Qwen2.5-1.5B-IT | 2.22/0.11/0.22 | 6.44/4.33/0.78 | 9.44/0.89/1.33 | 17.67/14.67/12.00 | 8.94/5.00/3.58
Llama-3.1-70B-IT | 40.11/21.22/11.89 | 23.81/12.00/6.78 | 16.78/11.44/8.78 | 36.50/25.33/20.72 | 29.30/17.50/12.04
Llama-3.1-8B-IT | 22.67/10.00/4.89 | 13.58/5.78/4.67 | 12.56/5.33/3.78 | 11.00/5.67/3.00 | 14.95/6.69/4.08
Mistral-Small-24B-IT-2501 | 18.67/7.78/4.56 | 17.92/6.22/5.00 | 19.56/10.00/6.78 | 25.56/12.83/12.28 | 20.42/9.21/7.15
Ministral-8B-IT-2410 | 8.89/4.22/2.00 | 13.69/5.67/5.11 | 16.67/11.56/4.33 | 21.33/5.33/8.67 | 15.15/6.69/5.03
AVG | 27.01/12.39/12.77 | 18.96/10.25/6.22 | 17.32/11.65/8.81 | 34.13/23.87/18.78 | 24.36/14.63/10.34
4 Experiment

In this section, we conduct extensive experiments to evaluate various LLMs on MTR-Bench, guided by the following research questions:
- RQ1: How do current LLMs perform overall on our benchmark?
- RQ2: How does LLM performance vary as the number of reasoning turns increases?
- RQ3: Does superior performance equate to greater efficiency in the number of interactions?
- RQ4: How do LLMs' instruction-following abilities and basic reasoning capabilities hold up under multi-turn scenarios?
- RQ5: Which reasoning patterns are relatively more important in multi-turn reasoning scenarios?

Figure 3: Model accuracy vs. interaction turns across different tasks and difficulty levels.

4.1 Experiment Setup

Model Selection. We evaluate both reasoning-enhanced LLMs and non-reasoning LLMs in our experiments. Among the reasoning-enhanced models, we include o3-mini [16], DeepSeek-R1 [7], QwQ-32B [34], and the DeepSeek-R1-Distilled series [7]. For non-reasoning models, we select GPT-4o [15], Qwen-Max [41], Gemma-3 [32], Qwen2.5 [41], Llama-3.1 [8], and the Mistral series [1]. This diverse selection of both open-source and closed-source models ensures comprehensive coverage of current LLM capabilities in multi-turn reasoning scenarios. For all models, we limit the maximum number of interaction turns to 15 due to the substantial costs of GPU usage and API calls.

4.2 Main Performance (RQ1)

We first present the overall results of models on the four reasoning task classes of our dataset in Table 1. From the results, we can observe the following conclusions:

• Impact of Task Difficulties: Across all models, performance decreases progressively from "easy" to "medium" to "hard". This demonstrates the rationality of our dataset's difficulty stratification.

• Comparison Between Reasoning and Non-Reasoning Models: When comparing state-of-the-art reasoning models (e.g., o1, R1) with non-reasoning models, it is evident that reasoning models significantly outperform their non-reasoning counterparts. Notably, even smaller-parameter reasoning models (e.g., QwQ-32B) surpass the strongest non-reasoning models within the same series (e.g., Qwen-Max). This highlights the necessity of enhancing reasoning capabilities in model design.

• Comparison Between Non-Reasoning Models and Their Distilled Versions: Comparing the non-reasoning and reasoning-specific versions (e.g., R1-Distill) of the same model series shows nearly equivalent performance. While R1-Distill excels in math and code-related tasks, it fails to generalize effectively on our OOD tasks. This indicates that merely applying SFT distillation is insufficient to generalize reasoning abilities, underscoring the necessity of reinforcement learning [19].

• Task-Specific Observations: A closer inspection of individual tasks reveals that while o3-mini consistently outperforms other models, particularly in IP and SG, its performance is similar to QwQ-32B and R1 in DA and SO. The distinction between the two groups of categories lies in the nature of environmental feedback: in DA and SO tasks, the feedback is less straightforward, requiring models to first correctly interpret the feedback before proceeding with their reasoning. This additional interpretation and reasoning may deviate significantly from the training distribution.
• Performance of Small Models: Models with fewer than 7B parameters achieve almost no meaningful scores, further emphasizing the difficulty of our benchmark. Consequently, in subsequent analyses, we will focus on models with 32B or more parameters.

Figure 4: Efficiency comparison of interaction turns between models on correctly-answered problems. For each pair (A vs B), A is labeled as Less if it requires fewer turns than B, and More otherwise. A higher proportion of Less indicates superior efficiency in problem-solving.

Figure 5: Invalid rate across evaluated models. A larger rate indicates weaker instruction-following and reasoning capabilities.

4.3 Turn Analysis (RQ2)

In this section, we analyze how the number of interaction turns affects model performance. Figure 3 illustrates the accuracy of five representative models across various tasks and difficulty levels, with different numbers of interaction turns. Our analysis focuses on three key perspectives:

• Task-Specific Analysis: IP benefits the most from increased interaction turns. In contrast, for DA and SO, additional turns do not always lead to significant performance gains. This suggests that even current reasoning models are primarily strong in direct reasoning based on inductive inference, but still weak in deductive and abductive reasoning, which rely on premise assumptions.

• Reasoning vs. Non-Reasoning Models: Overall, the accuracy improvement of non-reasoning models with increasing turns is significantly lower than that of reasoning models. This indirectly suggests that non-reasoning models are less effective at utilizing feedback in multi-turn dialogues.

• Comparison among Reasoning Models: We find that o3-mini does not have a clear advantage across arbitrary numbers of turns, especially when the number of reasoning turns is small (e.g., 5). However, as the number of turns increases, o3-mini demonstrates the most significant improvement in accuracy, particularly in IP. This further underscores o3-mini's strong abilities in leveraging and integrating historical interaction information over multiple turns.

4.4 Efficiency Analysis (RQ3)

To further analyze the relationship between performance and efficiency, we conduct an analysis of three reasoning models. Specifically, we select a random sample of 100 problems that are correctly answered by all three models for each task type. We then compare the number of interaction turns each model pair requires to succeed, and calculate their efficiency scores as defined in Section 2. As shown in Figure 4, surprisingly, among the three models, o3-mini, which demonstrates the best performance, is relatively the least efficient, while R1 achieves the highest efficiency.
This suggests that higher performance does not necessarily translate into better efficiency in terms of interaction turns. Combined with the conclusions in Section 4.2, this indicates that o3-mini's superior performance does not stem from efficient reasoning. Instead, it
may be more adept at long-term planning than the others, making reasonable use of feedback in each turn to tackle more complex tasks.

Table 2: Pattern analysis on MTR-Bench. Ass.: Associate. Ver.: Verify. Pla.: Plan. Fee.: Feedback. Values are per-turn occurrence counts (Ass./Ver./Pla./Fee.) for each task class.

Model | IP | DA | SO | SG
QwQ-32B | 11.1/6.9/2.3/7.2 | 11.6/7.7/2.7/6.2 | 10.0/5.2/3.9/5.5 | 8.7/5.4/4.1/3.1
Deepseek-R1 | 10.6/6.6/2.2/5.2 | 11.1/7.0/2.3/4.1 | 9.9/5.3/3.8/3.7 | 7.0/4.1/3.1/1.9
R1-Distill-Qwen-32B | 7.6/2.7/2.7/3.0 | 8.7/3.3/3.8/3.0 | 8.2/2.9/3.5/4.3 | 8.1/2.8/4.0/2.9

4.5 Invalid Operation Analysis (RQ4)

To better understand the poor performance of current LLMs on our benchmark, we conduct a manual review of model responses. Our analysis reveals that beyond limitations in long-term reasoning ability, a significant factor is the presence of "invalid operations" even in the best-performing models. These invalid operations fall into two categories: instruction-following failures, where models fail to format queries according to the format requirements, and operational failures, where models cannot perform legitimate operations (e.g., making out-of-bounds moves in "Knight Battle"), which often requires basic reasoning capabilities. As shown in Figure 5, we can draw the following conclusions:

• Overall, smaller models exhibit a higher Invalid Rate (IR), particularly 1.5B-sized models, which struggle with basic operation validity, reflecting their limited instruction-following capabilities.

• Surprisingly, distilled models show higher IR than their original versions, suggesting that while distillation may enhance reasoning, it potentially compromises stability in multi-turn interactions.

• Comparing state-of-the-art reasoning models with non-reasoning models, the former exhibit lower IR, further confirming the superior capabilities of reasoning models in multi-turn scenarios.

4.6 Reasoning Pattern Analysis (RQ5)

To gain deeper insights into the reasoning capabilities of models on our benchmark, we conduct a reasoning pattern analysis on three open-source reasoning models. Specifically, using Qwen2.5-72B as the analyzer, we measure the average per-turn frequency of four reasoning patterns: original problem recall (Associate), error checking (Verify), strategic planning (Plan), and feedback analysis (Feedback). The results are summarized in Table 2, from which we draw the following conclusions:

• The stronger reasoning models QwQ-32B and R1 demonstrate superior capabilities in "Associate", "Verify", and "Feedback" compared to R1-Distill-32B, indicating that these three abilities are crucial for multi-turn reasoning. Enhancing these capabilities could potentially yield improvement.

• Although planning is essential for multi-turn tasks, the three models show similar planning frequencies across most tasks. However, SG exhibits notably higher planning frequency, suggesting that competitive scenarios inherently demand stronger strategic planning capabilities.

5 Related Work

Reasoning Evaluation of LLMs. Existing methods for evaluating LLMs' reasoning primarily rely on static, single-turn benchmarks across various domains, including mathematical [11, 5, 21], code [3, 17], commonsense [44, 11, 31, 20], and logical reasoning [9, 23]. However, these static datasets are susceptible to data leakage [28] and performance saturation [27], and fail to simulate real-world reasoning scenarios [12]. To address these limitations, recent works like MT-Bench [45], MINT [37], and GameArena [12] have adopted multi-turn dialogue
settings. However, MT-Bench primarily focuses on conversational coherence rather than reasoning capabilities, MINT focuses on tool-usage evaluation, while GameArena's human-in-the-loop approach introduces bias and reduces efficiency. In contrast, MTR-Bench provides an automated framework that effectively evaluates multi-turn reasoning.

Dynamic Evaluation. Recent works have also explored using games for dynamic evaluation of LLMs [39, 25]. Various studies have employed grid-based games [35], communication games [40], and adversarial language games [4]. Whether through self-play, competition between different LLM agents, or human-LLM interactions, these approaches mainly focus on adversarial scenarios and are typically limited to three or fewer game types, sometimes even a single game. In contrast, MTR-Bench encompasses four distinct types across 40 diverse tasks, significantly expanding the evaluation scope to achieve comprehensive assessment.

6 Conclusion

In this paper, we present MTR-Bench, a comprehensive benchmark for evaluating LLMs' multi-turn reasoning capabilities. The benchmark comprises 40 diverse tasks across four reasoning categories with adjustable difficulty levels, supported by an automated evaluation framework. Our extensive experiments reveal both strengths and limitations of current LLMs in interactive reasoning, providing valuable insights for future research in LLM evaluation.

References

[1] Mistral AI. https://mistral.ai/news/mistral-small-3. Hugging Face, 2025. [2] Anurag Ajay, Seungwook Han, Yilun Du, Shuang Li, Abhi Gupta, Tommi S. Jaakkola, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Akash Srivastava, and Pulkit Agrawal. Compositional foundation models for hierarchical planning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. [3] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. [4] Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du, and Xiaolong Li. Self-playing adversarial language game enhances LLM reasoning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. [5] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. [6] Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023.
[7] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue,