failure cases, limiting the model’s ability to learn from mistakes. This can result in preference bias and the generation of low-quality responses. To address these issues, we propose an Inferential Preference Mining (IPM) method to collect high-quality preference data, along with a dynamic data routing mechanism that identifies psy-errors and routes data to targeted DPO training phases. Specifically, we first perform inference using the Qwen-SFT and Llama-SFT models to generate inference data, which are paired with the corresponding ground truth to form candidate preference pairs. Subsequently, we apply a dynamic data routing mechanism to filter these pairs based on the four types of common psy-errors observed in Obs 1 (§3.1). This process yields the final IPM-PrefDial dataset.

| Criteria (ESConv) | Total | Assistant | User |
|---|---|---|---|
| # Dialogues | 1,040 | – | – |
| # Utterances | 29,526 | 14,763 | 14,763 |
| Avg. Turns of Dialogue | 28.40 | 14.20 | 14.20 |
| Avg. Char of Utterance | 95.85 | 112.17 | 79.54 |

| Criteria (IPM-PrefDial) | Total | Qwen | Llama |
|---|---|---|---|
| # Strategy Pref-Pairs | 21,370 | 10,651 | 10,719 |
| # Response Pref-Pairs | 11,887 | 6,041 | 5,846 |
| Avg. Char of Chosen | 124.89 | 124.72 | 125.06 |
| Avg. Char of Rejected | 83.82 | 81.04 | 86.59 |
| # Lack Emp. Response | 4,371 | 2,288 | 2,083 |
| # Emo. Shift Response | 3,600 | 1,814 | 1,786 |
| # Temp. Res. Response | 3,916 | 1,939 | 1,977 |

Table 1: Statistics of the ESConv and IPM-PrefDial datasets.

Concretely, for each subtask:

For the Strategy Planner (SP), we identify suboptimal strategies $s_r$ exhibiting the Strategy Mismatch psy-error, pair them with the gold strategies $s_c$ and context $c$, and construct the $\mathcal{D}_{\text{SP-dpo}}$ dataset, which is routed to the SP’s DPO training phase:

$$\mathcal{D}_{\text{SP-dpo}} = \left\{\left(c^{(i)}, s_c^{(i)}, s_r^{(i)}\right)\right\}_{i=1}^{|\mathcal{D}_{\text{SP-dpo}}|}. \tag{1}$$

For the Response Generator (RG), the SFT model generates multiple responses based on the gold strategy. Suboptimal responses $a_r$ exhibiting typical psychological errors, namely Lack of Empathy, Early Emotion Shift, or Template Response, are identified via the dynamic data routing.
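The dynamic data routing step described above can be sketched in a few lines. The error labels and record fields below are illustrative stand-ins, not the paper's actual data schema: strategy-level errors feed the SP preference set, response-level errors feed the RG preference set.

```python
# Hypothetical sketch of dynamic data routing: candidate preference pairs
# are split by detected psy-error type into the SP-DPO and RG-DPO datasets.
SP_ERRORS = {"strategy_mismatch"}
RG_ERRORS = {"lack_of_empathy", "early_emotion_shift", "template_response"}

def route_candidates(candidates):
    """Route candidate pairs to the strategy or response preference dataset."""
    d_sp, d_rg = [], []
    for rec in candidates:
        err = rec.get("psy_error")
        if err in SP_ERRORS:
            # (context c, chosen gold strategy s_c, rejected strategy s_r)
            d_sp.append((rec["context"], rec["gold_strategy"], rec["pred_strategy"]))
        elif err in RG_ERRORS:
            # (context c, gold strategy s, chosen response a_c, rejected response a_r)
            d_rg.append((rec["context"], rec["gold_strategy"],
                         rec["gold_response"], rec["pred_response"]))
        # records with no detected psy-error are discarded
    return d_sp, d_rg
```

In this sketch, filtering and routing are a single pass; the chosen element of each pair is always the ground truth, matching the construction of Eqs. (1) and (2).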
These are then paired with the corresponding gold responses $a_c$ to construct the $\mathcal{D}_{\text{RG-dpo}}$ dataset, which is routed to the RG’s DPO training phase:

$$\mathcal{D}_{\text{RG-dpo}} = \left\{\left(c^{(i)}, s^{(i)}, a_c^{(i)}, a_r^{(i)}\right)\right\}_{i=1}^{|\mathcal{D}_{\text{RG-dpo}}|}. \tag{2}$$

4.2 Datasets Statistics

ESConv Dataset. We employ the ESConv dataset for SFT training ($\mathcal{D}_{\text{sft}}$), which includes 1,040 dialogues with an average of 14.2 turns per dialogue and 95.9 characters per turn. Strategy distribution and temporal trends are shown in Table 6 and Figure 8 in Appendix C.1.

IPM-PrefDial Dataset. The IPM-PrefDial dataset contains 21,370 strategy preference pairs ($\mathcal{D}_{\text{SP-dpo}}$) and 11,887 response preference pairs ($\mathcal{D}_{\text{RG-dpo}}$). In $\mathcal{D}_{\text{RG-dpo}}$, chosen responses average 124.89 characters and rejected ones 83.82. Major rejection reasons include lack of empathy (4,371), early emotion shift (3,600), and template response (3,916). Details are in Appendix C.2.

| Model | Type | Flu. ↑ | Pro. ↑ | Emp. ↑ | Hel. ↑ |
|---|---|---|---|---|---|
| Qwen | Chosen | 3.82 | 3.52 | 3.20 | 2.95 |
| Qwen | Rejected | 3.65 | 3.09 | 2.41 | 2.32 |
| Qwen | Improve (↑) | 4.66% | 13.92% | 32.78% | 27.16% |
| Llama | Chosen | 3.99 | 3.74 | 3.33 | 3.09 |
| Llama | Rejected | 3.93 | 3.21 | 2.40 | 2.39 |
| Llama | Improve (↑) | 1.53% | 16.52% | 38.75% | 29.29% |

Table 2: LLM-based evaluation scores for chosen and rejected responses across four dimensions.

4.3 Datasets Quality

We evaluate the content quality of 100 samples from $\mathcal{D}_{\text{RG-dpo}}$ using gpt-4.1-mini-2025-04-14 [2]. As shown in Table 2, chosen responses outperform rejected ones across four LLM-based metrics, with over a
30% gain in Empathy.

5 Methodology

5.1 Decoupled ESC Framework

Obs 1 (§3.1) and Obs 2 (§3.2) show that vanilla training of strategy selection and response generation can lead to negative optimization, hindering the reduction of preference bias and the improvement of response quality. To address this, as shown in Figure 5, we propose a decoupled ESC optimization framework, inspired by the Extended Process Model of Emotion Regulation (EPMER) (Gross, 2015), which divides emotion regulation into three sequential stages: identification, strategy selection, and implementation. We decouple the ESC generation process into two independent subtasks: Strategy Planning and Response Generation. This enables more stable and controllable training for each.

Specifically, we adopt a decoupled two-stage modeling framework: a Strategy Planner selects an optimal strategy based on the dialogue history $c_t = (u_0, a_0, \ldots, u_{t-1}, a_{t-1}, u_t)$, where $u$ and $a$ denote user and assistant utterances, respectively. The strategy is generated as $s_t \sim \text{LLM}_{\text{SP}}(s \mid c_t)$. Then, a Response Generator generates an empathetic reply conditioned on both the selected strategy and the dialogue context: $a_t \sim \text{LLM}_{\text{RG}}(a \mid c_t, s_t)$.

[2] https://openai.com/index/gpt-4-1
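The two-stage decoupled inference of §5.1, first sampling $s_t \sim \text{LLM}_{\text{SP}}(s \mid c_t)$ and then $a_t \sim \text{LLM}_{\text{RG}}(a \mid c_t, s_t)$, can be sketched as follows. Here `strategy_planner` and `response_generator` are hypothetical callables wrapping the two fine-tuned models; they are not the paper's actual API.

```python
def esc_turn(context, strategy_planner, response_generator):
    """Decoupled two-stage inference for one dialogue turn.

    `strategy_planner` and `response_generator` stand in for LLM_SP and
    LLM_RG; any callables with these signatures (e.g. thin wrappers around
    model generation calls) will do.
    """
    strategy = strategy_planner(context)           # s_t ~ LLM_SP(s | c_t)
    reply = response_generator(context, strategy)  # a_t ~ LLM_RG(a | c_t, s_t)
    return strategy, reply
```

Because the two calls are independent, either module can be swapped or retrained without touching the other, which is the controllability the decoupled framework is designed for.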
Figure 5: Comparison between the previous vanilla SFT training paradigm and our proposed Decoupled ESC framework.

5.2 Decoupled-SFT and Decoupled-DPO

Decoupled-SFT. To optimize the performance of the Strategy Planner and Response Generator, we first initialize the two modules using the SFT method to endow them with the capabilities for strategy planning and empathetic response generation. Specifically, based on real dialogues from the ESC dataset, we construct a turn-level training dataset $\mathcal{D}_{\text{sft}} = \left\{\left(c^{(i)}, s^{(i)}, a^{(i)}\right)\right\}_{i=1}^{|\mathcal{D}_{\text{sft}}|}$. The two modules are then fine-tuned separately using SFT:

(1) Strategy Planner: Using the dialogue context $c$ and the supporter’s response strategy $s$, we perform turn-level training to minimize the loss function:

$$\mathcal{L}_{\text{SP-sft}} = -\mathbb{E}_{(c,s)\sim \mathcal{D}_{\text{sft}}}\left[\log \text{LLM}_{\text{SP}}(s \mid c)\right]. \tag{3}$$

(2) Response Generator: Given the context $c$, strategy $s$, and response $a$, we minimize the loss:

$$\mathcal{L}_{\text{RG-sft}} = -\mathbb{E}_{(c,s,a)\sim \mathcal{D}_{\text{sft}}}\left[\log \text{LLM}_{\text{RG}}(a \mid c, s)\right]. \tag{4}$$

Decoupled-DPO. To further optimize the Strategy Planner and Response Generator and reduce psychological errors, we apply Direct Preference Optimization (DPO), an offline preference optimization method. Leveraging the previously constructed preference datasets $\mathcal{D}_{\text{SP-dpo}}$ and $\mathcal{D}_{\text{RG-dpo}}$, we separately train the two modules to improve their decision-making and response generation capabilities. For the Strategy Planner, we train the model on the strategy preference dataset $\mathcal{D}_{\text{SP-dpo}}$ to enhance its preference for gold strategies while mitigating bias toward suboptimal
strategies, thereby reducing strategy selection bias. The loss function is defined as follows:

$$\mathcal{L}_{\text{SP-dpo}} = -\mathbb{E}_{(c, s_c, s_r)\sim \mathcal{D}_{\text{SP-dpo}}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(s_c \mid c)}{\pi_{\text{ref}}(s_c \mid c)} - \beta \log \frac{\pi_\theta(s_r \mid c)}{\pi_{\text{ref}}(s_r \mid c)}\right)\right], \tag{5}$$

where $\pi_\theta$ denotes the model being optimized, and $\pi_{\text{ref}}$ denotes the reference model after SFT. For the Response Generator, we train with $\mathcal{D}_{\text{RG-dpo}}$ to enhance the quality and empathy of responses through the following loss function:

$$\mathcal{L}_{\text{RG-dpo}} = -\mathbb{E}_{(c, s, a_c, a_r)\sim \mathcal{D}_{\text{RG-dpo}}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(a_c \mid c, s)}{\pi_{\text{ref}}(a_c \mid c, s)} - \beta \log \frac{\pi_\theta(a_r \mid c, s)}{\pi_{\text{ref}}(a_r \mid c, s)}\right)\right]. \tag{6}$$

Through this decoupled two-stage DPO optimization, we effectively reduce bias in strategy planning and enhance the empathy of responses, resulting in more robust and emotionally intelligent dialogue generation.

6 Experiments

In this section, we conduct extensive experiments to address the following research questions:

• RQ1: Can Vanilla-DPO better mitigate preference bias and improve response quality compared to Vanilla-SFT?
• RQ2: Does the decoupled ESC framework lead to better bias reduction and response quality?
• RQ3: In the decoupled setting, is DPO more effective than SFT?
• RQ4: Can Decoupled-DPO more effectively reduce common psychological errors by separately optimizing strategy and response?

| Backbone | Paradigm | Method | D-1 | B-1 | F1 | R-L | Flu. | Pro. | Emp. | Hel. | B ↓ | Q_W ↑ | Q ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B-Instruct | Vanilla | Base | 93.50 | 9.75 | 14.92 | 12.59 | 3.55 | 2.53 | 1.89 | 1.38 | 2.17 | 8.41 | 8.06 |
| Qwen2.5-7B-Instruct | Vanilla | +Direct-Refine | 95.79 | 10.91 | 16.26 | 14.35 | 4.17 | 2.97 | 2.20 | 1.77 | 1.54 | 13.52 | 10.46 |
| Qwen2.5-7B-Instruct | Vanilla | +Self-Refine | 97.04 | 10.28 | 15.85 | 13.85 | 3.68 | 2.64 | 1.94 | 1.34 | 1.45 | 10.92 | 9.63 |
| Qwen2.5-7B-Instruct | Vanilla | +Emotion CoT | 97.17 | 10.61 | 16.07 | 14.06 | 3.95 | 2.70 | 2.50 | 1.51 | 1.87 | 6.89 | 6.63 |
| Qwen2.5-7B-Instruct | Vanilla | SFT | 90.93 | 15.61 | 20.99 | 17.78 | 3.30 | 2.61 | 2.29 | 2.12 | 0.31 | 24.89 | 20.27 |
| Qwen2.5-7B-Instruct | Vanilla | DPO | 88.13 | 16.23 | 21.24 | 18.03 | 3.47 | 2.67 | 2.36 | 2.23 | 0.30 | 22.25 | 18.97 |
| Qwen2.5-7B-Instruct | Decoupled | Base | 97.55 | 10.97 | 16.33 | 14.19 | 3.92 | 2.71 | 2.17 | 1.38 | 1.92 | 13.96 | 12.07 |
| Qwen2.5-7B-Instruct | Decoupled | SFT | 91.37 | 16.69 | 22.15 | 18.76 | 3.93 | 2.72 | 2.40 | 2.11 | 0.27 | 26.94 | 21.37 |
| Qwen2.5-7B-Instruct | Decoupled | DPO | 89.84 | 17.73 | 22.86 | 19.31 | 3.99 | 2.90 | 2.54 | 2.02 | 0.22 | 27.09 | 21.77 |
| Llama3.1-8B-Instruct | Vanilla | Base | 95.09 | 12.38 | 16.85 | 14.01 | 4.35 | 3.21 | 2.36 | 1.76 | 1.03 | 15.74 | 14.09 |
| Llama3.1-8B-Instruct | Vanilla | +Direct-Refine | 90.07 | 11.36 | 14.97 | 12.79 | 3.35 | 2.82 | 2.16 | 1.35 | 1.72 | 12.12 | 9.98 |
| Llama3.1-8B-Instruct | Vanilla | +Self-Refine | 87.18 | 10.72 | 14.26 | 12.20 | 3.53 | 2.95 | 2.40 | 1.45 | 1.68 | 13.93 | 12.00 |
| Llama3.1-8B-Instruct | Vanilla | +Emotion CoT | 77.32 | 10.06 | 13.32 | 11.33 | 3.24 | 2.88 | 2.56 | 1.63 | 1.86 | 13.31 | 11.35 |
| Llama3.1-8B-Instruct | Vanilla | SFT | 91.29 | 15.75 | 21.38 | 18.11 | 3.31 | 2.52 | 2.22 | 2.06 | 0.26 | 24.54 | 19.97 |
| Llama3.1-8B-Instruct | Vanilla | DPO | 91.25 | 15.15 | 20.49 | 17.25 | 3.41 | 2.79 | 2.41 | 2.28 | 0.28 | 24.00 | 19.89 |
| Llama3.1-8B-Instruct | Decoupled | Base | 94.65 | 12.67 | 16.70 | 14.01 | 4.24 | 3.24 | 2.34 | 1.66 | 1.62 | 7.54 | 7.67 |
| Llama3.1-8B-Instruct | Decoupled | SFT | 91.51 | 16.97 | 22.42 | 19.12 | 3.87 | 2.74 | 2.39 | 1.95 | 0.23 | 26.03 | 21.36 |
| Llama3.1-8B-Instruct | Decoupled | DPO | 90.35 | 17.50 | 22.59 | 19.16 | 3.81 | 2.73 | 2.64 | 2.17 | 0.15 | 27.10 | 22.94 |

Table 3: Comparison of models under different optimization paradigms and training methods (Automatic Metrics ↑, LLM-based Metrics ↑, Strategy Metrics). The best score is in bold, while the second best score is underlined.

6.1 Experimental Setup

Backbone Models. We conducted experiments using two widely adopted large language models: Qwen2.5-7B-Instruct (Team, 2024) and Llama3.1-8B-Instruct (Dubey et al., 2024).

Baselines. We compare vanilla coupled models (Base, SFT, DPO) with prompt-optimization baselines such as Direct-Refine, Self-Refine (Madaan et al., 2023), and Emotional CoT (Wei et al., 2022).

Datasets. The ESConv (Liu et al., 2021) dataset is split into train, valid, and test sets in an 8:1:1
ratio, with the training set used for SFT. The IPM-PrefDial dataset is used for DPO training.

Evaluation Metrics. We evaluate model performance using the following metrics: (1) Automatic Metrics, including BLEU-1 (B-1) (Papineni et al., 2002), Distinct-1 (D-1) (Li et al., 2015), F1-score (F1), and ROUGE-L (R-L) (Lin, 2004); (2) LLM-based Metrics, including Fluency (Flu.), Professionalism (Pro.), Empathy (Emp.), and Helpfulness (Hel.), all rated on a 5-point Likert scale (Joshi et al., 2015); (3) Strategy Metrics, including preference bias (B) (Kang et al., 2024) and strategy prediction accuracy (weighted-F1 Q_W and macro-F1 Q). Detailed definitions of the evaluation metrics, the prompts, and the bias calculation formula are provided in Appendix E.

Implementation Details. We use a learning rate of 1e-5 and a batch size of 32 for SFT, training for 3 epochs, while DPO is trained for 1 epoch. Experiments are conducted on 4 × 24GB RTX 4090 GPUs. For LLM-based evaluation, we randomly sample 100 instances from the test set and assess them using gpt-4.1-mini-2025-04-14. More implementation details are provided in Appendix D.1.

6.2 Experimental Results

Vanilla-DPO vs. Vanilla-SFT (RQ1). To answer RQ1, whether DPO can mitigate preference bias and improve response quality in the vanilla setting, we compare Vanilla-DPO with Vanilla-SFT.

Figure 6: Comparison of human-evaluated win rates for Decoupled-DPO (Llama) and Vanilla-DPO (Llama).

As shown in Table 3, DPO consistently outperforms SFT on LLM-based metrics for both Qwen and Llama, indicating enhanced response quality. However, preference evaluation results are mixed: for Qwen, DPO slightly reduces the preference bias (B) compared to SFT; for Llama, DPO shows higher bias and lower prediction accuracy. We attribute this to DPO’s greater sensitivity to data quality in the vanilla setting.
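As a concrete reference for the automatic metrics above, Distinct-1 measures lexical diversity as the ratio of unique unigrams to total unigrams over a set of generated responses. Below is a minimal sketch assuming whitespace tokenization; published implementations may tokenize differently:

```python
def distinct_1(responses):
    """Distinct-1: unique unigrams divided by total unigrams across responses."""
    tokens = [tok for resp in responses for tok in resp.split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0
```

A higher score indicates less repetitive, more varied wording across the evaluated responses.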
When training data contains conflicting optimization signals between strategy and content, DPO may suffer from negative transfer, leading to performance degradation. To further examine how different preference data affect coupled models, we provide a detailed comparison in Appendix B.

Decoupled vs. Vanilla (RQ2). To answer RQ2, whether the decoupled ESC framework better mitigates preference bias and improves empathetic response quality, we compare the Decoupled and Vanilla paradigms under both SFT and DPO training. As shown in Table 3, decoupled models consistently outperform their vanilla counterparts across most metrics, including automatic metrics, LLM-based metrics, and strategy metrics. Notably, the preference bias of the Decoupled-DPO models trained with Qwen and Llama is 0.22 and 0.15, respectively, representing a substantial improvement over Vanilla-DPO and Vanilla-SFT. These results highlight the effectiveness of decoupled optimization in mitigating preference bias. Furthermore, we invited 20 master’s students with a background in psychology to evaluate 20 samples under the guidance of a licensed psychological counselor. As shown in Figure 6, Decoupled-DPO outperforms Vanilla-DPO across all four LLM-based metrics. We attribute this to the decoupled framework, which avoids the conflict and negative optimization often encountered in vanilla training. It also simplifies the construction of preference datasets. In addition, the decoupled strategy planning module can serve as an external planner, which, as Kang et al. (2024) note, helps reduce strategy bias and improves the coherence and empathy of responses.

| Backbone | GT_Stra | SFT | DPO | Flu. ↑ | Pro. ↑ | Emp. ↑ | Hel. ↑ |
|---|---|---|---|---|---|---|---|
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✗ | 3.66 | 3.02 | 2.51 | 2.37 |
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✓ | 3.77 | 3.26 | 2.75 | 2.67 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✗ | 3.67 | 3.18 | 2.71 | 2.52 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✓ | 3.94 | 3.44 | 2.90 | 2.73 |

Table 4: Ablation study on the Response Generator.

Decoupled-DPO vs. Decoupled-SFT (RQ3). To answer RQ3, whether DPO outperforms standard SFT in a decoupled setting, we compare Decoupled-DPO and Decoupled-SFT. As shown in Table 3, Decoupled-DPO outperforms the other models on most LLM-based and bias metrics for both Qwen and Llama, achieving state-of-the-art (SOTA) performance. Importantly, the improvements of DPO over SFT are more pronounced in the decoupled setting, with bias for Qwen dropping from 0.27 to 0.22, and for Llama from 0.23 to 0.15. In contrast, the vanilla framework shows little change from SFT to DPO, further validating the synergy between decoupling and DPO training. Table 4 illustrates the impact of DPO training on the Response Generator. Given the ground-truth strategy, the DPO-trained Response Generator consistently outperforms the SFT model across all LLM-based metrics.

Figure 7: Proportion of psy-error types in the responses of Qwen and Llama under the decoupled framework (panels: Decoupled-Qwen, Decoupled-Llama).

Decoupled-DPO on Psychological Errors (RQ4). To verify the effectiveness of Decoupled-DPO in improving response quality, we adopt the same error categorization approach as Obs 1 (§3.1). As shown in Figures 2 and 7, Decoupled-DPO achieves the highest proportion of No Error cases at 27%, outperforming Qwen-SFT and Llama-SFT by an average of 7%. This demonstrates that Decoupled-DPO effectively reduces common psychological errors and improves overall response quality.
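Both decoupled objectives in Eqs. (5) and (6) share the same pairwise DPO form, differing only in whether the chosen/rejected items are strategies or responses. A minimal pure-Python sketch for a single preference pair follows; the default β here is illustrative (Appendix D lists the β values actually used), and inputs are sequence log-probabilities under the trained policy and the frozen SFT reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.2):
    """Pairwise DPO loss -log(sigmoid(margin)) for one preference pair.

    margin = beta * [(log pi_theta(y_c) - log pi_ref(y_c))
                     - (log pi_theta(y_r) - log pi_ref(y_r))]
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)) written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

The loss shrinks as the policy assigns the chosen strategy or response a larger implicit-reward margin over the rejected one relative to the reference model, which is exactly the pressure Eqs. (5) and (6) place on the two modules.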
7 Conclusion

In this paper, we propose a Decoupled ESC framework that separates strategy planning from empathetic response generation, enabling targeted optimization and avoiding mutual interference. Extensive experiments demonstrate that our Decoupled ESC framework significantly outperforms joint optimization baselines, effectively reducing preference bias and improving response quality in ESC tasks.

Limitations

Due to computational resource constraints, our study validates the proposed method only on relatively small-scale models (7B, 8B, and 9B). While significant improvements are observed at this scale, the effectiveness on larger models (e.g., 70B) remains an open question and a key direction for future work. In addition, the generalizability of our decoupled ESC optimization framework warrants further investigation under other preference optimization methods such as KTO, SimPO, and IPO.

Ethics Statement

Data Usage Agreement. This research utilizes the ESConv and FailedESConv datasets (Liu et al., 2021), which have been obtained with proper authorization and in compliance with data usage agreements. We ensure that all data used in this study are handled responsibly and in accordance with ethical standards, respecting the privacy and confidentiality of the individuals involved. All necessary agreements and permissions for the use of these datasets have been signed, ensuring full compliance with data protection regulations.

Model Usage Policy. It should be noted that while the model demonstrates certain capabilities in psychological support tasks, the strategies
and expressions it adopts differ from those used in real-life professional counseling. Given the diversity of users’ emotional states and circumstances, the model’s responses may not always align with professional therapeutic standards and, in some cases, may unintentionally affect users’ emotional well-being. Therefore, this model is intended for academic research purposes only and is not recommended for commercial use. Caution is advised when applying the model outside controlled research environments, and it should not be used for real-world psychological counseling without supervision by qualified professionals.

References

William Bor, Angela J Dean, Jacob Najman, and Reza Hayatbakhsh. 2014. Are child and adolescent mental health problems increasing in the 21st century? A systematic review. Australian & New Zealand Journal of Psychiatry, 48(7):606–616.

Gro Harlem Brundtland. 2000. Mental health in the 21st century. Bulletin of the World Health Organization, 78(4):411.

Yirong Chen, Xiaofen Xing, Jingkai Lin, Huimin Zheng, Zhenyu Wang, Qi Liu, and Xiangmin Xu. 2023. SoulChat: Improving LLMs’ empathy, listening, and comfort abilities through fine-tuning with multi-turn empathy conversations. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1170–1183.

Yuxin Chen, Junfei Tan, An Zhang, Zhengyi Yang, Leheng Sheng, Enzhi Zhang, Xiang Wang, and Tat-Seng Chua. 2024. On softmax direct preference optimization for recommendation. arXiv preprint arXiv:2406.09215.

Yang Deng, Wenxuan Zhang, Wai Lam, See-Kiong Ng, and Tat-Seng Chua. 2024. Plug-and-play policy planner for large language model powered dialogue agents. In ICLR.

Yang Deng, Wenxuan Zhang, Yifei Yuan, and Wai Lam. 2023. Knowledge-enhanced mixed-initiative dialogue system for emotional support conversations. In ACL (1).

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
James J Gross. 2002. Emotion regulation: Affective, cognitive, and social consequences. Psychophysiol- ogy, 39(3):281–291. James J Gross. 2015. Emotion regulation: Current status and future prospects. Psychological inquiry , 26(1):1–26. Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, and Bing Qin. 2024. Planning like human: A dual-process framework for dialogue plan- ning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers) , pages 4768–4791. Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Yiheng Sun, Zerui Chen, Ming Liu, and Bing Qin. 2025. Simulation-free hierarchical latent policy planning for proactive dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 39, pages 24032–24040. Clara E Hill. 1999. Helping skills: Facilitating explo- ration, insight, and action. American Psychological Association . Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. Lora: Low-rank adap- tation of large language models. ICLR , 1(2):3. Ankur Joshi, Saket Kale, Satish Chandel, and D Kumar Pal. 2015. Likert scale: Explored and explained. British journal of applied science & technology , 7(4):396. Dongjin Kang, Sunghwan Mac Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, and Jinyoung Yeo. 2024. Can large language models be good emotional supporter? miti- gating preference bias on emotional support conver- sation. In | https://arxiv.org/abs/2505.16995v1 |
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers) , pages 15232–15261. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi- cient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles . Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xi- angru Peng, and Jiaya Jia. 2024. Step-dpo: Step-wise preference optimization for long-chain reasoning of llms. arXiv preprint arXiv:2406.18629 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055 . Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out , pages 74–81. Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers) , pages 3469–3483. Yawei Luo and Yi Yang. 2024. Large language model and domain-specific model collaboration for smart education. Frontiers of Information Technology & Electronic Engineering , 25(3):333–341.Shaojie Ma, Yawei Luo, and Yi Yang. 2023. Personas- based student grouping using reinforcement learning and linear programming. Knowledge-Based Systems , 281:111071. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. In NeurIPS . World Health Organization. 2022. World mental health report: Transforming mental health for all . 
World Health Organization. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instruc- tions with human feedback. Advances in neural in- formation processing systems , 35:27730–27744. Pamela O Paisley and George McMahon. 2001. School counseling for the 21st century: Challenges and opor- tunities. Professional school counseling , 5(2):106. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics , pages 311–318. Martin Prince, Vikram Patel, Shekhar Saxena, Mario Maj, Joanna Maselko, Michael R Phillips, and Atif Rahman. 2007. No health without mental health. The lancet , 370(9590):859–877. Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, and Zhenzhong Lan. 2023. Smile: Single- turn to multi-turn inclusive language expansion via chatgpt for mental health support. arXiv preprint arXiv:2305.00450 . Rafael Rafailov, Archit Sharma, Eric Mitchell, Christo- pher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your lan- guage model is secretly a reward model. Advances in Neural Information Processing Systems , 36:53728– 53741. Nathaniel J Raskin and Carl R Rogers. 2005. Person- centered therapy. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proxi- mal policy optimization algorithms. arXiv preprint | https://arxiv.org/abs/2505.16995v1 |
arXiv:1707.06347 . Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 . Mark A Stebnicki. 2007. Empathy fatigue: Healing the mind, body, and spirit of professional coun- selors. American journal of psychiatric rehabilita- tion, 10(4):317–338. Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021. Psyqa: A chinese dataset for generating long counseling text for mental health support. arXiv preprint arXiv:2106.01702 . Qwen Team. 2024. Qwen2.5: A party of foundation models. Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. Misc: A mixed strategy- aware model integrating comet for emotional support conversation. In Proceedings of the 60th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers) , pages 308–319. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits rea- soning in large language models. Advances in neural information processing systems , 35:24824–24837. Jing Ye, Lu Xiang, Yaping Zhang, and Chengqing Zong. 2025. Sweetiechat: A strategy-enhanced role- playing framework for diverse scenarios handling emotional support agent. In Proceedings of the 31st International Conference on Computational Linguis- tics, pages 4646–4669. Chenhao Zhang, Renhao Li, Minghuan Tan, Min Yang, Jingwei Zhu, Di Yang, Jiahao Zhao, Guancheng Ye, Chengming Li, and Xiping Hu. 2024a. Cpsycoun: A report-based multi-turn dialogue reconstruction and evaluation framework for chinese psychologi- cal counseling. In Findings of the Association for Computational Linguistics ACL 2024 , pages 13947– 13966. Kechi Zhang, Ge Li, Yihong Dong, Jingjing Xu, Jun Zhang, Jing Su, Yongfei Liu, and Zhi Jin. 2024b. 
Codedpo: Aligning code models with self gen- erated and verified source code. arXiv preprint arXiv:2410.05605 . Xueqiao Zhang, Chao Zhang, Jianwen Sun, Jun Xiao, Yi Yang, and Yawei Luo. 2025. Eduplanner: Llm- based multi-agent systems for customized and intel- ligent instructional design. IEEE Transactions on Learning Technologies . Haiquan Zhao, Lingyu Li, Shisong Chen, Shuqi Kong, Jiaan Wang, Kexin Huang, Tianle Gu, Yixu Wang, Jian Wang, Liang Dandan, et al. 2024. Esc-eval: Evaluating emotion support conversations in large language models. In Proceedings of the 2024 Con- ference on Empirical Methods in Natural Language Processing , pages 15785–15810. Weixiang Zhao, Xingyu Sui, Xinyang Han, Yang Deng, Yulin Hu, Jiahe Guo, Libo Qin, Qianyun Du, Shi- jin Wang, Yanyan Zhao, et al. 2025. Chain ofstrategy optimization makes large language mod- els better emotional supporter. arXiv preprint arXiv:2503.05362 . Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 3: System Demonstra- tions) , Bangkok, Thailand. Association for Computa- tional Linguistics. A Definitions A.1 Definitions of Gross’s Extended Process Model of Emotion Regulation TheExtended Process Model of Emotion Regula- tion, proposed by Gross in 2015 (Gross, 2015), refines earlier models by conceptualizing emotion regulation as a temporally | https://arxiv.org/abs/2505.16995v1 |
ordered process comprising three core stages: Identification, Selection, and Implementation.

1. Identification: Individuals assess whether an emotional response needs to be regulated based on situational goals and personal relevance.
2. Selection: A regulation strategy is chosen from the available options, guided by the expected outcome of regulating the emotion.
3. Implementation: The selected strategy is carried out and monitored.

A.2 Definitions of Psychological Errors

Under the guidance of a licensed psychological counselor and based on the psychological literature (Raskin and Rogers, 2005; Stebnicki, 2007), we identified common errors frequently made by counselors in real-world therapy sessions. These were categorized into five types of empathy-related psychological errors:

• Strategy Mismatch: Selecting a strategy inappropriate for the user’s emotional state or context.
• Lack of Empathy: Failing to recognize or validate the user’s emotional experience.
• Early Emotion Shift: Prematurely changing the emotional tone before acknowledging the user’s current state.
• Template Response: Relying on generic or scripted expressions lacking personalization.
• Emotion Misread: Misinterpreting the user’s emotional cues, leading to unaligned responses.

Figures 11, 12, 13, and 14 illustrate representative examples of the first four error types, drawn from rejected responses in the IPM-PrefDial dataset.

A.3 Definitions of Counseling Stages

Liu et al. (2021) developed a three-stage counseling framework based on Hill’s Helping Skills Theory (Hill, 1999).

1. Exploration: Explore to identify the help-seeker’s problem.
2. Comforting: Comfort the help-seeker by expressing empathy and understanding.
3. Action: Assist the help-seeker in solving their problems.

Although most cases in our dataset follow the counseling sequence of (1) Exploration → (2) Comforting → (3) Action, some cases are adjusted based on the help-seeker’s specific situation.
A.4 Definitions of Strategies

The strategies and their definitions in this study align with Liu et al. (2021) and follow Hill’s Helping Skills Theory (Hill, 1999).

• Question (Qu): Asking for information related to the problem to help the help-seeker articulate the issues that they face. Open-ended questions are best, and closed questions can be used to get specific information.
• Restatement or Paraphrasing (RP): A simple, more concise rephrasing of the help-seeker’s statements that could help them see their situation more clearly.
• Reflection of Feelings (RF): Articulate and describe the help-seeker’s feelings.
• Self-disclosure (Sd): Divulge similar experiences that you have had or emotions that you share with the help-seeker to express your empathy.
• Affirmation and Reassurance (AR): Affirm the help-seeker’s strengths, motivation, and capabilities and provide reassurance and encouragement.
• Providing Suggestions (PS): Provide suggestions about how to change, but be careful not to overstep and tell them what to do.
• Information (In): Provide useful information to the help-seeker, for example with data, facts, opinions, resources, or by answering questions.
• Others (Ot): Exchange pleasantries and use other support strategies that do not fall into the above categories.

B Analysis of Coupled Model Training Results

To further analyze the effects of varying preference data on coupled models, we evaluated coupled models trained with different preference data across multiple metrics, as shown in Table 5.

| Backbone | PsPr | NsNr | PsNr | NsPr | D-1 | B-1 | F1 | R-L | Flu. | Pro. | Emp. | Hel. | B ↓ | Q_W ↑ | Q ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✗ | ✗ | 89.38 | 15.93 | 20.94 | 17.75 | 3.31 | 2.64 | 2.34 | 2.15 | 0.26 | 25.60 | 21.69 |
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✓ | ✗ | 89.45 | 15.73 | 20.80 | 17.50 | 3.50 | 2.73 | 2.53 | 2.25 | 0.30 | 22.81 | 18.76 |
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✗ | ✓ | 90.83 | 15.32 | 20.78 | 17.53 | 3.19 | 2.54 | 2.17 | 2.13 | 0.29 | 22.91 | 18.63 |
| Qwen2.5-7B-Instruct | ✓ | ✓ | ✓ | ✓ | 88.13 | 16.23 | 21.24 | 18.03 | 3.47 | 2.67 | 2.36 | 2.23 | 0.30 | 22.25 | 18.97 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✗ | ✗ | 90.72 | 16.08 | 21.41 | 18.07 | 3.45 | 2.69 | 2.38 | 2.22 | 0.22 | 26.69 | 21.76 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✓ | ✗ | 91.19 | 15.92 | 21.30 | 17.92 | 3.48 | 2.68 | 2.45 | 2.26 | 0.29 | 23.82 | 19.71 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✗ | ✓ | 91.19 | 15.92 | 21.45 | 18.01 | 3.49 | 2.65 | 2.22 | 2.15 | 0.22 | 25.20 | 21.07 |
| Llama3.1-8B-Instruct | ✓ | ✓ | ✓ | ✓ | 91.25 | 15.15 | 20.49 | 17.25 | 3.41 | 2.79 | 2.41 | 2.28 | 0.28 | 24.00 | 19.89 |

Table 5: Comparison of coupled models trained with different preference data (Chosen: PsPr, NsNr; Rejected: PsNr, NsPr). ✓ denotes that the training set contains this type of data, while ✗ denotes its absence. The best score is in bold, while the second best score is underlined.

The results indicate that the model trained with the suboptimal-content dataset (rows 2 and 6) significantly outperforms the model trained with the suboptimal-strategy dataset (rows 3 and 7) in terms of LLM-based metrics, while the reverse holds for strategy metrics. Additionally, it is notable that both the Vanilla-DPO model (rows 4 and 8) and the model trained with (PsPr, NsNr) data (rows 1 and 5) fail to achieve optimal performance across the two metric types. This further demonstrates that the coupled model has two optimization objectives, and that it is not possible to achieve optimal performance on both objectives by fully utilizing the preference data. This indicates the effectiveness of the decoupled ESC framework.
Figure 8: Strategy distribution across dialogue stages in the ESConv Dataset.

C Datasets Details

C.1 ESConv and FailedESConv Datasets

Table 6 presents the number and proportion of support strategies in the ESConv dataset, while Figure 8 illustrates the distribution of these strategies across different dialogue stages. Figure 16 illustrates the prompt we use to classify the psychological errors in the FailedESConv dataset as well as in the response content of Qwen-SFT and Llama-SFT.

| Support Strategy | Number | Proportion |
|---|---|---|
| Question (Qu) | 3,060 | 20.73% |
| Restatement or Paraphrasing (RP) | 857 | 5.81% |
| Reflection of Feelings (RF) | 1,146 | 7.76% |
| Self-disclosure (Sd) | 1,387 | 9.40% |
| Affirmation & Reassurance (AR) | 2,288 | 15.50% |
| Providing Suggestions (PS) | 2,373 | 16.07% |
| Information (In) | 989 | 6.70% |
| Others (Ot) | 2,663 | 18.04% |
| Overall | 14,763 | 100.00% |

Table 6: Distribution of support strategies used in the ESConv Dataset.

C.2 IPM-PrefDial Dataset

Figure 9 compares the distribution of support strategies in the Chosen and Rejected samples within the preference datasets of Qwen and Llama. Figure 10 further presents the count and proportion of psychological errors found in the rejected responses of these datasets. In addition, Figures 11, 12, 13, and 14 illustrate examples from the IPM-PrefDial dataset, covering both strategy preference and response preference data. Each example includes
the dialogue context, as well as the chosen and rejected responses.

Figure 9: Strategy distribution in the IPM-PrefDial Dataset.

Figure 10: Psychological Errors Distribution in Rejected Responses: Qwen and Llama.

C.3 Prompts for Data Filter

Figure 15 presents the prompt we use to filter the preference data, which effectively identifies the pairs that meet the required quality standards.

| Backbone | Model | lr | beta |
|---|---|---|---|
| Qwen2.5-7B-Instruct | Vanilla-dpo | 7e-7 | 0.2 |
| | SP-dpo | 5e-8 | 0.5 |
| | RG-dpo | 7e-7 | 0.2 |
| Llama3.1-8B-Instruct | Vanilla-dpo | 5e-7 | 0.2 |
| | SP-dpo | 8e-8 | 0.5 |
| | RG-dpo | 3e-7 | 0.2 |

Table 7: Detailed training hyperparameters used in DPO.

D Implementation Details

D.1 Experiment Details

We employ Qwen2.5-7B-Instruct (Team, 2024) and Llama3.1-8B-Instruct (Dubey et al., 2024) as our base models. All training procedures are implemented using the Llama-Factory framework (Zheng et al., 2024) with LoRA fine-tuning (Hu et al., 2022), where the alpha and rank are set to 16 and the dropout rate is 0.05. For SFT training, we train the models for 3 epochs with a learning rate of 1e-5 and a batch size of 32. For DPO training, the batch size is 32 and the number of epochs is set to 1. We use vLLM (Kwon et al., 2023) to accelerate inference. All experiments are conducted on 4 NVIDIA RTX 4090 GPUs. More detailed hyperparameter settings for DPO are presented in Table 7.

D.2 Baselines

Direct-Refine. A straightforward self-optimization approach where the model directly revises its initial response to improve quality, without relying on external input or intermediate reasoning.

Self-Refine. Following Madaan et al. (Madaan et al., 2023), this method involves two stages: the model first generates self-feedback on its initial response, then refines the output based on that feedback, promoting internal reflection and correction.

Emotional CoT.
Extending Chain-of-Thought (CoT) prompting (Wei et al., 2022), this method first elicits the user's emotional state through intermediate reasoning, which then guides strategy planning and response generation.

E Details of Evaluation

E.1 Strategy Metrics

According to Kang et al. (2024), the strategy preference is calculated by the following formula:

p'_i = \frac{\sum_j w_{ij}\, p_j / (p_i + p_j)}{\sum_j w_{ji} / (p_i + p_j)},  (7)

where w_{ij} denotes the frequency count of the model predicting strategy i given that the ground-truth strategy is j. All of the strategy preferences p_i are initialized as 1 and updated through iteration. The strategy preference bias B is then computed from the strategy preferences p_i as follows:

B = \sqrt{\frac{\sum_{i=1}^{N} (p_i - \bar{p})^2}{N}},  (8)

where \bar{p} denotes the average strategy preference.

E.2 LLM Metrics Criteria

Table 8 summarizes the LLM evaluation metrics, including Fluency, Professionalism, Empathy, and Helpfulness, along with their descriptions, evaluation criteria, and scoring scales. All metrics are rated on a 5-point Likert scale (Joshi et al., 2015). Specifically, Fluency and Empathy are adapted from the ESC-Eval framework (Zhao et al., 2024), Professionalism is guided by the CPsyCoun framework (Zhang et al., 2024a), and Helpfulness is derived from the SoulChat evaluation setup (Chen et al., 2023).

E.3 Prompt for LLM Metrics

Figures 17, 18, 19, and 20 present the prompts used for LLM-based evaluation of Fluency, Professionalism, Empathy, and Helpfulness, respectively. Each prompt explicitly defines the role of the
LLM as a judge and outlines the corresponding evaluation criteria. To minimize potential bias, the prompts are carefully designed to avoid revealing model names or being influenced by text length.

E.4 Human Evaluation

To complement the LLM-based evaluation and enhance the credibility of our results, we conducted a human evaluation involving 20 master's students with a background in psychology, under the guidance of a licensed psychological counselor, on 20 samples generated by our Decoupled-DPO model based on the Llama backbone. As shown in Figure 6, Decoupled-DPO achieves superior performance compared to Vanilla-DPO across multiple evaluation metrics.

# Seeker's Situation
"experience_type": "Current Experience",
"emotion_type": "shame",
"problem_type": "Procrastination",
"situation": "I have no motivation to finish my work assignments",
"survey_score": { "seeker": { "initial_emotion_intensity": "3", "empathy": "5", "relevance": "5", "final_emotion_intensity": "1" } }

## Dialogue Context
Seeker: Hi.
Supporter: Hello, how are you?
Seeker: I'm ok. How are you?
Supporter: I am good. What is on your mind?
Seeker: I have had little motivation to get out of bed and go to work lately.
Supporter: Seems like it has been hard to get motivated.
Seeker: Yes, do you have any advice to help me?
Supporter: A lot of people experience this struggle. You are able to overcome this and you will find happiness.
Seeker: Thank you. I really would like to get my motivation back.
Supporter: You had it in the past, it is just a matter of bringing it back.
Seeker: Yes, with some hard work, I'm certain it can return. Do you do anything to keep yourself motivated?

## Psychological Error
Ground-Truth Strategy: Self-disclosure
Predicted Strategy: Question
Explanation: The Seeker asks what the Supporter does to stay motivated.
At this point, the Self-disclosure strategy should be adopted to share what the Supporter does to stay motivated in this situation, thereby providing some help to the Seeker, instead of continuing with the Question strategy.

Figure 11: Example of psychological errors under Strategy Mismatch in rejected response from IPM-PrefDial.

# Seeker's Situation
"experience_type": "Current Experience",
"emotion_type": "depression",
"problem_type": "ongoing depression",
"situation": "feel alone. have no one to talk to about things. feel depressed, sad, hopeless. on top of feeling depressed I am also stressed out about my finances",
"survey_score": { "seeker": { "initial_emotion_intensity": "5" } }

## Dialogue Context
Seeker: Hi.
Supporter: Hi there! how are you feeling today?
Seeker: I feel depressed.
Supporter: I'm sorry to hear that. is there a specific reason you are having these feelings?
Seeker: I feel alone. I feel like I do not have any one to talk to when it comes to my emotions.
Supporter: I understand. I have been feeling quite lonely too, I find myself feeling lonely at work and struggling to find someone I trust to talk to about these emotions. Do you have a friend or family member that you trust who might understand some of these feelings?
Seeker: I do have a few friends
that I think I can talk to about how I am feeling however the thought of them being too busy with their work is what prevents me from reaching out to them.
Supporter: I understand. Correct me if I'm wrong, it sounds like you're nervous to open up to these trusted people because you think they might be distracted by their work to provide the empathy you need and fully understand how you are feeling. Does that sound right?
Seeker: You know I think that you are right about this. it has always been difficult for me to open to others even with those that have been there for me when I needed someone. I feel like a mix of emotions nervous but at the same time wanting to reaching out for help.
Supporter: I understand these feelings very well. Sometimes, my fear overshadows my want for help. I find it easier to open up to others through writing rather than face to face. Have you considered putting your feelings in writing and opening up to trusted friends and family in this way?
Seeker: I have been writing for a while but due to my emotions I have stopped writing like I used to. when I did write I did notice a change in my feelings it became more positive and less depressed. now that I stopped writing its like I went back into depression.

## Psychological Error
Ground-Truth Response:
Supporter: It sounds like writing your feelings down helped you greatly in the past. May I suggest going back to writing and even letting someone you trust read what you have written down to understand how you are feeling?
Rejected Response:
Supporter: I think I understand. I find that putting my emotions into words is an extremely helpful tool. It gives me a chance to release some of the emotions that I may be bottling up inside and it helps me understand my emotions better.
Explanation: The rejected reply shares the model's own experience without directly addressing the user's current struggle or offering a personalized suggestion, lacking empathetic connection to the user's specific situation.

Figure 12: Example of psychological errors under Lack of Empathy in rejected response from IPM-PrefDial.

# Seeker's Situation
"experience_type": "Current Experience",
"emotion_type": "depression",
"problem_type": "job crisis",
"situation": "I was laid off from my job",
"survey_score": { "seeker": { "initial_emotion_intensity": "5", "empathy": "4", "relevance": "4", "final_emotion_intensity": "3" } }

## Dialogue Context
Seeker: I am feeling depressed.
Supporter: Can you tell me more about your feelings?
Seeker: I feel lost and lonely, i do not know what to do.
Supporter: When did these feelings come about?
Seeker: They came about when i lost my job in August of this year.
Supporter: I can understand where your coming from with that, especially since this pandemic occurred.
Seeker: Yes, finances have especially been very tight.
Supporter: It seems like this could be overwhelming to you and trying
to figure out the right path to go forward.
Seeker: It is, have you ever encountered such a situation in your life?
Supporter: I can relate to you in this situation 100% and if it wasn't for me stepping out on faith and opening my own practice, I wouldn't be here now.
Seeker: How did you snap out of the situation, did you have some savings?

## Psychological Error
Ground-Truth Response:
Supporter: Always putting money aside when able, and doing lots of research about first time businesses gave a lot of insights. Have you tried to look into things like that yourself?
Rejected Response:
Supporter: There are many ways to get back on track, and I believe your in a good place to do so, but you need to put your mind to it and make a plan.
Explanation: The rejected reply jumps directly to advising the seeker to 'put your mind to it and make a plan' without first validating or resonating with the seeker's emotions or experience.

Figure 13: Example of psychological errors under Early Emotion Shift in rejected response from IPM-PrefDial.

# Seeker's Situation
"experience_type": "Current Experience",
"emotion_type": "depression",
"problem_type": "breakup with partner",
"situation": "I have just broken up with my girlfriend",
"survey_score": { "seeker": { "initial_emotion_intensity": "3", "empathy": "5", "relevance": "5", "final_emotion_intensity": "2" } }

## Dialogue Context
Seeker: Hi.
Supporter: Hi, how are you?
Seeker: I'm alright, thank you, but a bit depressed after just breaking up with my girlfriend.
Supporter: I'm sorry to hear that, it must have been hard for you.
Seeker: Yes, it has been quite difficult \nI find it helpful to talk to someone though.
Supporter: I feel you, can you tell me when it happened and why?
Seeker: She was annoyed that I was spending too much time doing menial tasks online for Amazon \nAnd that I wasn't giving her enough attention \nSo she finally broke it off.
Supporter: She seems to be a bit immature, amazon tasks even if not your main income, is still income and you're just working to make your life better.
Seeker: Exactly, and I was completing online chatbot tasks to get her a gift when she broke it off \nIt really came as a surprise to me.
Supporter: that sucks, it must be shocking for you. Did you try to explain to her?
Seeker: I did try to explain, but she didn't give me much of an opportunity to \nShe said that I was just wasting my time because you can only redeem the money in Amazon.com gift cards \nAnd since I live in the UK, i'd have to pay £12 shipping just to get anything delivered.

## Psychological Error
Ground-Truth Response:
Supporter: ah it is your thought that counts, I'm sorry she didn't appreciate your effort.
Rejected Response:
Supporter: I understand that, I'm sorry to hear that. She should be
more understanding.
Explanation: The rejected reply uses a generic expression of sympathy and a vague judgment about the ex-partner without deeper emotional resonance or personalization.

Figure 14: Example of psychological errors under Template Response in rejected response from IPM-PrefDial.

# Role
You are a dialogue evaluation expert specializing in mental health support. Your task is to determine whether a preference data sample is suitable for training an empathetic emotional support dialogue model.

## Retention Criteria (**Both of the following must be satisfied**):
1. The **chosen** reply is high-quality, showing emotional support features, and **must NOT contain** any of the following issues:
- Ignoring or avoiding the user's emotional expression
- Skipping the emotional resonance phase and jumping straight to advice or problem-solving
- Using vague, generic, or templated language lacking specificity or personalization
2. The **rejected** reply is low-quality and clearly exhibits **at least one** of the following error types:

## Common Psychological Errors
1. **Lack of Empathy**: The model does not respond to the user's emotions and instead changes the topic or appears indifferent.
- Example: The user says "I can't take it anymore," and the model replies "What did you do today?"
2. **Early Emotion Shift**: The model gives advice or suggestions too early, without first acknowledging and validating the user's emotional state.
- Example: The user expresses distress, and the model replies with "Try going for a walk."
3. **Template Response**: The model uses generic, copy-paste phrases with no context-specific details.
- Example: "I understand how you feel" or "You must be feeling bad," with no further elaboration or reflection on the user's unique situation.
Each sample includes:
- A multi-turn background conversation between a help-seeker and a supporter, providing psychological counseling context: {Dialogue_Context}
- A new input message from the help-seeker that requires a response: {User_Input}
- Two response options from the model: one is the "chosen" (preferred) reply, and the other is the "rejected" (less preferred) reply: {Chosen_Reply} and {Rejected_Reply}

Your goal is to determine whether this sample should be **retained** for training a model with **empathy and emotional companionship capabilities**.

## Evaluation Output Format
Please decide whether this sample should be retained, and indicate the error type (if any) for both the chosen and rejected replies, along with a one-sentence explanation for each. Use the following standard JSON format:
{
"Should the sample be retained": "Yes / No",
"Error Type in chosen reply": "None / Lack of Empathy / Early Emotion Shift / Template Response",
"Explanation for chosen reply error": "One-sentence explanation for this judgment",
"Error Type in rejected reply": "None / Lack of Empathy / Early Emotion Shift / Template Response",
"Explanation for rejected reply error": "One-sentence explanation for this judgment"
}

Figure 15: Prompt for Filtering High-Quality Preference Data in IPM-PrefDial.

# Role
You are an expert quality inspector for empathetic dialogue systems. Your task is to analyze the
following dialogue turn and determine whether the model-generated response contains any empathy-related errors. If so, identify the type of error and provide a brief explanation and suggestion for improvement. Below are **five common types of psychological errors** along with examples for your reference:
1. **Strategy Mismatch**: The chosen strategy is inappropriate for the user's emotional state
- Example: The user expresses sadness, but the model immediately gives advice without acknowledging the emotion.
2. **Template Response**: The response is generic, repetitive, or lacks personalization
- Example: The model repeatedly says "You must be feeling bad" or "I understand you," with no specific content.
3. **Lack of Empathy**: The model fails to respond to the user's emotions and avoids emotional engagement
- Example: The user says "I can't take it anymore," and the model replies "What did you do today?"
4. **Emotion Misread**: The model misinterprets or misrepresents the user's emotional state
- Example: The user expresses anger, and the model responds "Don't be sad."
5. **Early Emotion Shift**: The model rushes to advice or problem-solving without staying with the user's emotional expression
- Example: The user is expressing pain, and the model immediately suggests "Try meditation or go for a walk."
6. **Other Error**: If none of the above apply, categorize the error as "Other" and explain why.

## Output Format
Please output your analysis in the following JSON format:
{
"Contains Empathy Error": "Yes/No",
"Error Type": "Strategy Mismatch / Template Response / Lack of Empathy / Emotion Misread / Early Emotion Shift / Other Error",
"Brief Explanation": "One sentence explaining why this error type was chosen",
"Improvement Suggestion": "If you were the model, how would you revise the response?"
}

## Input Content
Dialogue Context: {Dialogue_Context}
Seeker's Input: {User_Input}
Supporter's Strategy: {Strategy}
Supporter's Response: {Response}

Figure 16: Prompt for Classifying Psychological Errors in the FailedESConv dataset, Qwen-SFT, and Llama-SFT Outputs.

Fluency: evaluates whether language expression is natural, coherent, and comprehensible.
- 0: Incoherent or difficult to understand; contains grammar or logic issues.
- 1: Unclear expression; user may struggle to grasp the meaning.
- 2: Some parts are confusing, though the main point can be inferred.
- 3: Mostly clear and coherent with minor ambiguities.
- 4: Fluent and well-structured; logically organized and easy to follow.
- 5: Concise and impactful language; precise and elegant communication that conveys ideas efficiently.

Professionalism: evaluates whether the model demonstrates psychological knowledge, follows ethical principles, and avoids misleading or inappropriate advice.
- 0: Contains harmful, misleading, or clearly inappropriate content that may violate ethical or psychological guidelines.
- 1: Shows serious misunderstanding or misuse of psychological concepts, or provides inappropriate advice.
- 2: Minor factual inaccuracies or advice that lacks evidence, but does not pose direct harm.
- 3: No major errors; advice is acceptable and somewhat aligned with psychological principles.
- 4: Demonstrates solid understanding of psychological concepts and appropriate intervention techniques.
- 5: Highly professional, reflects strong psychological insight, maintains boundaries, and communicates in a grounded, ethical manner.

Empathy: evaluates whether the model genuinely understands the user's emotions, expresses care, and provides emotional support.
- 0: Contains statements that may harm the user emotionally or lead to a negative emotional trajectory.
- 1: Fails to provide emotional comfort or assist the user in analyzing their problems.
- 2: Either lacks emotional comfort or fails to support problem analysis.
- 3: No significant issues, but empathy and analysis remain surface-level.
- 4: Demonstrates a warm, human-like tone, like a friend, offering both emotional relief and analytical support.
- 5: Deep emotional insight with sincere and stable empathy, conveyed through attentive and flexible language.

Helpfulness: evaluates the effectiveness of an AI assistant's suggestions by considering both the number of recommendations provided per interaction and the relevance or usefulness of each suggestion in addressing the user's question.
- 0: Irrelevant, misleading, or potentially harmful suggestions.
- 1: Ineffective or generic advice that does not respond to the user's needs.
- 2: Weakly relevant suggestions with limited practical value.
- 3: Somewhat helpful; suggestions are relevant and usable.
- 4: Clear and practical advice that aligns well with the user's issue.
- 5: Highly insightful, tailored, and actionable suggestions that offer strong guidance and value.

Table 8: LLM Evaluation Metrics and Corresponding Score Criterion.

# System
You are an impartial judge, familiar with psychology knowledge and psychological counseling.

## Attention
You need to evaluate the **Fluency** of the AI assistant's responses based on its dialogue with the user. Your evaluation results must be based on the Evaluation Criteria.

## Evaluation Criteria
1. **Fluency [0-5 points]**: Fluency evaluates whether language expression is natural, coherent, and comprehensible.
- 0: Incoherent or difficult to understand; contains grammar or logic issues.
- 1: Unclear expression; user may struggle to grasp the meaning.
- 2: Some parts are confusing, though the main point can be inferred.
- 3: Mostly clear and coherent with minor ambiguities.
- 4: Fluent and well-structured; logically organized and easy to follow.
- 5: Concise and impactful language; precise and elegant communication that conveys ideas efficiently.

## Constraints
- Avoid evaluation bias due to preference for specific model names.
- Avoid evaluation bias due to response length.

## Input
### Context: {Context}
### Seeker's Input: {User_input}
### Model's Response
Ground Truth Response (reference): {GT_Response}
Model's Response: {Pred_Response}

Based on the rules, give your Fluency score (The number only) to the Model's Response.

## Output
Fluency score (The number only)

Figure 17: Prompt for Evaluating Fluency of Generated Responses via LLMs.

# System
You are an impartial judge, familiar with psychology knowledge and psychological counseling.

## Attention
You need to evaluate the **Professionalism** of the AI assistant's responses based on its dialogue with the user. Your evaluation results must be based on the Evaluation Criteria.

## Evaluation Criteria
1. **Professionalism [0-5 points]**: Professionalism evaluates whether the model demonstrates psychological knowledge, follows ethical principles, and avoids misleading or inappropriate advice.
- 0: Contains harmful, misleading, or clearly inappropriate content that may violate ethical or psychological guidelines.
- 1: Shows serious misunderstanding or misuse
of psychological concepts, or provides inappropriate advice.
- 2: Minor factual inaccuracies or advice that lacks evidence, but does not pose direct harm.
- 3: No major errors; advice is acceptable and somewhat aligned with psychological principles.
- 4: Demonstrates solid understanding of psychological concepts and appropriate intervention techniques.
- 5: Highly professional, reflects strong psychological insight, maintains boundaries, and communicates in a grounded, ethical manner.

## Constraints
- Avoid evaluation bias due to preference for specific model names.
- Avoid evaluation bias due to response length.
- Do not confuse professionalism with warmth or empathy; focus on psychological accuracy and appropriateness.

## Input
### Context: {Context}
### Seeker's Input: {User_input}
### Model's Response
Ground Truth Response (reference): {GT_Response}
Model's Response: {Pred_Response}

Based on the rules, give your Professionalism score (The number only) to the Model's Response.

## Output
Professionalism score (The number only)

Figure 18: Prompt for Evaluating Professionalism of Generated Responses via LLMs.

# System
You are an impartial judge, familiar with psychology knowledge and psychological counseling.

## Attention
You need to evaluate the **Empathy** of the AI assistant's responses based on its dialogue with the user. Your evaluation results must be based on the Evaluation Criteria.

## Evaluation Criteria
1. **Empathy [0-5 points]**: Empathy evaluates whether the model genuinely understands the user's emotions, expresses care, and provides emotional support.
- 0: Contains statements that may harm the user emotionally or lead to a negative emotional trajectory.
- 1: Fails to provide emotional comfort or assist the user in analyzing their problems.
- 2: Either lacks emotional comfort or fails to support problem analysis.
- 3: No significant issues, but empathy and analysis remain surface-level.
- 4: Demonstrates a warm, human-like tone, like a friend, offering both emotional relief and analytical support.
- 5: Deep emotional insight with sincere and stable empathy, conveyed through attentive and flexible language.

## Constraints
- Avoid evaluation bias due to preference for specific model names.
- Avoid evaluation bias due to response length.

## Input
### Context: {Context}
### Seeker's Input: {User_input}
### Model's Response
Ground Truth Response (reference): {GT_Response}
Model's Response: {Pred_Response}

Based on the rules, give your Empathy score (The number only) to the Model's Response.

## Output
Empathy score (The number only)

Figure 19: Prompt for Evaluating Empathy of Generated Responses via LLMs.

# System
You are an impartial judge, familiar with psychology knowledge and psychological counseling.

## Attention
You need to evaluate the **Helpfulness** of the AI assistant's responses based on its dialogue with the user. Your evaluation results must be based on the Evaluation Criteria.

## Evaluation Criteria
1. **Helpfulness [0-5 points]**: Helpfulness evaluates the effectiveness of an AI assistant's suggestions by considering both the number of recommendations provided per interaction and the relevance or usefulness of each suggestion in addressing the user's question.
- 0: Irrelevant, misleading, or potentially harmful suggestions.
- 1: Ineffective or generic advice that does not respond to the user's needs.
- 2: Weakly relevant suggestions with limited practical value.
- 3: Somewhat helpful; suggestions are relevant and usable.
- 4: Clear and practical advice that aligns well with the user's issue.
- 5: Highly insightful, tailored, and actionable suggestions that offer strong guidance and value.

## Constraints
- Avoid evaluation bias due to preference for specific model names.
- Avoid evaluation bias due to response length.

## Input
### Context: {Context}
### Seeker's Input: {User_input}
### Model's Response
Ground Truth Response (reference): {GT_Response}
Model's Response: {Pred_Response}

Based on the rules, give your Helpfulness score (The number only) to the Model's Response.

## Output
Helpfulness score (The number only)

Figure 20: Prompt for Evaluating Helpfulness of Generated Responses via LLMs.
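In practice, each of the four judge prompts above is instantiated per sample and the judge's numeric reply is parsed into a 0-5 score. A minimal Python sketch, assuming a generic text-in/text-out judge model; the abridged template and the digit-parsing fallback are our simplifications, not the paper's exact pipeline:

```python
import re

# Abridged Fluency judge prompt (cf. Figure 17); the full version also lists
# the 0-5 criteria and the anti-bias constraints.
FLUENCY_TEMPLATE = (
    "# System\n"
    "You are an impartial judge, familiar with psychology knowledge and "
    "psychological counseling.\n"
    "## Input\n"
    "### Context: {Context}\n"
    "### Seeker's Input: {User_input}\n"
    "### Model's Response\n"
    "Ground Truth Response (reference): {GT_Response}\n"
    "Model's Response: {Pred_Response}\n"
    "## Output\n"
    "Fluency score (The number only)"
)

def build_judge_prompt(context, user_input, gt_response, pred_response):
    """Fill the template placeholders for one evaluation sample."""
    return FLUENCY_TEMPLATE.format(
        Context=context,
        User_input=user_input,
        GT_Response=gt_response,
        Pred_Response=pred_response,
    )

def parse_likert_score(judge_reply):
    """The prompts request the number only; fall back to the first 0-5
    digit if the judge adds extra words, and None if no score is found."""
    match = re.search(r"[0-5]", judge_reply)
    return int(match.group()) if match else None
```

The same two helpers apply unchanged to the Professionalism, Empathy, and Helpfulness prompts, since all four share the placeholder names and the "number only" output convention.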
X-MAS: Towards Building Multi-Agent Systems with Heterogeneous LLMs

Rui Ye1,* Xiangrui Liu1,* Qimin Wu1 Xianghe Pang1 Zhenfei Yin2,3 Lei Bai4 Siheng Chen1,†
1Shanghai Jiao Tong University 2University of Oxford 3The University of Sydney 4Shanghai AI Laboratory
*Equal Contributions †Corresponding Author
X-MAS: https://github.com/MASWorks/X-MAS

Abstract

LLM-based multi-agent systems (MAS) extend the capabilities of single LLMs by enabling cooperation among multiple specialized agents. However, most existing MAS frameworks rely on a single LLM to drive all agents, constraining the system's intelligence to the limit of that model. This paper explores the paradigm of heterogeneous LLM-driven MAS (X-MAS), where agents are powered by diverse LLMs, elevating the system's potential to the collective intelligence of diverse LLMs. We introduce X-MAS-Bench, a comprehensive testbed designed to evaluate the performance of various LLMs across different domains and MAS-related functions. As an extensive empirical study, we assess 27 LLMs across 5 domains (encompassing 21 test sets) and 5 functions, conducting over 1.7 million evaluations to identify optimal model selections for each domain-function combination. Building on these findings, we demonstrate that transitioning from homogeneous to heterogeneous LLM-driven MAS can significantly enhance system performance without requiring structural redesign. Specifically, in a chatbot-only MAS scenario, the heterogeneous configuration yields up to an 8.4% performance improvement on the MATH dataset. In a mixed chatbot-reasoner scenario, the heterogeneous MAS could achieve a remarkable 47% performance boost on the AIME dataset. Our results underscore the transformative potential of heterogeneous LLMs in MAS, highlighting a promising avenue for advancing scalable, collaborative AI systems.

1 Introduction

Large language models (LLMs), such as GPT [1], Gemini [2], and Qwen [3], have been applied across various domains.
However, despite their remarkable capabilities, LLMs often struggle with multifaceted, complex, real-world problems due to inherent limitations such as hallucinations [4, 5]. In response to these limitations, LLM-based multi-agent systems (MAS) have emerged as a promising solution [6, 7, 8]. MAS involves the collaboration of multiple agents, each specialized in specific functions, to address problems more effectively than a single model could. This paradigm has been successfully applied across various scenarios, including software development [7, 9], mathematics [10, 11], and scientific discovery [12, 13]. For instance, ChatDev [7], MetaGPT [9], and EvoMAC [14] utilize multiple coding agents (e.g., coders and testers) to improve software programming, while AI co-scientist [8] employs a MAS to enhance biomedical and scientific research.

Despite notable progress, most existing MAS frameworks rely on a single LLM to drive all agents [9, 7, 11, 6, 14, 15, 16]. This manner inherently limits the system's intelligence to that of the underlying model. For example, if a single LLM produces fundamental errors in certain facts, these mistakes are unlikely to be corrected through the collaboration of agents powered by the same model.

Preprint. Under review. arXiv:2505.16997v1 [cs.AI] 22 May 2025

Figure 1: Overview of our X-MAS-Bench and X-MAS-Design. X-MAS-Bench assesses the capabilities of LLMs in MAS while X-MAS-Design focuses on transitioning a homogeneous MAS to a heterogeneous one, gaining from the observations in X-MAS-Bench. Experiments
on chatbot-only and mixed chatbot-reasoner scenarios evidently show the benefits of heterogeneous MAS. by the advantages of diversity in collective intelligence [ 17,18,19], this paper explores MAS with heterogeneous LLMs (X-MAS), pushing the system’s capabilities beyond its previous limit to harness the collective potential of LLMs trained on diverse corpora or by different teams [20, 21, 22]. To provide a comprehensive evaluation of LLMs in MAS, we introduce X-MAS-Bench , a testbed designed to assess the performance of various LLMs across different MAS-related functions and domains. Specifically, we consider 5 representative functions of agents in MAS, including question- answering [ 15,9], revise [ 14,23], aggregation [ 24,25], planning [ 26,10], and evaluation [ 16,7]; as well as 5 common domains, including mathematics, coding, science, medicine, and finance—spanning 21 test sets. Each function is assessed under controlled experimental conditions. For example, when assessing aggregation, each query is input into several pre-defined LLMs, whose outputs are concatenated to be aggregated by the examined LLM. The aggregated responses of various LLMs are then evaluated and compared. Finally, we assess 27 LLMs across these 5 functions and 5 domains, conducting over 1.7 million evaluations to identify the optimal model selections for each domain- function combination. Our findings include that (1) no single LLM excels across all scenarios, (2) a single LLM could have significant performance variation across functions and domains, (3) different LLMs may show large performance disparities within the same function and domain, (4) smaller LLMs can sometimes outperform larger ones, highlighting the potential advantages of employing heterogeneous LLMs in MAS. These results provide valuable insights for researchers and practitioners in selecting the most appropriate LLMs for their specific MAS applications. 
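Conceptually, the benchmark is a loop over every (LLM, function, domain) cell that keeps the per-cell best model. The sketch below is ours, not the X-MAS-Bench code: the `evaluate` callable stands in for running a function's controlled protocol on a domain's test sets, and the toy scoring is a deterministic placeholder.

```python
from itertools import product

FUNCTIONS = ["qa", "revise", "aggregation", "planning", "evaluation"]
DOMAINS = ["mathematics", "coding", "science", "medicine", "finance"]

def build_leaderboard(llms, evaluate):
    """Score every LLM on every function-domain cell and keep the best.

    `evaluate(llm, function, domain)` is a placeholder for running the
    controlled benchmark protocol and returning an accuracy in [0, 1].
    """
    scores = {
        (llm, f, d): evaluate(llm, f, d)
        for llm, f, d in product(llms, FUNCTIONS, DOMAINS)
    }
    best = {}  # (function, domain) -> (best llm, best accuracy)
    for (llm, f, d), acc in scores.items():
        if (f, d) not in best or acc > best[(f, d)][1]:
            best[(f, d)] = (llm, acc)
    return scores, best

# Toy usage with a deterministic stand-in for real evaluations.
toy_llms = ["model-a", "model-b"]
toy_eval = lambda m, f, d: (hash((m, f, d)) % 100) / 100
scores, best = build_leaderboard(toy_llms, toy_eval)
```

With 27 LLMs, 5 functions, and 5 domains the grid has 675 cells, and evaluating each cell on up to 500 samples per test set is what drives the total past a million evaluations.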
Building on these observations, we explore the effects of transitioning from homogeneous to heterogeneous LLM-driven MAS (X-MAS-Design). As a proof of concept, given the implementation of a MAS method, we simply assign agents appropriate LLMs (taking seconds) by referring to observations in X-MAS-Bench. To validate our idea, we examine three existing MAS frameworks—LLM-Debate [15], AgentVerse [16], and DyLAN [11]—as well as a prototype MAS designed by us, which incorporates all five functions in one system. Our analysis covers five domains with no sample overlap with X-MAS-Bench. In a chatbot-only scenario, we observe consistent performance improvements for heterogeneous MAS over homogeneous configurations, achieving up to an 8.4% performance gain on the MATH [27] benchmark. Interestingly, while reasoner-only MAS often underperforms relative to chatbot-only systems, combining chatbots and reasoners in a heterogeneous MAS leads to significant performance improvements. Specifically, on the competition-level AIME-2024 benchmark, AgentVerse [16] improves from 20% to 50%, and DyLAN [11] improves from 40% to 63%. Our further experiments reveal that increasing the number of candidate LLMs for heterogeneous MAS results in a monotonic performance improvement, reinforcing the value of LLM diversity in MAS. Based on our work, future research could explore more nuanced strategies for selecting and integrating LLMs in heterogeneous MAS, and investigate the scalability and adaptability of heterogeneous MAS across different industries and other complex tasks.
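The "assign agents appropriate LLMs" step reduces to a lookup: each agent slot, identified by its function and the task domain, is reassigned to the benchmark's top performer. A minimal sketch, where the `BENCH_TOP1` excerpt copies a few rank-1 entries from Table 1 and the helper name is ours:

```python
# Rank-1 chatbots for a few (function, domain) cells, copied from Table 1.
BENCH_TOP1 = {
    ("qa", "coding"): "Qwen2.5-32B-Instruct",
    ("qa", "medicine"): "Qwen2.5-72B-Instruct",
    ("revise", "coding"): "Qwen2.5-7B-Instruct",
    ("aggregation", "medicine"): "Llama3-OpenBioLLM-70B",
    ("evaluation", "mathematics"): "Qwen2.5-32B-Instruct",
}

def assign_drivers(agent_slots, candidate_pool, default):
    """Map each (function, domain) agent slot to the bench's top performer.

    Falls back to `default` when the cell is unknown or its winner is not
    in the available candidate pool.
    """
    assignment = {}
    for slot in agent_slots:
        winner = BENCH_TOP1.get(slot)
        assignment[slot] = winner if winner in candidate_pool else default
    return assignment
```

Because the interaction topology and prompts are untouched, this lookup is the entire "transition" from a homogeneous to a heterogeneous configuration.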
Our contributions are as follows:

1. X-MAS-Bench: We assess 27 LLMs across 5 MAS-related functions and 5 domains, conducting over 1.7 million evaluations to identify diverse optimal model selections for each domain-function combination. These observations could benefit researchers and practitioners in building MAS.
2. X-MAS-Design: Based on findings in X-MAS-Bench, we propose to transition existing MAS methods from homogeneous to heterogeneous LLM-driven MAS. We conduct extensive experiments, showing that heterogeneous MAS consistently outperforms homogeneous MAS.
3. Open Source: We release all data, code, and evaluation results to facilitate future MAS research.

2 Related Work

LLM-based MAS. LLM-based multi-agent systems (MAS) leverage multiple LLM-based agents that collaborate for better task solving than a single LLM [16, 9, 28, 6]. ChatDev [7], MetaGPT [9], and EvoMAC [14] use multiple coding agents (e.g., coders and testers) for software programming, while MACM [10] applies math agents to mathematics. Focusing on general tasks, debate-based methods [15, 25] enable multiple experts to debate for better solutions; AgentVerse [16] and DyLAN [11] dynamically adjust the agent team for task solving; and MAS-GPT [6] trains an LLM for generating MAS. However, all of these methods rely on a single LLM to drive all agents, which inherently limits the system's intelligence to that of the underlying LLM. This paper proposes to push the limit by harnessing the collective intelligence of heterogeneous LLMs from different sources.

Heterogeneous LLMs. In the general context of LLMs, several works relate to using heterogeneous LLMs [29, 30]. LLM-Blender [31] trains a model for ensembling outputs from multiple LLMs. MoA [32] and ReConcile [33] enable multiple LLMs to discuss; however, they involve all candidate LLMs without considering their appropriateness.
MASRouter [34] manually selects several candidate LLMs for MAS and is optimized for its specific framework. In contrast, our paper systematically assesses the capabilities of LLMs across several MAS-related functions and domains, aiming to universally benefit the design of heterogeneous MAS for various MAS methods.

Benchmarking LLMs. Many works benchmark the capabilities of LLMs in various domains (such as math [27], coding [35], science [36], medicine [37], and finance [38]) and functions (such as planning [39] and evaluation [40]). However, our paper is the first to benchmark LLMs for MAS, assessing their capabilities across 25 function-domain combinations related to MAS.

3 X-MAS-Bench: Evaluating LLMs for MAS

X-MAS-Bench is a testbed designed to assess the performance of various LLMs across different MAS-related functions and domains. Specifically, we consider 5 representative functions of agents in MAS—question-answering [15, 9], revise [14, 23], aggregation [24, 25], planning [26, 10], and evaluation [16, 7]. Orthogonally, we investigate behaviors in 5 domains, including mathematics, coding, science, medicine, and finance—spanning 21 test sets. Each function is assessed under controlled experimental conditions. In this section, we detail the experimental conditions in Section 3.1 and report the experimental results in Section 3.2.

3.1 Benchmarking MAS-Related Functions

To systematically assess LLM capabilities in a multi-agent context, we decompose MAS behaviors into five representative and commonly-used agent functions: question-answering, revise, aggregation,
planning, and evaluation. For each function, we define a standardized prompt protocol. In all cases, we carefully control experimental conditions such that the only varying factor is the LLM under evaluation. Below we detail the designed assessment of each function.

Question-answering. The question-answering (QA) function measures an LLM's ability to comprehend a question and produce a correct answer in free-text form. This function is fundamental to all MAS methods, such as LLM-Debate [15], where each agent first independently answers the query, and MetaGPT [9], in which an agent supplies initial draft responses in the first stage of processing. In the QA assessment, the examined LLM receives a sampled query from a test dataset (e.g., MATH [27]) as input and returns an answer in free-text format. The outputs are compared with ground-truth answers, and the resulting accuracy indicates QA ability.

Revise. The revise function assesses an agent's capacity to revise an initial, potentially flawed answer into a corrected one. This function commonly exists in workflows requiring iterative refinement, such as the update agent in EvoMAC [14] and the refiner in Self-Refine [23]. In the assessment of revise, given a prompt formatted with a sampled query and a corresponding answer generated by a pre-defined LLM, the examined LLM is asked to provide a final complete answer by reasoning over and revising the provided query and answer. All examined LLMs receive the same prompt (i.e., sourced from the same query and the same pre-defined LLM), ensuring fair comparisons among them. Similarly, the revised outputs are compared with ground-truth answers, and the resulting accuracy denotes the revise capability.

Aggregation. Aggregation refers to the capability of combining multiple candidate answers into a coherent, correct, and even improved final answer.
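As one concrete illustration of these controlled protocols, the revise assessment fixes the query and the pre-defined LLM's initial answer, so every examined model sees an identical prompt. A minimal sketch; the prompt wording is illustrative, not the paper's exact template:

```python
def build_revise_prompt(query: str, initial_answer: str) -> str:
    # The initial answer comes from the same pre-defined LLM for every
    # examined model, so all models see an identical prompt and only the
    # examined LLM varies.
    return (
        "You are given a question and a candidate answer that may be flawed.\n"
        f"Question: {query}\n"
        f"Candidate answer: {initial_answer}\n"
        "Reason about the candidate answer, revise it if necessary, and "
        "provide a final complete answer."
    )
```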
It is a key mechanism in MAS that leverages multiple answering paths, such as the convergent agents in MacNet [24] and the extractive mode of the judge in MAD [25]. To assess aggregation, for each query, we collect candidate responses from a fixed set of pre-defined LLMs (set to 3 in this case). The examined LLM is provided with the query and these candidate responses in a fixed concatenated format, and is asked to synthesize the final answer. Importantly, all candidate responses and their order remain identical across all models, ensuring consistency in the prompts and allowing for fair comparisons. The aggregated answers are then scored against the ground-truth answers using accuracy as the metric.

Planning. Planning involves decomposing a task into sub-tasks and assigning appropriate roles to agents along a workflow to solve the problem collaboratively. This function is critical in systems like MACM [10] and MapCoder [26], where a thinker or planning agent defines the entire agentic workflow. In the planning assessment, the examined LLM must provide a suitable plan for the sampled query by giving the role descriptions of the agents required to answer it, together with the workflow; the output should follow a
pre-defined format for subsequent string extraction. Subsequently, the ordered role descriptions and the number of roles are extracted. Next, the corresponding number of candidate LLMs are activated according to the role descriptions and workflow arrangement. Input prompts and candidate LLMs are kept fixed across all examined LLMs. The overall system performance, evaluated by final task accuracy, serves as a proxy for planning capability.

Evaluation. Evaluation measures an agent's ability to critically assess the quality or correctness of other agents' outputs. This function is used in MAS to filter faulty reasoning, select better solutions, determine early stopping, or guide further actions, as commonly seen in frameworks like AgentVerse [16] and ChatDev [7]. In our assessment, each examined LLM is presented with a query and an answer generated by a pre-defined LLM. The examined LLM is instructed to determine whether the provided answer correctly addresses the query. The candidate answer and evaluation instruction remain constant across LLMs to ensure a fair comparison. Unlike the previous functions, the judgment of the examined LLM is compared against ground-truth correctness.

3.2 Experiments in Evaluating LLMs across Functions and Domains

Following the above definitions of functions, this section assesses the capabilities of various LLMs across functions and domains, aiming to demonstrate the landscape of LLMs for MAS. The reported results are expected to demonstrate the potential of leveraging heterogeneous LLMs for MAS and to help future researchers choose appropriate LLMs for their MAS.

Experimental setups. We examine 27 LLMs, covering 20 chatbots (i.e., instructed LLMs) and 7 reasoners (i.e., reasoning LLMs).
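The evaluation protocol above is scored differently from the other four functions: the examined LLM's verdict is compared against the ground-truth correctness of the judged answer, not against a reference answer. A minimal scorer under that definition; the helper name and boolean encoding are ours:

```python
def evaluation_accuracy(verdicts, answer_is_correct):
    """Score an examined LLM's correct/incorrect verdicts.

    `verdicts` holds the model's judgment (True = judged correct) for each
    (query, answer) pair; `answer_is_correct` holds whether the judged
    answer truly matches the reference. A verdict counts as a hit when the
    two agree.
    """
    assert len(verdicts) == len(answer_is_correct)
    hits = sum(v == gt for v, gt in zip(verdicts, answer_is_correct))
    return hits / len(verdicts)
```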
Among the 20 chatbots, we consider general chatbots trained by different companies, such as Llama [21], Qwen [20], and Mistral [41, 42], as well as domain-specific chatbots for mathematics [22], coding [43], science [44, 45], medicine [46, 47], and finance [46, 47]. The reasoners include LLMs from DeepSeek [48], Qwen [49], and others [50]. We set each model's maximum token limit to its own capacity, capped at 8,192 tokens, with a temperature of 0.5 by default. In particular, all LLMs instantiated within the planning workflow run with temperature fixed at 0, as planning involves format-following. Our datasets cover mathematics [27, 51, 52, 53, 54, 55], coding [56, 57, 58], science [36, 59, 60, 61], medicine [62, 63, 64], and finance [65, 66, 67], where each dataset is randomly sampled up to 500 examples without replacement; see more details in Section C.

No single LLM excels across all scenarios. We plot the size-performance values of each evaluated chatbot LLM across 25 function-domain combinations in Figure 2 and report the summary of the top-3 LLMs for each combination in Table 1; see results of all LLMs in Figure 5 and Table 5.

Figure 2: Benchmarking chatbot LLMs on 5 MAS-related functions and 5 domains. We see that no single LLM excels across all scenarios, indicating the potential advantages of employing heterogeneous LLMs in MAS. All evaluation results will be open-sourced for future research.

From these results, we see that (1) no single LLM excels universally across all scenarios. A heterogeneous MAS can capitalize
on these differences by assigning scenario-specialized models (e.g., Llama3-OpenBioLLM for medicine) to specific agents, maximizing collective intelligence. (2) LLMs exhibit varied performance across MAS-related functions, reinforcing the value of heterogeneity.

A single LLM can have significant performance variation across domains and functions. Individual LLMs exhibit substantial performance disparities when evaluated across different domains and functions, underscoring the limitations of relying on a single model in a homogeneous MAS. For instance, in Figure 2, Qwen2.5-7B performs exceptionally well at revising in the coding domain, while dropping to a mid-tier level for revising in the medicine domain and planning in the coding domain.

There are large performance disparities across LLMs within the same domain and function. For the revise function or the coding domain, we observe diverse behaviors among the examined LLMs, as shown by the dispersed scatter points in Figure 2 (second row and second column).

Smaller LLMs can outperform larger ones in niche scenarios. While larger models like Qwen2.5-72B-Instruct and Llama-3.1-70B-Instruct often lead, smaller models occasionally excel in specific function-domain pairs. For example, in the revise-coding pair, Qwen2.5-7B-Instruct (79.2) outperforms Qwen2.5-72B-Instruct (77.3), while in the aggregation-finance and evaluation-finance pairs, Qwen2.5-14B achieves the best performance among all models. This indicates that heterogeneous MAS can incorporate smaller, specialized models to optimize performance and computational efficiency, reducing reliance on resource-intensive large models while maintaining or improving outcomes.

Low-performing models highlight the risk of homogeneous MAS. Some models consistently underperform across domains and functions (e.g., BioMedGPT-LM-7B and SciPhi-Mistral-7B-32k).

Table 1: Summary of top-3 LLMs for each function-domain combination (chatbot-only scenario).
All the assessed LLMs are instructed models (e.g., Qwen2.5-32B denotes Qwen2.5-32B-Instruct). We see that no single LLM excels across all scenarios. Meanwhile, the top models are not always those with the largest sizes, indicating the potential of improving both performance and cost.

Function | Rank | Mathematics | Coding | Science | Medicine | Finance
QA | 1st | Qwen2.5-32B (69.2) | Qwen2.5-32B (80.3) | Qwen2.5-72B (60.7) | Qwen2.5-72B (70.4) | Qwen2.5-72B (74.0)
QA | 2nd | Qwen2.5-72B (68.8) | Qwen2.5-72B (77.1) | Qwen2.5-32B (60.0) | Llama3-OpenBioLLM-70B (69.7) | Qwen2.5-32B (71.0)
QA | 3rd | Qwen2.5-Math-72B (68.2) | Qwen2.5-Coder-14B (72.3) | Qwen2.5-Math-72B (57.1) | Llama-3.1-70B (69.6) | Qwen2.5-Coder-32B (70.3)
Revise | 1st | Qwen2.5-Coder-32B (68.4) | Qwen2.5-7B (79.2) | Qwen2.5-72B (60.6) | Llama-3.1-70B (71.0) | Qwen2.5-72B (70.9)
Revise | 2nd | Qwen2.5-14B (68.4) | Qwen2.5-Coder-32B (77.7) | Qwen2.5-32B (60.2) | Qwen2.5-72B (69.3) | Llama-3.1-70B (70.1)
Revise | 3rd | Qwen2.5-32B (68.2) | Qwen2.5-72B (77.3) | Qwen2.5-Math-72B (60.2) | Qwen2.5-Math-72B (68.1) | Qwen2.5-32B (70.1)
Aggregation | 1st | Llama-3.1-70B (77.4) | Qwen2.5-72B (85.5) | Llama-3.1-70B (67.3) | Llama3-OpenBioLLM-70B (73.4) | Qwen2.5-14B (73.6)
Aggregation | 2nd | Qwen2.5-Coder-32B (77.1) | Mistral-Small-3.1-24B (80.2) | Qwen2.5-32B (66.7) | Qwen2.5-7B (72.7) | Mistral-Small-3.1-24B (73.2)
Aggregation | 3rd | Qwen2.5-14B (76.2) | Qwen2.5-Coder-32B (78.4) | Qwen2.5-Coder-32B (66.5) | Llama-3.1-70B (72.7) | Qwen2.5-7B (72.8)
Planning | 1st | Qwen2.5-14B (65.0) | Llama-3.1-70B (71.0) | Qwen2.5-Coder-7B (55.5) | Qwen2.5-Coder-14B (65.4) | Qwen2.5-72B (64.7)
Planning | 2nd | Mistral-Small-3.1-24B (65.0) | Qwen2.5-14B (70.5) | Qwen2.5-32B (55.3) | Qwen2.5-7B (65.3) | Qwen2.5-Coder-14B (63.6)
Planning | 3rd | Qwen2.5-32B (64.7) | Qwen2.5-32B (70.1) | Mistral-Small-3.1-24B (55.1) | Qwen2.5-32B (65.2) | Qwen2.5-14B (63.2)
Evaluation | 1st | Qwen2.5-32B (79.0) | Qwen2.5-14B (55.4) | Llama-3.1-70B (67.9) | Llama-3.1-70B (70.5) | Llama-3.1-70B (72.6)
Evaluation | 2nd | Qwen2.5-14B (78.1) | Qwen2.5-Coder-32B (54.7) | Mistral-Small-3.1-24B (66.1) | Qwen2.5-72B (69.4) | Qwen2.5-14B (72.6)
Evaluation | 3rd | Mistral-Small-3.1-24B (77.9) | Llama-3.1-70B (53.8) | Qwen2.5-32B (65.3) | Mistral-Small-3.1-24B (68.7) | Qwen2.5-Math-72B (72.3)

A homogeneous MAS relying on such models would be severely limited, whereas a heterogeneous setup can mitigate this by integrating appropriate and high-performing LLMs.

Consistent high performers enable robust heterogeneous configurations. Models like Qwen-2.5-32B-Instruct, Qwen-2.5-72B-Instruct, and Llama-3.1-70B-Instruct
frequently rank among the top across domains and functions (e.g., 80.3 in QA-coding and 79.0 in evaluation-math for Qwen-2.5-32B-Instruct). These models can serve as reliable anchors in a heterogeneous MAS, complemented by specialized models for niche tasks (e.g., Llama3-OpenBioLLM-70B in medicine), ensuring robust and scalable performance improvements.

4 X-MAS-Design: Leveraging Diversity for MAS

Based on the findings in X-MAS-Bench (Section 3.2), we explore the effects of transitioning from homogeneous to heterogeneous LLM-driven MAS (X-MAS-Design). We show how a homogeneous MAS is transformed into a heterogeneous MAS in Section 4.1. We provide experimental results in a chatbot-only scenario (Section 4.2) and a mixed chatbot-reasoner scenario (Section 4.3).

4.1 Transitioning from Homogeneous to Heterogeneous LLM-driven MAS

Transitioning existing MAS methods. As a proof of concept, we aim to show that a simple manual modification of the LLM configurations can enhance the performance of MAS without any structural improvement. For each target MAS method (e.g., AgentVerse [16], LLM-Debate [15]), we retain the original agent roles and interaction topology but substitute the single homogeneous LLM with several appropriate LLMs for the agents. Concretely, for each domain-function pair in the original design (e.g., the evaluator for coding in AgentVerse), we replace the uniform LLM driver with the top performer in the pool of available models, based on observations from X-MAS-Bench (Section 3.2). By preserving the method's interaction logic and prompt templates, we ensure that any performance gains stem solely from LLM heterogeneity rather than workflow modifications. Note that this modification is efficient: it takes a human researcher less than one minute and could even be automated by replacing the human with a small LLM (e.g., 7B [20]).

X-MAS-Proto.
In addition to adapting existing MAS methods to heterogeneous ones, we implement X-MAS-Proto, a prototype MAS that explicitly implements all five functions (QA, revise, aggregation, planning, evaluation) in a single pipeline, serving as a proper object for investigation. The system (see the MAS in Figure 1) first invokes a planning agent to generate several different high-level ideas for the question; next, multiple QA agents concurrently answer the query based on their corresponding ideas, while one of the answers is evaluated and revised to obtain a potentially better answer; finally, an aggregation agent synthesizes across answers to get the final solution. With X-MAS-Proto, we can straightforwardly assign appropriate LLMs to different functional agents, aiming to clearly demonstrate the benefits of LLM heterogeneity in MAS.

Table 2: Transitioning from homogeneous to heterogeneous LLM-driven MAS (X-MAS-Design). There are four considered MAS methods and four candidate models. X-MAS-Design consistently achieves top performances across 5 domains (3 are relatively out-of-domain for the candidate LLMs).

MAS Method | LLM | Math | Coding | Science | Medicine | Finance | Average
AgentVerse [16] | Qwen2.5-Math-7B | 2.40 | 3.21 | 0.40 | 6.00 | 5.33 | 3.47
AgentVerse [16] | Qwen2.5-Coder-32B | 75.20 | 72.69 | 32.00 | 47.60 | 64.00 | 58.30
AgentVerse [16] | Qwen2.5-32B | 83.20 | 76.31 | 34.00 | 50.40 | 74.67 | 63.72
AgentVerse [16] | Mistral-3.1-24B | 66.80 | 62.25 | 31.20 | 40.00 | 65.33 | 55.12
AgentVerse [16] | X-MAS-Design | 88.40 | 77.51 | 41.20 | 51.20 | 72.00 | 66.06
LLM-Debate [15] | Qwen2.5-Math-7B | 79.20 | 40.96 | 29.60 | 35.20 | 30.67 | 43.13
LLM-Debate [15] | Qwen2.5-Coder-32B | 82.40 | 78.71 | 34.40 | 46.80 | 68.00 | 62.06
LLM-Debate [15] | Qwen2.5-32B | 85.20 | 75.50 | 32.80 | 50.80 | 77.33 | 64.33
LLM-Debate [15] | Mistral-3.1-24B | 76.80 | 66.67 | 33.60 | 52.00 | 66.67 | 59.15
LLM-Debate [15] | X-MAS-Design | 88.40 | 79.92 | 39.20 | 51.60 | 77.33 | 67.29
DyLAN [11] | Qwen2.5-Math-7B | 0.00 | 13.25 | 15.20 | 13.20 | 5.33 | 9.40
DyLAN [11] | Qwen2.5-Coder-32B | 77.20 | 78.31 | 34.80 | 41.60 | 61.33 | 58.65
DyLAN [11] | Qwen2.5-32B | 81.60 | 74.70 | 38.00 | 46.00 | 73.33 | 62.73
DyLAN [11] | Mistral-3.1-24B | 75.20 | 61.85 | 32.80 | 41.60 | 72.00 | 56.69
DyLAN [11] | X-MAS-Design | 88.80 | 78.71 | 38.80 | 47.20 | 76.00 | 65.90
X-MAS-Proto | Qwen2.5-Math-7B | 10.40 | 12.85 | 2.00 | 10.80 | 5.33 | 8.28
X-MAS-Proto | Qwen2.5-Coder-32B | 82.00 | 76.71 | 33.60 | 46.80 | 58.67 | 59.56
X-MAS-Proto | Qwen2.5-32B | 82.00 | 69.88 | 31.20 | 45.60 | 72.00 | 60.14
X-MAS-Proto | Mistral-3.1-24B | 78.80 | 63.05 | 34.40 | 46.40 | 72.00 | 58.93
X-MAS-Proto | X-MAS-Design | 90.40 | 78.71 | 40.00 | 46.80 | 73.33 | 65.85

4.2 Experiments in Chatbot-only Scenarios

Experimental setups. We experiment on X-MAS-Proto and three existing MAS methods, including AgentVerse [16], LLM-Debate [15], and DyLAN [11]. Considering performance and efficiency, we select four candidate chatbot LLMs: Qwen-2.5-32B, Mistral-Small-3.1-24B, Qwen-2.5-Coder-32B, and Qwen-2.5-Math-7B. We test MAS on held-out test splits of MATH-500, MBPP, SciBench, PubMedQA, and FinanceBench, covering the 5 examined domains. See model selection in Section D.

Consistent performance gains of X-MAS-Design over homogeneous MAS. Table 2 reports the performance comparisons of the homogeneous and heterogeneous versions of the four MAS methods, where four LLMs are selected as candidates. The table demonstrates that X-MAS-Design, the heterogeneous MAS configuration, consistently outperforms all homogeneous configurations on average for all four methods. In DyLAN, X-MAS-Design achieves an average performance of 65.90, surpassing the best homogeneous model (Qwen2.5-32B, 62.73) by 3 points. There are only two outlier cases—LLM-Debate in medicine and AgentVerse in finance—likely because the candidate LLMs do not include specialized models for these particular domains. These results validate the X-MAS-Bench findings, which identified optimal model selections for domain-function combinations.
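The Average column in Table 2 is the arithmetic mean of the five per-domain scores; for example, DyLAN's X-MAS-Design row can be checked directly:

```python
# Per-domain scores of DyLAN's X-MAS-Design row in Table 2.
scores = [88.80, 78.71, 38.80, 47.20, 76.00]
average = round(sum(scores) / len(scores), 2)
# Matches the reported Average of 65.90.
```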
By leveraging diverse and appropriate LLMs, X-MAS-Design harnesses collective intelligence, leading to superior performance without requiring structural changes to existing MAS methods.

Method-agnostic benefits of heterogeneity. The performance improvements of X-MAS-Design are consistent across all four MAS methods, despite their differing architectures and philosophies. This method-agnostic nature of X-MAS-Design's improvements highlights its versatility, providing strong evidence for our core idea of advocating X-MAS.

X-MAS-Design could leverage the strengths of weak models to offset their weaknesses. Homogeneous configurations show significant variability in performance across domains, with certain models underperforming in specific areas. For example, Qwen2.5-Math-7B performs poorly in most domains (e.g., 2.40 in Math and 0.40 in Science for AgentVerse), indicating its limited generalizability. Even stronger models like Qwen2.5-32B and Mistral-3.1-24B show weaknesses, such as Mistral-3.1-24B's 31.2 in Science (AgentVerse) or Qwen2.5-32B's 31.2 in Science (X-MAS-Proto). In contrast, X-MAS-Design consistently achieves balanced performance. That is, X-MAS-Design mitigates the limitations of individual LLMs by combining their strengths, indicating the benefits of collective intelligence and that our X-MAS-Bench provides helpful guidance for the design of X-MAS.

Table 3: Effectiveness of X-MAS-Design in mixing chatbots and reasoners. While reasoner-based homogeneous MAS performs worse than chatbot-based homogeneous MAS, incorporating chatbots and reasoners into a heterogeneous MAS contributes to large performance improvements.

MAS Method | LLM | Math | Coding | Science | Medicine | Finance | Average
AgentVerse [16] | Chatbot | 20.00 | 75.50 | 37.60 | 47.20 | 72.00 | 50.46
AgentVerse [16] | Reasoner | 0.00 | 11.65 | 5.60 | 44.40 | 21.33 | 16.60
AgentVerse [16] | X-MAS-Design | 50.00 | 77.91 | 40.00 | 52.40 | 78.67 | 59.80
LLM-Debate [15] | Chatbot | 16.67 | 74.70 | 35.60 | 49.20 | 73.33 | 49.90
LLM-Debate [15] | Reasoner | 26.67 | 79.12 | 41.60 | 50.00 | 72.00 | 53.88
LLM-Debate [15] | X-MAS-Design | 56.67 | 81.12 | 44.40 | 54.40 | 80.00 | 63.32
DyLAN [11] | Chatbot | 20.00 | 74.70 | 34.00 | 44.00 | 70.76 | 48.67
DyLAN [11] | Reasoner | 40.00 | 76.31 | 42.40 | 45.60 | 68.00 | 54.46
DyLAN [11] | X-MAS-Design | 63.33 | 80.32 | 42.80 | 46.80 | 76.00 | 61.85
X-MAS-Proto | Chatbot | 23.33 | 72.69 | 34.80 | 44.80 | 68.00 | 48.72
X-MAS-Proto | Reasoner | 0.00 | 71.49 | 23.20 | 49.20 | 56.00 | 39.98
X-MAS-Proto | X-MAS-Design | 70.00 | 79.12 | 47.20 | 52.80 | 76.00 | 65.02

4.3 Experiments in Mixed Chatbot-Reasoner Scenarios

Experimental setups. The examined MAS methods follow those in Section 4.2. As chatbots and reasoners exhibit different behaviors, we consider two candidate LLMs: Qwen-2.5-72B-Instruct and DeepSeek-R1-Distill-Qwen-32B. These methods are tested on AIME-2024 and held-out splits of MBPP, SciBench, PubMedQA, and FinanceBench, covering the five examined domains. We also test the methods on entirely new (compared to X-MAS-Bench) test sets: AIME-2025 [68] (the latest AIME math competition) and MATH-MAS [69] (multi-step). See model selection in Section E.

Mixing chatbots and reasoners in X-MAS-Design achieves superior performance across domains and MAS methods. In Table 3, we explore the potential of mixing chatbot and reasoner LLMs in X-MAS-Design. From the table, we see that (1) X-MAS-Design, combining chatbot and reasoner agents powered by heterogeneous LLMs, consistently outperforms both standalone chatbot and reasoner configurations across all five domains. (2) Standalone chatbot and reasoner configurations show complementary strengths and weaknesses. The heterogeneous X-MAS-Design mitigates individual role limitations by combining chatbot and reasoner strengths, as guided by X-MAS-Bench's 1.7 million evaluations. This synergy enables robust performance across diverse domains.

Table 4: Examination on entirely new benchmarks. X-MAS-Design achieves the best performance by a significant margin.
Benchmark | AIME-25 | MATH-M
Chatbot | 13.33 | 14.18
Reasoner | 10.00 | 5.97
X-MAS-Design | 46.67 | 48.13

Mixing chatbots and reasoners leads to dramatic improvements in the math domain (AIME). We additionally evaluate homogeneous and heterogeneous MAS on two entirely new benchmarks, AIME-2025 and MATH-MAS, in Table 4. From Tables 3 and 4, we see that in the math domain (i.e., AIME-2024, AIME-2025, and MATH-MAS), X-MAS-Design contributes substantial performance boosts. Notably, for X-MAS-Proto, X-MAS-Design scores 70% on AIME-2024, a 46.67-percentage-point gain over the second-best homogeneous MAS, indicating the potential of X-MAS in reasoning-intensive tasks. Meanwhile, X-MAS-Design outperforms the second-best chatbot-based homogeneous MAS by 33% and 34% on the challenging AIME-2025 and MATH-MAS, respectively, indicating the generalization of our core idea. In an era where reasoning models prevail, our experiments point to a promising direction: further scaling compute with X-MAS that mixes chatbots and reasoners.

4.4 Ablation Study

Increasing the number of candidate models enhances the performance of X-MAS-Design. Following the setup in Section 4.2, we conduct experiments with X-MAS-Proto on three domains (math, coding, and science) by tuning the number of candidate models. We use the full split for larger sample numbers.

Figure 3: Diversity for the win. Experiments are conducted with X-MAS-Proto on three domains. Increasing the number of candidate models generally enhances the system performance, strongly indicating the benefits of LLM heterogeneity for MAS.

From Figure 3, we observe that (1) X-MAS-Design consistently outperforms homogeneous MAS (i.e., 1 candidate model), indicating the benefits of X-MAS. (2) As the number of candidate models increases, we generally observe an increase
of performance. One exception is in the science domain, which can be attributed to the fact that the model added when moving from 2 to 3 candidates is not closely related to science. This curve strongly indicates the benefits of including diverse LLMs in MAS.

Figure 4: Comparing X-MAS with LLM selection guided by X-MAS-Bench against arbitrary selection. X-MAS-Design, which is guided by X-MAS-Bench, performs significantly best.

Arbitrary model selection could lead to suboptimal performance: X-MAS-Bench offers critical observations to guide the design of X-MAS. To verify the effectiveness of the observations from X-MAS-Bench, we compare X-MAS with LLM selection guided by X-MAS-Bench against X-MAS with arbitrary selection. We follow the setup in Section 4.2, where we experiment on X-MAS-Proto on MATH-500. We arbitrarily determine 5 reasonable sets of configurations for designing X-MAS (see details in Section D.2), denoted by blue bars in Figure 4. Homogeneous MAS driven by three different LLMs are denoted by red bars. From the figure, we see that (1) X-MAS-Design, which is designed based on observations from X-MAS-Bench, performs significantly best. (2) Among the 5 X-MAS configurations without X-MAS-Bench's guidance, 3 achieve slightly better performance than homogeneous MAS, 1 performs slightly worse than the best homogeneous MAS, and 1 performs significantly worse (only 24.8%). This indicates that appropriate LLM selection is critical for ensuring the performance of X-MAS and that the results in X-MAS-Bench provide valuable insights.

5 Conclusions

This paper advocates building LLM-based MAS with heterogeneous LLMs. We introduce X-MAS-Bench, a comprehensive testbed designed to assess the capabilities of various LLMs in supporting MAS.
We provide a systematic empirical study, which assesses 27 LLMs (both chatbots and reasoners, both generalists and specialists) across 5 representative MAS-related functions and 5 common domains, highlighting the potential of employing heterogeneous LLMs in MAS. Based on the insights from X-MAS-Bench, we examine the effects of transitioning from homogeneous to heterogeneous LLM-driven MAS (X-MAS-Design). Our experiments on 4 MAS methods demonstrate that the performance of MAS can be significantly and consistently improved by leveraging heterogeneous MAS without any structural re-design, strongly supporting our advocacy. See limitations in Section A. Our work highlights an intriguing direction that leverages the collective intelligence of heterogeneous LLMs to achieve higher-level intelligence without additional training. Looking ahead, future research could explore areas such as automated or dynamic model selection, the impact of further scaling model candidates, optimizing the synergy between LLM selection and MAS, achieving strong performance with weaker agents, and training agents specifically suited for MAS.

References

[1] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[3] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[4] Yue Zhang, Yafu Li, Leyang Cui, Deng
Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren's song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023.
[5] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, 2023.
[6] Rui Ye, Shuo Tang, Rui Ge, Yaxin Du, Zhenfei Yin, Jing Shao, and Siheng Chen. MAS-GPT: Training LLMs to build LLM-based multi-agent systems. In Workshop on Reasoning and Planning for Large Language Models, 2025.
[7] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, et al. Chatdev: Communicative agents for software development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15174–15186, 2024.
[8] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025.
[9] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, et al. Metagpt: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2024.
[10] Bin Lei, Yi Zhang, Shan Zuo, Ali Payani, and Caiwen Ding. Macm: Utilizing a multi-agent system for condition mining in solving complex mathematical problems. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[11] Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang.
A dynamic llm-powered agent network for task-oriented agent collaboration. In First Conference on Language Modeling, 2024.
[12] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570–578, 2023.
[13] Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024.
[14] Yue Hu, Yuzhu Cai, Yaxin Du, Xinyu Zhu, Xiangrui Liu, Zijie Yu, Yuchen Hou, Shuo Tang, and Siheng Chen. Self-evolving multi-agent networks for software development. In The Thirteenth International Conference on Learning Representations, 2025.
[15] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. In Forty-first International Conference on Machine Learning, 2024.
[16] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations, 2024.
[17] Lu Hong and Scott E Page. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46):16385–16389, 2004.
[18] Maria Kozhevnikov, Carol Evans, and Stephen M Kosslyn. Cognitive style as environmentally sensitive individual differences in cognition:
A modern synthesis and applications in education, business, and management. Psychological Science in the Public Interest, 15(1):3–33, 2014.
[19] Ishani Aggarwal, Anita Williams Woolley, Christopher F Chabris, and Thomas W Malone. Cognitive diversity, collective intelligence, and learning in teams. Proceedings of Collective Intelligence, 1(3.1):3–3, 2015.
[20] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[21] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[22] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.
[23] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36, 2024.
[24] Chen Qian, Zihao Xie, YiFei Wang, Wei Liu, Kunlun Zhu, Hanchen Xia, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Scaling large language model-based multi-agent collaboration. In The Thirteenth International Conference on Learning Representations, 2025.
[25] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. Encouraging divergent thinking in large language models through multi-agent debate. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17889–17904, Miami, Florida, USA, November 2024. Association for Computational Linguistics.
[26] Md Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. Mapcoder: Multi-agent code generation for competitive problem solving. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4912–4944, 2024.
[27] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
[28] Shengran Hu, Cong Lu, and Jeff Clune. Automated design of agentic systems. In The Thirteenth International Conference on Learning Representations, 2025.
[29] Lingjiao Chen, Matei Zaharia, and James Zou. Less is more: Using multiple LLMs for applications with lower costs. In Workshop on Efficient Systems for Foundation Models @ ICML2023, 2023.
[30] Saranya Venkatraman, Nafis Irtiza Tripto, and Dongwon Lee. Collabstory: Multi-llm collaborative story generation and authorship analysis. arXiv preprint arXiv:2406.12665, 2024.
[31] Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. Llm-blender: Ensembling large language
models with pairwise ranking and generative fusion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14165–14178, 2023.
[32] Junlin Wang, Jue WANG, Ben Athiwaratkun, Ce Zhang, and James Zou. Mixture-of-agents enhances large language model capabilities. In The Thirteenth International Conference on Learning Representations, 2025.
[33] Justin Chen, Swarnadeep Saha, and Mohit Bansal. Reconcile: Round-table conference improves reasoning via consensus among diverse llms. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7066–7085, 2024.
[34] Yanwei Yue, Guibin Zhang, Boyang Liu, Guancheng Wan, Kun Wang, Dawei Cheng, and Yiyan Qi. Masrouter: Learning to route llms for multi-agent systems. arXiv preprint arXiv:2502.11133, 2025.
[35] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. Swe-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations, 2024.
[36] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023.
[37] OpenAI. Introducing healthbench. https://openai.com/index/healthbench/, 2025. Accessed: 2025-05-15.
[38] Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, and Jimin Huang. Pixiu: a large language model, instruction data and evaluation benchmark for finance. In Proceedings of the 37th International Conference on Neural Information Processing Systems, pages 33469–33484, 2023.
[39] Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati.
Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change. Advances in Neural Information Processing Systems, 36:38975–38987, 2023.
[40] Sijun Tan, Siyuan Zhuang, Kyle Montgomery, William Yuan Tang, Alejandro Cuadron, Chenguang Wang, Raluca Popa, and Ion Stoica. Judgebench: A benchmark for evaluating LLM-based judges. In The Thirteenth International Conference on Learning Representations, 2025.
[41] Mistral. Mistral-7b-instruct-v0.3. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3, 2024. Accessed: 2025-05-15.
[42] Mistral. Mistral-small-3.1-24b-instruct-2503. https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503, 2025. Accessed: 2025-05-15.
[43] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2.5-Coder technical report. arXiv preprint arXiv:2409.12186, 2024.
[44] Zihan Zhao, Da Ma, Lu Chen, Liangtai Sun, Zihao Li, Yi Xia, Bo Chen, Hongshen Xu, Zichen Zhu, Su Zhu, et al. Developing chemdfm as a large language foundation model for chemistry. Cell Reports Physical Science, 6(4), 2025.
[45] SciPhi. Sciphi-mistral-7b-32k. https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k, 2023. Accessed: 2025-05-15.
[46] Duxiaoman-DI. Llama3-xuanyuan3-70b-chat. https://huggingface.co/Duxiaoman-DI/Llama3-XuanYuan3-70B-Chat, 2024. Accessed: 2025-05-15.
[47] SYSU-MUCFC-FinTech-Research-Center. Zhilu-2-8b-instruct. https://huggingface.co/SYSU-MUCFC-FinTech-Research-Center/ZhiLu-2-8B-Instruct, 2024. Accessed: 2025-05-15.
[48] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[49] Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025.
[50] OpenThoughts Team. Open Thoughts.
https://open-thoughts.ai, January 2025.
[51] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain
algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, 2017.
[52] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023.
[53] Maxwell-Jia. Aime-2024. https://huggingface.co/datasets/Maxwell-Jia/AIME_2024, 2024. Accessed: 2025-05-15.
[54] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021.
[55] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. MMLU-pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[56] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[57] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[58] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatGPT really correct? rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
[59] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. In Forty-first International Conference on Machine Learning, 2024.
[60] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. Scieval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19053–19061, 2024.
[61] Kehua Feng, Keyan Ding, Weijie Wang, Xiang Zhuang, Zeyuan Wang, Ming Qin, Yu Zhao, Jianhua Yao, Qiang Zhang, and Huajun Chen. Sciknoweval: Evaluating multi-level scientific knowledge of large language models. arXiv preprint arXiv:2406.09098, 2024.
[62] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248–260. PMLR, 2022.
[63] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421, 2021.
[64] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, 2019.
[65] Pranab Islam, Anand Kannappan, Douwe Kiela,
Rebecca Qian, Nino Scherrer, and Bertie Vidgen. Financebench: A new benchmark for financial question answering. arXiv preprint arXiv:2311.11944, 2023.
[66] Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan R Routledge, et al. Finqa: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, 2021.
[67] P. Malo, A. Sinha, P. Korhonen, J. Wallenius, and P. Takala. Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65, 2014.
[68] OpenCompass. Aime2025. https://huggingface.co/datasets/opencompass/AIME2025, 2024. Accessed: 2025-05-15.
[69] Heng Zhou, Hejia Geng, Xiangyuan Xue, Zhenfei Yin, and Lei Bai. Reso: A reward-driven self-organizing llm-based multi-agent system for reasoning tasks. arXiv preprint arXiv:2503.02390, 2025.
[70] Mistral. Codestral-22b-v0.1. https://huggingface.co/mistralai/Codestral-22B-v0.1, 2024. Accessed: 2025-05-15.
[71] Ankit Pal and Malaikannan Sankarasubbu. Openbiollms: Advancing open-source large language models for healthcare and life sciences. https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B, 2024.
[72] Yizhen Luo, Jiahuan Zhang, Siqi Fan, Kai Yang, Yushuai Wu, Mu Qiao, and Zaiqing Nie. Biomedgpt: Open multimodal generative pre-trained transformer for biomedicine. arXiv preprint arXiv:2308.09442, 2023.

A Limitations

Despite being the most comprehensive evaluation of LLMs for MAS to date, there are still LLMs that have not been included. When transitioning from homogeneous to heterogeneous MAS, we currently rely on manual modification as a proof of concept. Although the modification is quite simple, it is worthwhile to explore automated solutions.
B Broader Impacts

This paper introduces X-MAS-Bench, which aims to assess the capabilities of LLMs when incorporated into MAS. The assessed results and the corresponding findings could serve the community, facilitating appropriate model selection during the design of MAS. Our X-MAS-Design aims to transition an existing homogeneous MAS to a heterogeneous one. A similar concept could be extended to many existing MAS, making the overall system perform better. The potential negative impacts of our approach mirror those associated with LLMs, including risks of misuse. However, these issues are intrinsic to LLM usage in general.

C Experimental Setups

We examine 27 LLMs, covering 20 chatbots (i.e., instructed LLMs) and 7 reasoners (i.e., reasoning LLMs). Among the 20 chatbots, we consider general chatbots trained by different companies: Llama (Llama-3.1-8/70B-Instruct [21]), Qwen (Qwen2.5-7/14/32/72B-Instruct [20]), and Mistral (Mistral-7B-Instruct-v0.3 [41], Mistral-Small-3.1-24B-Instruct-2503 [42]); we also include domain-specific chatbots covering mathematics (Qwen2.5-Math-7/72B-Instruct [22]), coding (Qwen2.5-Coder-7/14/32B-Instruct [43], Codestral-22B-v0.1 [70]), science (ChemDFM-v1.5-8B [44], SciPhi-Mistral-7B-32k [45]), medicine (Llama3-OpenBioLLM-70B [71] and BioMedGPT-LM-7B [72]), and finance (Llama3-XuanYuan3-70B-Chat [46] and ZhiLu-2-8B-Instruct [47]). The reasoners include LLMs from DeepSeek (DeepSeek-R1-Distill-Llama-8/70B and DeepSeek-R1-Distill-Qwen-7/14/32B [48]), Qwen (QwQ-32B [49]), and other teams (OpenThinker-32B [50]). We set each model's maximum token limit to its own capacity, capped at 8192 tokens, with a temperature of 0.5 by default. In particular, all LLMs instantiated within the planning workflow are executed with their temperature fixed at 0 to guarantee deterministic outputs, as the planning involves format-following. Our datasets cover domains including mathematics (AIME-2024 [53], AQUA-RAT
[51], GSM-Hard [52], MATH [27], MMLU-Math [54], MMLU-Pro-Math [55]), coding (HumanEval [56], HumanEval-Plus [58], MBPP [57], MBPP-Plus, MMLU-Coding, MMLU-Pro-Coding), science (GPQA-Main [36], GPQA-Diamond, SciBench [59], SciEval [60], SciKnowEval [61], MMLU-Sci, MMLU-Pro-Sci), medicine (MedMCQA [62], MedQA [63], PubMedQA [64], MMLU-Med, MMLU-Pro-Med), and finance (FinanceBench [65], FinQA [66], FPB [67], MMLU-Finan, MMLU-Pro-Finan), where each dataset is randomly sampled up to 500 examples without replacement (except for SciKnowEval, from which we draw 800 instances to ensure sufficient coverage of its specialized tasks).

D Experiments on X-MAS in Chatbot-Only Scenarios

D.1 Experimental Setups of X-MAS-Design in Chatbot-Only Scenarios

The available LLMs are Qwen-2.5-32B-Instruct, Qwen-2.5-Coder-32B-Instruct, Qwen-2.5-Math-7B-Instruct, and Mistral-Small-3.1-24B-Instruct-2503.

D.1.1 Agentverse

Mathematics. The role assigner is Qwen-2.5-32B-Instruct, the solvers are Qwen-2.5-Coder-32B-Instruct and Qwen-2.5-Math-7B-Instruct, the critic is Qwen-2.5-Coder-32B-Instruct, the evaluator is Qwen-2.5-32B-Instruct.

Coding. The role assigner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-32B-Instruct, the critic is Qwen-2.5-Coder-32B-Instruct, the evaluator is Qwen-2.5-Coder-32B-Instruct.

Science. The role assigner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-32B-Instruct, the critic is Qwen-2.5-32B-Instruct, the evaluator is Mistral-Small-3.1-24B-Instruct-2503.

Medicine. The role assigner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-32B-Instruct, the critic is Mistral-Small-3.1-24B-Instruct-2503, the evaluator is Mistral-Small-3.1-24B-Instruct-2503.

Finance. The role assigner is Mistral-Small-3.1-24B-Instruct-2503, the solver is Qwen-2.5-Coder-32B-Instruct, the critic is Qwen-2.5-32B-Instruct, the evaluator is Mistral-Small-3.1-24B-Instruct-2503.

D.1.2 LLM-Debate

Mathematics.
The debate agents are Qwen-2.5-Coder-32B-Instruct and Qwen-2.5-Math-7B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Coding. The debate agent is Qwen-2.5-Coder-32B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Science. The debate agent is Qwen-2.5-32B-Instruct, the aggregator is Qwen-2.5-32B-Instruct.

Medicine. The debate agent is Qwen-2.5-32B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Finance. The debate agent is Qwen-2.5-Coder-32B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

D.1.3 DyLAN

Mathematics. The node agents are Qwen-2.5-Coder-32B-Instruct and Qwen-2.5-Math-7B-Instruct, the ranker is Mistral-Small-3.1-24B-Instruct-2503.

Coding. The node agent is Qwen-2.5-Coder-32B-Instruct, the ranker is Mistral-Small-3.1-24B-Instruct-2503.

Science. The node agent is Qwen-2.5-32B-Instruct, the ranker is Qwen-2.5-32B-Instruct.

Medicine. The node agent is Mistral-Small-3.1-24B-Instruct-2503, the ranker is Mistral-Small-3.1-24B-Instruct-2503.

Finance. The node agent is Qwen-2.5-Coder-32B-Instruct, the ranker is Mistral-Small-3.1-24B-Instruct-2503.

D.1.4 X-MAS-Proto

Mathematics. The planner is Qwen-2.5-32B-Instruct, the solvers are Qwen-2.5-Coder-32B-Instruct and Qwen-2.5-Math-7B-Instruct, the reviser is Qwen-2.5-Coder-32B-Instruct, the evaluator is Qwen-2.5-32B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Coding. The planner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-Coder-32B-Instruct, the reviser is Qwen-2.5-Coder-32B-Instruct, the evaluator is Qwen-2.5-32B-Instruct, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Science. The planner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-32B-Instruct, the reviser is Qwen-2.5-32B-Instruct, the evaluator is Mistral-Small-3.1-24B-Instruct-2503, the aggregator is Qwen-2.5-32B-Instruct.

Medicine.
The planner is Qwen-2.5-32B-Instruct, the solver is Qwen-2.5-32B-Instruct, the reviser is Mistral-Small-3.1-24B-Instruct-2503, the evaluator is Mistral-Small-3.1-24B-Instruct-2503, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

Finance. The planner is Mistral-Small-3.1-24B-Instruct-2503, the solver is Qwen-2.5-Coder-32B-Instruct, the reviser is Qwen-2.5-32B-Instruct, the evaluator is Mistral-Small-3.1-24B-Instruct-2503, the aggregator is Mistral-Small-3.1-24B-Instruct-2503.

D.2 Experimental Setups of X-MAS with Non-X-MAS-Bench-Guided Model Selections

We arbitrarily determine five reasonable, manually designed model configurations to examine the robustness and performance sensitivity of X-MAS-Design under diverse agent choices. These configurations are constructed without referring to X-MAS-Bench and are denoted as X-MAS1 through X-MAS5. Each configuration includes distinct combinations of planner, solver, evaluator, reviser, and aggregator roles. For
comparison, we also include the original X-MAS-Design configuration guided by X-MAS-Bench selection. The X-MAS-Bench-guided configuration, referred to as X-MAS-Design in chatbot-only scenarios, adopts the following models for each agent role:

• Planner: Qwen-2.5-32B-Instruct
• Solver: Qwen-2.5-Coder-32B-Instruct
• Evaluator: Qwen-2.5-32B-Instruct
• Reviser: Qwen-2.5-Coder-32B-Instruct
• Aggregator: Mistral-Small-3.1-24B-Instruct-2503

This configuration reflects a well-balanced assignment with domain-specialized solvers (e.g., math) and stronger general-purpose planning and evaluation agents. In contrast, the five alternative configurations (X-MAS1 to X-MAS5) are constructed based on general instruction-tuned LLMs without prior empirical optimization. These setups are:

X-MAS1
• Planner: Mistral-Small-3.1-24B-Instruct-2503
• Solver: Qwen-2.5-Math-7B-Instruct
• Evaluator: Qwen-2.5-Coder-32B-Instruct
• Reviser: Qwen-2.5-Math-7B-Instruct
• Aggregator: Qwen-2.5-32B-Instruct

X-MAS2
• Planner: Mistral-Small-3.1-24B-Instruct-2503
• Solver: Qwen-2.5-Coder-32B-Instruct
• Evaluator: Qwen-2.5-Math-7B-Instruct
• Reviser: Qwen-2.5-Coder-32B-Instruct
• Aggregator: Qwen-2.5-32B-Instruct

X-MAS3
• Planner: Qwen-2.5-Math-7B-Instruct
• Solver: Mistral-Small-3.1-24B-Instruct-2503
• Evaluator: Qwen-2.5-32B-Instruct
• Reviser: Mistral-Small-3.1-24B-Instruct-2503
• Aggregator: Qwen-2.5-Coder-32B-Instruct

X-MAS4
• Planner: Qwen-2.5-Coder-32B-Instruct
• Solver: Qwen-2.5-32B-Instruct
• Evaluator: Qwen-2.5-Math-7B-Instruct
• Reviser: Qwen-2.5-32B-Instruct
• Aggregator: Mistral-Small-3.1-24B-Instruct-2503

X-MAS5
• Planner: Qwen-2.5-32B-Instruct
• Solver: Qwen-2.5-Coder-32B-Instruct
• Evaluator: Mistral-Small-3.1-24B-Instruct-2503
• Reviser: Qwen-2.5-Coder-32B-Instruct
• Aggregator: Qwen-2.5-Math-7B-Instruct

All configurations are evaluated on the MATH-500 subset following the X-MAS-Proto scheme.
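The role-to-model assignments above are simple mappings; as a sketch, the following encodes the X-MAS-Bench-guided configuration and the X-MAS1 alternative from this section (the dictionary layout and helper function are our own illustrative choices, not part of the released code).

```python
# Role -> model assignments transcribed from this section (chatbot-only scenario).
XMAS_DESIGN = {
    "planner": "Qwen-2.5-32B-Instruct",
    "solver": "Qwen-2.5-Coder-32B-Instruct",
    "evaluator": "Qwen-2.5-32B-Instruct",
    "reviser": "Qwen-2.5-Coder-32B-Instruct",
    "aggregator": "Mistral-Small-3.1-24B-Instruct-2503",
}

XMAS1 = {
    "planner": "Mistral-Small-3.1-24B-Instruct-2503",
    "solver": "Qwen-2.5-Math-7B-Instruct",
    "evaluator": "Qwen-2.5-Coder-32B-Instruct",
    "reviser": "Qwen-2.5-Math-7B-Instruct",
    "aggregator": "Qwen-2.5-32B-Instruct",
}

def num_distinct_models(config):
    """A heterogeneous MAS assigns more than one distinct backbone LLM."""
    return len(set(config.values()))

print(num_distinct_models(XMAS_DESIGN))  # 3
print(num_distinct_models(XMAS1))  # 4
```

Both configurations are heterogeneous; the difference studied in this section is whether the assignment is benchmark-guided, not how many distinct models it uses.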
The goal of this analysis is to understand the effect of heterogeneous agent assignments on final multi-agent performance, as well as to validate the necessity and advantages of X-MAS-Bench-guided agent selection. These baselines also serve to demonstrate the variance among manually configured pipelines in the absence of systematic design guidance.

E Experiments on X-MAS in Mixed Chatbot-Reasoner Scenarios

E.1 Model Selections

The available LLMs are Qwen-2.5-72B-Instruct and DeepSeek-R1-Distill-Qwen-32B.

E.1.1 Agentverse

Mathematics. The role assigner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the critic is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B.

Coding. The role assigner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the critic is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B.

Science. The role assigner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the critic is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B.

Medicine. The role assigner is Qwen-2.5-72B-Instruct, the solver is Qwen-2.5-72B-Instruct, the critic is Qwen-2.5-72B-Instruct, the evaluator is DeepSeek-R1-Distill-Qwen-32B.

Finance. The role assigner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the critic is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B.

E.1.2 LLM-Debate

Mathematics. The debate agent is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Coding. The debate agent is DeepSeek-R1-Distill-Qwen-32B, the aggregator is Qwen-2.5-72B-Instruct.

Science. The debate agent is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Medicine. The debate agent is Qwen-2.5-72B-Instruct, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Finance.
The debate agent is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

E.1.3 DyLAN

Mathematics. The node agent is DeepSeek-R1-Distill-Qwen-32B, the ranker is Qwen-2.5-72B-Instruct.

Coding. The node agent is DeepSeek-R1-Distill-Qwen-32B, the ranker is Qwen-2.5-72B-Instruct.

Science. The node agent is DeepSeek-R1-Distill-Qwen-32B, the ranker is Qwen-2.5-72B-Instruct.

Medicine. The node agent is Qwen-2.5-72B-Instruct, the ranker is Qwen-2.5-72B-Instruct.

Figure 5: Benchmarking LLMs on 5 MAS-related functions and 5 domains.

Finance. The node agent is DeepSeek-R1-Distill-Qwen-32B, the ranker is Qwen-2.5-72B-Instruct.

E.1.4 X-MAS-Proto

Mathematics. The planner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the reviser is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Coding. The planner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the reviser is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B, the aggregator is Qwen-2.5-72B-Instruct.

Science. The planner
is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the reviser is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Medicine. The planner is Qwen-2.5-72B-Instruct, the solver is Qwen-2.5-72B-Instruct, the reviser is Qwen-2.5-72B-Instruct, the evaluator is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Finance. The planner is Qwen-2.5-72B-Instruct, the solver is DeepSeek-R1-Distill-Qwen-32B, the reviser is DeepSeek-R1-Distill-Qwen-32B, the evaluator is DeepSeek-R1-Distill-Qwen-32B, the aggregator is DeepSeek-R1-Distill-Qwen-32B.

Table 5: Top-3 models per function and domain (reasoner and chatbot). Ranks 1–3 were originally marked with medal icons.

QA
1. Mathematics: QwQ-32B (80.5) | Coding: DeepSeek-R1-Distill-Qwen-14B (82.0) | Science: QwQ-32B (69.4) | Medicine: DeepSeek-R1-Distill-Llama-70B (75.1) | Finance: DeepSeek-R1-Distill-Qwen-32B (74.8)
2. Mathematics: DeepSeek-R1-Distill-Qwen-32B (79.0) | Coding: Qwen2.5-32B (80.3) | Science: DeepSeek-R1-Distill-Llama-70B (69.4) | Medicine: QwQ-32B (73.8) | Finance: QwQ-32B (74.6)
3. Mathematics: DeepSeek-R1-Distill-Qwen-14B (78.8) | Coding: DeepSeek-R1-Distill-Qwen-32B (80.0) | Science: DeepSeek-R1-Distill-Qwen-32B (68.3) | Medicine: Qwen2.5-72B (70.4) | Finance: DeepSeek-R1-Distill-Llama-70B (74.3)

Revise
1. Mathematics: QwQ-32B (78.6) | Coding: DeepSeek-R1-Distill-Llama-70B (81.7) | Science: QwQ-32B (67.0) | Medicine: Llama-3.1-70B (71.0) | Finance: QwQ-32B (76.6)
2. Mathematics: DeepSeek-R1-Distill-Llama-70B (78.2) | Coding: DeepSeek-R1-Distill-Qwen-32B (81.0) | Science: DeepSeek-R1-Distill-Llama-70B (66.3) | Medicine: DeepSeek-R1-Distill-Llama-70B (66.3) | Finance: DeepSeek-R1-Distill-Llama-70B (73.9)
3. Mathematics: DeepSeek-R1-Distill-Qwen-32B (77.8) | Coding: Qwen2.5-7B (79.2) | Science: DeepSeek-R1-Distill-Qwen-32B (65.9) | Medicine: DeepSeek-R1-Distill-Llama-70B (70.7) | Finance: DeepSeek-R1-Distill-Qwen-32B (73.5)

Aggregation
1. Mathematics: QwQ-32B (83.2) | Coding: Qwen2.5-72B (85.5) | Science: DeepSeek-R1-Distill-Llama-70B (71.7) | Medicine: DeepSeek-R1-Distill-Llama-8B (74.1) | Finance: DeepSeek-R1-Distill-Qwen-32B (76.4)
2. Mathematics: DeepSeek-R1-Distill-Qwen-32B (82.2) | Coding: QwQ-32B (84.2) | Science: QwQ-32B (71.3) | Medicine: QwQ-32B (73.8) | Finance: DeepSeek-R1-Distill-Llama-70B (76.4)
3. Mathematics: DeepSeek-R1-Distill-Qwen-14B (81.2) | Coding: DeepSeek-R1-Distill-Llama-70B (83.1) | Science: DeepSeek-R1-Distill-Qwen-32B (70.3) | Medicine: DeepSeek-R1-Distill-Llama-70B (73.6) | Finance: QwQ-32B (74.6)

Planning
1. Mathematics: Qwen2.5-14B (65.0) | Coding: Llama-3.1-70B (71.0) | Science: Qwen2.5-Coder-7B (56.1) | Medicine: Qwen2.5-Coder-14B (65.4) | Finance: Qwen2.5-72B (64.7)
2. Mathematics: Mistral-Small-3.1-24B (65.0) | Coding: Qwen2.5-14B (70.5) | Science: Qwen2.5-32B (55.6) | Medicine: Qwen2.5-7B (65.3) | Finance: Qwen2.5-Coder-14B (63.6)
3. Mathematics: Qwen2.5-32B (64.7) | Coding: Qwen2.5-32B (70.1) | Science: Qwen2.5-72B (55.6) | Medicine: Qwen2.5-32B (65.2) | Finance: Qwen2.5-14B (63.2)

Evaluation
1. Mathematics: DeepSeek-R1-Distill-Llama-70B (85.9) | Coding: DeepSeek-R1-Distill-Qwen-32B (56.2) | Science: DeepSeek-R1-Distill-Llama-70B (70.9) | Medicine: Llama-3.1-70B (70.5) | Finance: OpenThinker-32B (76.6)
2. Mathematics: QwQ-32B (84.2) | Coding: Qwen2.5-14B (55.4) | Science: DeepSeek-R1-Distill-Qwen-32B (69.1) | Medicine: DeepSeek-R1-Distill-Llama-70B (70.2) | Finance: QwQ-32B (73.8)
3. Mathematics: OpenThinker-32B (83.3) | Coding: QwQ-32B (55.3) | Science: OpenThinker-32B (69.0) | Medicine: DeepSeek-R1-Distill-Qwen-14B (69.8) | Finance: DeepSeek-R1-Distill-Llama-70B (73.1)
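Read programmatically, Table 5 supports benchmark-guided selection as a simple argmax per (function, domain) cell. The sketch below uses a few scores transcribed from the table; the dictionary structure and helper are our own illustration, not the paper's released tooling.

```python
# A few (function, domain) -> {model: score} cells transcribed from Table 5.
scores = {
    ("QA", "Mathematics"): {
        "QwQ-32B": 80.5,
        "DeepSeek-R1-Distill-Qwen-32B": 79.0,
        "DeepSeek-R1-Distill-Qwen-14B": 78.8,
    },
    ("Planning", "Science"): {
        "Qwen2.5-Coder-7B": 56.1,
        "Qwen2.5-32B": 55.6,
        "Qwen2.5-72B": 55.6,
    },
}

def best_model(function, domain):
    """Benchmark-guided selection: pick the highest-scoring model for a cell."""
    cell = scores[(function, domain)]
    return max(cell, key=cell.get)

print(best_model("QA", "Mathematics"))  # QwQ-32B
print(best_model("Planning", "Science"))  # Qwen2.5-Coder-7B
```

In practice one would break near-ties (e.g., the two 55.6 entries under Planning/Science) by secondary criteria such as model size or cost.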
Do Large Language Models Excel in Complex Logical Reasoning with Formal Language?

Jin Jiang1,2†, Jianing Wang2†, Yuchen Yan2,3, Yang Liu2, Jianhua Zhu1, Mengdi Zhang2, Xunliang Cai2, Liangcai Gao1
1Peking University, 2Meituan Group, 3Zhejiang University
Correspondence: jiangjin@stu.pku.edu.cn, gaoliangcai@pku.edu.cn

Abstract

Large Language Models (LLMs) have been shown to achieve breakthrough performance on complex logical reasoning tasks. Nevertheless, most existing research focuses on employing formal language to guide LLMs to derive reliable reasoning paths, while systematic evaluations of these capabilities are still limited. In this paper, we aim to conduct a comprehensive evaluation of LLMs across various logical reasoning problems utilizing formal languages. From the perspective of three dimensions, i.e., the spectrum of LLMs, the taxonomy of tasks, and the format of trajectories, our key findings are: 1) Thinking models significantly outperform Instruct models, especially when formal language is employed; 2) All LLMs exhibit limitations in inductive reasoning capability, irrespective of whether they use a formal language; 3) Data in the PoT format achieves the best generalization performance across other languages. Additionally, we also curate formal-language training data to further enhance small language models, and the experimental results indicate that a simple rejected fine-tuning method can better enable LLMs to generalize across formal languages and achieve the best overall performance. Our codes and reports are available at https://github.com/jiangjin1999/FormalEval.

1 Introduction

Logical reasoning, i.e., deductive, inductive, and abductive, is one of the imperative natural language processing (NLP) tasks and plays a significant role in artificial intelligence (AI) to perform human-like decision-making, task-solving, and deep-thinking (Zhang et al., 2021; Yang et al., 2023b; Yu et al., 2024; Xu et al., 2025).
Different from conventional natural language understanding and generation, logical reasoning requires AI systems to explicitly provide a meticulous elucidation of thoughts and verifiable derivation chains,

[Figure 1: Evaluation framework with three specific dimensions: spectrum of LLMs, taxonomy of logical reasoning tasks, and format of trajectories.]

which is crucial and challenging (Cummins et al., 1991). Early works have developed multiple formal languages with symbolic solvers to make the reasoning steps computable and structured (RANISE, 2003; Bulatov et al., 2005; Bjørner et al., 2015). Recently, the emergence of reasoning capabilities in large language models (LLMs) has incentivized significant progress in complex reasoning tasks, such as mathematics, commonsense, and symbolic reasoning (Achiam et al., 2023; Bi et al., 2024). Current studies1 have found that LLMs can achieve remarkable performance with the aid of formal languages and symbolic solvers, especially when integrating well-designed task-specific instructions (Lyu et al., 2023; Pan et al., 2023), chain-of-thought (CoT) reasoning patterns (Wei et al., 2022; Ye et al., 2023), and valuable solver feedback (He-Yueya et al., 2023; Gao et al., 2023; Wang et al., 2024). Such approaches aim to formalize the given logical problem and continually adjust the results based on

1 Due to space limitations, the detailed description of related work is given in Appendix A.

arXiv:2505.16998v1
[cs.CL] 22 May 2025

the solver's feedback. Despite substantial efforts exhibiting exceptional performance, there are still relatively limited systematic and comprehensive evaluations. Thus, a natural question remains open: does the LLM really excel in complex logical reasoning problems with formal language?

To bridge the gap, this paper endeavors to perform a comprehensive evaluation of LLMs utilizing various formal languages to tackle diverse logical reasoning problems. At first, we develop the evaluation architecture to clearly express the entire assessment view (as illustrated in Section 2), with the framework shown in Figure 1. Specifically, we divide the entire assessment into three distinct dimensions, including the spectrum of LLMs, the taxonomy of logical reasoning tasks, and the format of trajectories. For the family of LLMs, we further consider the different reasoning patterns that have been injected into model training, such as short thinking (e.g., GPT-4o (Achiam et al., 2023), Qwen1.5/2/2.5 (Bai et al., 2023), LLaMA3/3.1/3.3 (Grattafiori et al., 2024)) and long thinking (e.g., DeepSeek-R1-Distill-Qwen (Guo et al., 2025)). For logical reasoning, we adhere to the classic definitions (Flach and Kakas, 2000), categorizing tasks into deductive, inductive, and abductive reasoning. Additionally, we account for tasks that may integrate multiple reasoning types by introducing a new category referred to as mixed-form reasoning. Regarding the format of trajectories, we consider three main formal languages ("Python", "Z3", "CSP") with the default natural language format as "Text".

Secondly, we perform a thorough evaluation across these three dimensions (as detailed in Section 3). Many contemporary benchmarks purely emphasize informal text patterns and lack comprehensive integration of different formal languages and logical reasoning tasks (Lei et al., 2024; Xu et al., 2025; Xia et al., 2025).
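To make the informal-versus-formal contrast concrete, the same grade-school math word problem can be answered either as free-text reasoning or as an executable PoT trajectory. A minimal illustrative sketch; the word problem below is invented for this example and is not drawn from any benchmark:

```python
# Invented GSM8K-style word problem:
# "A baker bakes 12 trays of 8 rolls each and sells 70 rolls.
#  How many rolls are left?"
# The Text format would answer in prose; the PoT format answers with code
# whose execution yields the final answer.
trays, rolls_per_tray, sold = 12, 8, 70
remaining = trays * rolls_per_tray - sold
print(remaining)  # 26
```

The appeal of the formal version is that the arithmetic is delegated to the interpreter rather than carried out token by token in natural language.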
For instance, it is widely recognized that Python is superior to plain text when addressing mathematical problems (Friedman, 2023; Gao et al., 2023), but it remains unclear whether Python is equally effective in resolving BBH (Suzgun et al., 2022) and bbeh (Kazemi et al., 2025) problems. To fill this gap, this part aims to investigate whether current LLMs can solve a variety of logical reasoning tasks utilizing different formal languages. From this study, we derive several intriguing observations: 1) Thinking models significantly outperform Instruct models, especially when formal language is employed; 2) All LLMs exhibit limitations in inductive reasoning capability, irrespective of whether they use a formal language; 3) LLMs typically produce inferior performance on difficult tasks. These findings prompt a new inquiry: do large models possess generalization capabilities when employing formal languages?

Thirdly, we further investigate the generalization across different reasoning tasks and formal languages (as illustrated in Section 4). To reach this goal, we collect training data from the training sets of the current evaluation tasks, classified into three types: deductive, inductive, and abductive. For each task type, we also provide different trajectories according to the usage of (in)formal languages. To make a fair comparison, we only use data from a single language type for SFT training, and the training data has
the same scale. From the experiments, we observe that the LLM can obtain significant in-domain performance gains on multiple logical reasoning tasks. In addition, we also discover an elusive phenomenon: CSP is hard to generalize to other formal and informal languages, but it is easy to generalize from other languages to CSP. Therefore, we speculate that the poor performance of LLMs on some formal languages can be attributed to a lack of pertinent knowledge and of the potential for stimulated reasoning.

Lastly, based on the previous exploration, we aim to amplify the capabilities of weaker models in using formal languages to solve reasoning problems. Concretely, we propose a simple but effective rejected fine-tuning (RFT) approach to curate different formal-language training data. After the enrichment, the overall accuracy of using informal and formal languages for complex logical tasks can be improved by more than 10%.

In summary, the main contributions are as follows:

• In light of the insufficient evaluations of existing works, we collect 66 tasks with multiple widely used formal languages and provide a comprehensive evaluation of current LLMs across three dimensions, including the spectrum of LLMs, the taxonomy of tasks, and the format of trajectories.

• Considering that different formal languages have different expressions for reasoning, we explore the generalization across various formal languages.

• To further enhance the capability of LLMs in utilizing formal languages to solve complex logical reasoning, we introduce a simple but effective rejected fine-tuning method with curated formal-language data. The experimental results indicate the effectiveness of considering the generalization of formal language across various logical tasks.

2 Preliminary

As illustrated in Figure 1, our evaluation framework is structured along three dimensions: Model, Trajectory Format, and Task Type.
In this section, we introduce the two key points of complex reasoning task categorization (Section 2.1) and trajectory format design (Section 2.2).

2.1 Taxonomy of Complex Logical Reasoning

Inspired by Xu et al. (2025), we present a unified taxonomy that categorizes a wide range of complex reasoning tasks into four major types: Deductive, Inductive, Abductive, and Mixed-Form. To elaborate, the categorization is based on the nature of reasoning required in human-like thinking in the real world: 1) Deductive reasoning is the forward reasoning process with rules that starts from the given premises and arrives at the conclusion (Goel, 2007; Johnson-Laird, 1999). Formally, we can denote the process as premise --(rule)--> conclusion. 2) Inductive reasoning is the process that infers specific rules based on multiple premises and conclusions. It can be represented as (premise, conclusion) --> rule. 3) Abductive reasoning is the backward counterpart of deduction, which aims to obtain the premise based on the conclusion; the process can be viewed as conclusion --(rule)--> premise. 4) Mixed-Form reasoning involves at least two of the above three types of reasoning. In real-life scenarios, most complex problems involve mixed reasoning, including but not limited to temporal-spatial reasoning, NLU, knowledge reasoning, and mathematical reasoning. In pursuit of specific benchmarks based on these categories, we meticulously collect 66 subsets of data, and the detailed information can be
found in Table 2. The details of the specific datasets are shown in Appendix B.

2.2 Trajectory Format

As shown in Figure 1, we categorize trajectory formats into two main types: informal language (natural language) and formal language. Informal language can be expressed as free-form text, while formal languages include programming languages (e.g., Python) and logic-based languages (e.g., Z3 and CSP). They can be modeled as:

LLM(Q) = ⟨s_1, s_2, ..., s_n⟩ --Exec--> A

where Q is the input question, and LLM(Q) represents the trajectory generated by the LLM. Each step s_i ∈ L_LLM corresponds to a structured unit (e.g., a code or logic expression), and the trajectory is executed by an external engine to produce the final answer A.

For PoT, we use Python 3.12 and its standard library as the execution environment; each step s_i ∈ L_PoT is a valid Python statement. For Z3, we adopt the Z3 theorem prover as the executor, and Z3 trajectories are composed of declarative symbolic steps s_i ∈ L_Z3. For CSP, we use the python-constraint library as the trajectory executor; each CSP trajectory consists of variable declarations, domain assignments, and constraint definitions, with steps s_i ∈ L_CSP.

In addition, we chose Z3 over Prover9 because Z3 not only supports first-order logic (Prover9-FOL) but also natively supports rich theories such as integers and arrays. A more detailed description can be found in Appendix C.

3 PART I: Evaluation across LLMs, Tasks, Trajectories

In PART I, we present a comprehensive evaluation across three dimensions: Models, Trajectory Formats, and Reasoning Task Types. Specifically, we evaluate both Instruct and Thinking models, ranging from 7B to 72B (see Figure 1). For reasoning tasks, we follow the taxonomy introduced in Section 2.1. For trajectory formats, we evaluate three formal languages and natural language, as detailed in Section 2.2. All evaluations are conducted in a zero-shot setting.
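The trajectory-execution abstraction of Section 2.2 can be made concrete with a minimal sketch: a PoT trajectory is just a string of Python statements that an external engine runs in order to read off the final answer A. The trajectory string below is a hypothetical model output for a word-sorting-style question, not an example taken from the paper:

```python
# Minimal sketch of LLM(Q) = <s_1, ..., s_n> --Exec--> A for the PoT format:
# the model emits Python statements, and an external engine executes them.

# Hypothetical PoT trajectory for a word-sorting-style question (illustrative).
pot_trajectory = """
words = "panda quail zebra mango".split()
answer = " ".join(sorted(words))
"""

def execute_trajectory(trajectory: str) -> str:
    """Run the trajectory in a fresh namespace and read off the `answer` variable."""
    namespace = {}
    exec(trajectory, namespace)  # the Exec step of the abstraction
    return namespace["answer"]

print(execute_trajectory(pot_trajectory))  # mango panda quail zebra
```

A trajectory that fails to execute would simply raise here; that failure mode is what the execution rate (Exec Rate) measures, and what the three-step self-refinement process applied during code execution is meant to repair.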
For formal languages (PoT, Z3, CSP), we apply a three-step self-refinement process during code execution. Detailed evaluation settings are provided in Appendix D.1.

3.1 Model Performance for Reasoning Tasks and Trajectory Formats

As shown in Figure 2, the radar charts (Overall + Fine-grained) illustrate model performance under different task types and trajectory formats. The complete results can be found in Appendix E.

[Figure 2: Radar plots illustrating the performance (%) of multiple LLMs across different reasoning task types (Deductive, Inductive, Abductive, Mixed-Form) and trajectory formats (Text, PoT, Z3, CSP). Overall (top 1×4) shows aggregated performance by reasoning type and format. Fine-grained (below, 4×4) presents fine-grained results on individual tasks.]

Thinking models outperform Instruct models. From the Overall part, we can observe that the Thinking models (e.g., QwQ-32B) outperform the Instruct series in most tasks, especially in the Inductive and Mixed-Form tasks. The disparities between them reflect that the Thinking mode can better elicit the LLM to provide reliable trajectories for formal reasoning. Previous evaluations (Xu et al., 2025) have demonstrated a similar finding that Instruct models achieve unsatisfactory results in inductive reasoning, but they do not suggest that the
Thinking model can perform well.

Text outperforms formal languages, except for QwQ-32B. Most models perform better with the Text trajectory format than with formal languages. In the Fine-grained section, as the trajectory format shifts from Text to CSP, the radar-map coverage area gradually decreases, especially in the bbeh series of subtasks. However, QwQ-32B is the only model that stays ahead across all tasks and trajectories, maintaining a high level of performance in all formal languages.

Formal-language performance drops significantly on difficult tasks. Models can achieve comparable or even better performance than Text with formal languages on simple tasks (e.g., Z3 and CSP on Deductive-BBH_web), but the performance of formal languages drops off substantially on complex tasks (e.g., Deductive-bbeh_boardgameQA). This phenomenon again suggests that current large models are better at using non-formal languages when expressing complex logic. Possible reasons include: 1) the model training process is dominated by natural language, with a scarcity of formal-language samples; and 2) the models lack augmentation for difficult and complex problems. On such tasks, Text performance is merely average, while formal-language performance decreases significantly. It is worth noting that GPT-4o's performance in this area is relatively stable, possibly due to optimization in its data.

Small models perform poorly on formal language. Both Instruct and Thinking small models have acceptable overall performance under Text, but when dealing with formal languages, their performance drops rapidly. Taking R1-Distill-Qwen-7B as an example, its performance under the CSP trajectory is even significantly lower than that of comparable Instruct models, indicating that the Thinking mechanism struggles to effectively support formal-language reasoning at low parameter scales.
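To ground what these structured trajectories demand of a model, the sketch below shows the kind of constraint program a CSP trajectory encodes for a logical-deduction-style ordering puzzle. The paper's CSP trajectories use the python-constraint library; this stand-in enumerates assignments with the standard library instead, and the puzzle itself is invented for illustration:

```python
from itertools import permutations

# CSP-style formulation of an invented logical-deduction puzzle:
# three books are stacked; the green book is on top, and the red book
# is somewhere above the blue book. The variables are stack positions.
objects = ["red", "blue", "green"]

def satisfies(order):
    """order[0] is the top of the stack; check both constraints."""
    return order.index("green") == 0 and order.index("red") < order.index("blue")

# Brute-force search over the domain, standing in for a constraint solver.
solutions = [order for order in permutations(objects) if satisfies(order)]
print(solutions)  # [('green', 'red', 'blue')]
```

Producing such a program requires the model to translate every stated relation into an explicit constraint, which is exactly the step that small models tend to fail at.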
In addition, on high-complexity tasks such as bbeh-time and bbeh-shuffle, the small models are almost completely ineffective with structured trajectories such as Z3 and CSP, failing even to complete the basic logical steps, which shows a serious lack of ability to deal with formal reasoning problems.

Overall, all models except QwQ-32B show a continuous performance degradation as the trajectory format changes from Text to formal languages (PoT, Z3, CSP). This phenomenon suggests that current mainstream LLMs are more adept at handling natural language tasks, while they are still deficient in formal-language reasoning.

3.2 Different Reasoning Tasks Prefer Different Trajectory Formats

In this section, we use the GPT-4o results as an anchor point to conduct a detailed analysis of how different tasks exhibit varying preferences for trajectory formats. As shown in Figure 3, GPT-4o exhibits diverse preferences across trajectory formats. Below, we summarize the main observations.

Text performs better in language comprehension and open-ended tasks. First, in tasks such as BBH_snarks, bbeh_linguini, and bbeh_nycc, Text is closer to the nature of the task in humor comprehension, linguistic style recognition, and fuzzy semantic parsing, and is superior to formal language. Secondly, in induction and abduction tasks such as AbductionRules, NeuLRabductive, NeuLRinductive, and Clutrr, where reasoning relies on linguistic expressions, the Text format is more advantageous. In addition, LogicQA, although categorized as a logic task, is more akin to a general knowledge quiz: it originates from the Chinese Civil Service Exam, where textual ability plays a dominant role in performance. (Cases in Figure 6)

Well-structured tasks prefer PoT. The PoT format is particularly effective in tasks with strong structural characteristics, such as numerical computation and symbolic reasoning tasks like BBH_dyck_languages and BBH_word_sorting. In these settings, PoT enables efficient computation and facilitates the handling of rules involving nesting and ordering. Additionally, in tasks that involve temporal sequences, object tracking, and spatial reasoning, such as bAbI16, bbeh_shuffled_objects, and bbeh_spatial_reasoning, PoT demonstrates strong performance by leveraging programmatic trajectories to clearly express intermediate states and transformation processes. (Case in Figure 7)

[Figure 3: Preferred reasoning-task performance across different trajectory formats (Text, PoT, Z3, CSP) in the GPT-4o results. Each subplot shows task accuracy under different formats, with execution rate (Exec Rate) plotted as a black line. The highlighted bars represent the most preferred trajectory format for each task.]

Z3 handles formal and FOL reasoning well. The Z3 format shows a good adaptation to formal logic tasks, especially tasks with strict logical rules: LogicBench, BBH_formal_fallacies, and BBH_logical_deduction. This type of task is essentially convertible to first-order logical expressions, so using an SMT solver (e.g., Z3) as the trajectory language is more suitable. In addition, BBH_geometric_shapes involves spatial reasoning, where the boolean logical expressiveness of Z3 is more advantageous. (Case in Figure 9)

CSP shows advantages in complex constraints. The CSP format shows advantages in some structured logic tasks, such as BBH_logical_deduction, a result consistent with the findings of Logic-LM (Pan et al., 2023). More interestingly, on ARLSAT, a task derived from the Law School Admission Test, CSP also achieves the optimal result, which contrasts with the conclusion of the previous literature (Pan et al., 2023) that Z3 is better suited for this task. This difference may stem from the characteristics of the tasks themselves; in ARLSAT, the stems of the questions typically contain constraints, which are more consistent in form with the way CSPs are expressed. (Case in Figure 9)

Beyond the four preferences discussed above, we can observe that the execution success rate (Exec Rate) is also a key factor underlying the differences among the various forms of language. Moreover, gsm8k achieves its best performance under the Text format, which is inconsistent with findings from previous studies (e.g., Ye et al. (2023); He-Yueya et al. (2023)). This discrepancy may be attributed to two
factors: 1) prior work often involves task-specific optimization for mathematical reasoning; 2) current large language models are trained on substantial amounts of mathematical natural-language reasoning data, which enhances their generalization ability in the Text format.

Overall, task-trajectory alignment plays a critical role. Different tasks exhibit preferences for specific trajectory formats: some tasks are inherently better suited to certain formal representations, and using inappropriate formats may even hinder model performance. Therefore, when constructing multi-trajectory training or evaluation frameworks, it is important to carefully consider the alignment among task structure, target language, and model capabilities.

[Figure 4: Generalization performance across fine-grained (task type × format) configurations. Each cell shows the performance gain (∆) from training on the row configuration and evaluating on the column configuration. (a) Fine-grained by Trajectory Format. (b) Fine-grained by Reasoning Type.]

Figure 5: Generalization performance across reasoning types and trajectory formats (coarse-grained analysis). Each cell reports the performance gain (∆) when training on the row group and evaluating on the column group:

(a) Trajectory Format (rows = training, columns = evaluation)
        Text    PoT     Z3    CSP
Text     5.7    5.1   -0.2    8.7
PoT      4.2   13.3    8.4   11.2
Z3       3.9    4.4   14.8   -4.8
CSP      4.5  -15.8   -8.9   28.1

(b) Reasoning Type (rows = training, columns = evaluation)
           Deductive  Inductive  Abductive
Deductive        5.2        5.3        4.7
Inductive        4.0       22.8        4.7
Abductive        6.2        5.3       18.0

4 PART II: Generalization Analysis across Reasoning Tasks and Trajectory Formats

4.1 Setup and Visualization Overview

We collect the training splits of the evaluation datasets, categorized into three reasoning types: Deductive, Inductive, and Abductive (excluding Mixed-Form due to variable-control challenges). Each training instance is represented in four trajectory formats: Text, PoT, Z3, and CSP. Details are provided in Section 5.1.

Based on the above data, we conduct two sets of analytical experiments: coarse-grained and fine-grained. 1) Coarse-grained experiments, as shown in Figure 5, involve training on 7 groups of data (3 reasoning types + 4 trajectory formats), each mixed with general-domain data, and evaluating on the same 3 reasoning types and 4 formats. 2) Fine-grained experiments, as shown in Figure 4, involve training on 12 groups of data (3 reasoning types × 4 trajectory formats), each mixed with general-domain data, and evaluating across all 12 combinations of reasoning types and formats. Each heatmap cell shows the performance gain (∆) obtained by training on the configuration in the row and evaluating on the configuration in the column. The performance gain reflects the improvement introduced by our constructed data when mixed with the general-domain data (trained on Qwen2.5-7B).
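Each cell of these heatmaps is a simple difference of accuracies. Below is a minimal sketch of how one row of Figure 5a could be assembled: the baseline accuracies are the Qwen2.5-7B-Baseline numbers from Table 1, while the post-training accuracies are hypothetical values chosen here only so that the resulting gains reproduce the CSP training row of Figure 5a:

```python
# Each heatmap cell: delta = acc(train on row, eval on col) - acc(baseline, eval on col).
formats = ["Text", "PoT", "Z3", "CSP"]

# Baseline accuracies of Qwen2.5-7B per evaluation format (from Table 1).
baseline_acc = {"Text": 49.7, "PoT": 40.0, "Z3": 27.1, "CSP": 20.0}

# Hypothetical post-training accuracies after training on CSP data,
# chosen so the gains match the CSP row of Figure 5a.
trained_on_csp = {"Text": 54.2, "PoT": 24.2, "Z3": 18.2, "CSP": 48.1}

def gain_row(trained_acc, baseline):
    """One heatmap row: per-format gain of the trained model over the baseline."""
    return {f: round(trained_acc[f] - baseline[f], 1) for f in formats}

print(gain_row(trained_on_csp, baseline_acc))
# {'Text': 4.5, 'PoT': -15.8, 'Z3': -8.9, 'CSP': 28.1}
```

The negative PoT and Z3 entries in this row are exactly the cross-format interference discussed in Section 4.2.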
4.2 Coarse-Grained Generalization Analysis

Significant in-domain improvement. The strongest performance gains are observed along the diagonal, indicating that the model benefits most when the training and evaluation data come from the same group. Notably, the improvements for CSP (Train) → CSP (Eval) and Inductive (Train) → Inductive (Eval) reach 28.1 and 22.8, respectively. Combined with observations from Part I, this can be partially attributed to the relatively low baseline performance of the Qwen2.5-7B model on the CSP and Inductive dimensions, meaning that even a small amount of in-domain data leads to significant improvement.

PoT transfers well, while CSP transfers poorly. Outside the diagonal, in Figure 5a, PoT migrates well to Text, Z3, and CSP. This might be related to the large amount of code data in the pre-training corpus. CSP, on the other hand, has a positive effect only on Text and CSP, with significant negative effects on PoT (-15.8) and Z3 (-8.9). This suggests that there may be structural differences among formal languages.

Reasoning types: all exhibit positive transfer. The overall transfer effect is relatively balanced across the different reasoning types (Figure 5b). The relatively small improvement on Deductive itself may be related to the model's higher base level on Deductive.

Table 1: Performance of LLMs on different trajectory formats before and after formal-data enhancement. Accuracy (Acc) and execution rate (Exec) are reported for the Text, PoT, Z3, and CSP formats. Qwen2.5-7B-Baseline denotes the baseline model trained with general data only; Qwen2.5-7B-Base w.Formal denotes the augmented model trained with a mixture of formal-language data. Improvements after augmentation are shown in parentheses.

Model                    | Text Acc | PoT Acc / Exec | Z3 Acc / Exec | CSP Acc / Exec | Avg Acc / Exec
GPT-4o                   | 66.7     | 64.0 / 91.5    | 54.5 / 87.4   | 53.0 / 83.98   | 59.0 / 87.6
Qwen2.5-7B-Instruct      | 52.3     | 37.0 / 78.6    | 33.0 / 70.0   | 25.0 / 52.1    | 37.0 / 66.9
Qwen2.5-72B-Instruct     | 63.4     | 54.0 / 85.1    | 42.5 / 79.6   | 43.0 / 75.2    | 51.0 / 80.0
Qwen2.5-7B-Baseline      | 49.7     | 40.0 / 75.4    | 27.1 / 68.2   | 20.0 / 52.2    | 34.0 / 65.3
Qwen2.5-7B-Base w.Formal | 52.7 (+3.0) | 44.0 (+4.0) / 83.5 (+8.1) | 34.8 (+7.7) / 76.5 (+8.3) | 37.0 (+17.0) / 68.1 (+15.9) | 42.0 (+8.0) / 76.0 (+10.7)

4.3 Fine-Grained Generalization Analysis

Deductive-CSP is most easily generalized. In Figure 4, all entries in the Deductive-CSP column show improvements: the inclusion of any data contributes positively to its performance. This is mainly because CSP has a relatively low baseline, and the Deductive category contains some relatively simple tasks (BBH_logical_deduction_three rises from 40% to 92%). As a result, adding similar data leads to performance gains.

CSP and Z3 transfer well across reasoning types. In Figure 4a, all entries (Ded/Ind/Abd) within the CSP and Z3 blocks show positive gains, indicating that regardless of reasoning type, the CSP and Z3 formats can be effectively transferred.

Abductive transfers well across trajectory formats. In Figure 4b, all entries (Text/PoT/Z3/CSP) within the Abductive block show improvements, suggesting that regardless of trajectory format, abductive reasoning can be effectively transferred and improved.

5 PART III: Enhancing LLMs with Formal Data

5.1 Formal Data Construction via RFT

To enhance model capability in formal languages, we collect the portions of the current evaluation datasets that overlap with training data as part of our training set. All dataset details are provided in Table 2. As before, the training data is categorized into three types (Deductive, Inductive, and Abductive) and four trajectory formats (Text, PoT, Z3, and CSP). First, we extract up to 3,000 samples from all training data. Then, GPT-4o is chosen as the teacher model to construct outputs. To obtain high-quality response data, we use Rejection-sampling Fine-Tuning (RFT): we sample GPT-4o on each question several times and then keep only those samples whose code is executable and whose final answers are verified to be correct. The statistics of the filtered data are shown in Table 3. The number in parentheses after each model name indicates the amount of added data.

5.2 Main Result

As shown in Table 1, the enhanced model improves accuracy by 3.0% on Text, 7.7% on Z3 (with an 8.3% gain in execution rate), and 17.0% on CSP (from 20.0% to 37.0%, with a 15.9% increase in execution rate). Overall, average accuracy rises from 34.0% to 42.0%, and execution rate from 65.3% to 76.0%.

Beyond outperforming the baseline, our formal-data-enhanced model also surpasses the open-source model Qwen2.5-7B-Instruct across all formats. Qwen2.5-7B-Base w.Formal has far fewer parameters than Qwen2.5-72B, yet the performance gap is narrowed by formal-data fine-tuning. This suggests that formal-data augmentation can effectively improve the competitiveness of small models on formal reasoning tasks.

6 Conclusion

In this paper, we provide a comprehensive evaluation of LLMs utilizing various formal languages to
solve different categories of logical reasoning tasks. We first develop a systematic evaluation architecture and decompose it into three dimensions. Then, we perform a thorough evaluation across these three dimensions to show whether current LLMs can excel in formal-language utilization. Furthermore, we explore the generalization across multiple formal languages and provide a simple but effective method for enhancing the capability of small language models.

For future directions, on the one hand, we should strive to enhance the model's reasoning capabilities in a balanced manner across different trajectory formats and task types, especially for Instruct models. At the same time, it may be valuable to construct formal-language reasoning datasets in a "thinking" style. On the other hand, we can leverage the task-specific preferences for trajectory formats to further expand the capability boundaries of the model. One approach is to incorporate reasoning results from different trajectory formats as individual voters in a majority-voting scheme. Another approach is to introduce multiple symbolic solvers for different reasoning trajectories during the thinking stage of the Thinking model.

Limitations

This work provides a step toward evaluating and enhancing LLMs through formal reasoning formats, but several limitations remain. First, the landscape of LLMs is evolving rapidly. Our experiments focus on a limited set of models available at the time, and newer models may change performance trends. Second, while we include various reasoning types and benchmark datasets, the overall dataset coverage is limited. Our formal data augmentation is applied to a subset of tasks and may not generalize to other domains. Third, we focus on three formal formats, PoT, Z3, and CSP, due to their executability and popularity. However, this excludes other symbolic systems such as Lean, Prolog, Coq, or SMT-LIB, which future work could explore.
Finally, our formal data construction is based on an Instruct model (GPT-4o). With the rise of stronger Thinking models, generating think-style formal data may become more feasible and diverse in the future.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and others. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, and others. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.

Rama Krishna Sai Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Yejin Choi, and Noah A. Smith. 2020. AbductionRules: Training transformers to explain unexpected inputs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4246–4258.

Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, and others. 2024. DeepSeek LLM: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954.

Nikolaj Bjørner, Anh-Dung Phan, and Lars Fleckenstein. 2015. νZ - an optimizing SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems: 21st International Conference, TACAS 2015, Held as Part of
the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings 21, pages 194–199. Springer.

Andrei Bulatov, Peter Jeavons, and Andrei Krokhin. 2005. Classifying the complexity of constraints using finite algebras. SIAM Journal on Computing, 34(3):720–742.

Benjamin Callewaert, Simon Vandevelde, and Joost Vennekens. 2025. Verus-lm: a versatile framework for combining llms with symbolic reasoning. arXiv preprint arXiv:2501.14540.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and 1 others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Denise D Cummins, Todd Lubart, Olaf Alksnis, and Robert Rist. 1991. Conditional reasoning and causation. Memory & Cognition, 19:274–282.

Peter A Flach and Antonis C Kakas. 2000. Abduction and Induction: Essays on their Relation and Integration, volume 18. Springer Science & Business Media.

Robert Friedman. 2023. Large language models and logical reasoning. Encyclopedia, 3(2):687–697.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR.

Vinod Goel. 2007. Anatomy of deductive reasoning. Trends in Cognitive Sciences, 11(10):435–441.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The llama 3 herd of models. arXiv e-prints, pages arXiv–2407.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, and 1 others. 2024. Folio: Natural language reasoning with first-order logic. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22017–22031.

Joy He-Yueya, Gabriel Poesia, Rose Wang, and Noah Goodman. 2023. Solving math word problems by combining language models with symbolic solvers. In The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874.

Philip N Johnson-Laird. 1999. Deductive reasoning. Annual Review of Psychology, 50(1):109–135.

Mehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, Sanket Vaibhav Mehta, Lalit K Jain, Virginia Aglietti, Disha Jindal, Peter Chen, and 1 others. 2025. Big-bench extra hard. arXiv preprint arXiv:2502.19187.

Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, and Deepak Ramachandran. 2023. Boardgameqa: A dataset for natural language reasoning with contradictory information. Advances in Neural Information Processing Systems, 36:39052–39074.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS
29th Symposium on Operating Systems Principles.

Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, and Kang Liu. 2024. S3eval: A synthetic, scalable, systematic evaluation suite for large language model. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1259–1286.

Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2021. Logiqa: a challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 3622–3628.

Qiyuan Liu, Ming Yan, Yiyang Liu, Pan Lu, Siwei Wang, and Songfang Huang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1866–1877.

Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning. In The 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL 2023).

Theo X Olausson, Alex Gu, Benjamin Lipkin, Cedegao E Zhang, Armando Solar-Lezama, Joshua B Tenenbaum, and Roger Levy. 2023. Linc: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5153–5176.

Liangming Pan, Alon Albalak, Xinyi Wang, and William Yang Wang. 2023. Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3806–3824.
Gaurav Parmar, Shikhar Murari, and Mohit Bansal. 2023. Logicbench: A challenging benchmark for logical reasoning with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 148–168.

S. Ranise. 2003. The smt-lib format: An initial proposal. 1st PDPAR, 2003.

Abulhair Saparov and He He. 2023. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations.

Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4506–4515.

Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Michael Petrov, Vincent Y Zhao, Ryan Murphy, Adam Roberts, and 1 others. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.

Fumiya Uchiyama, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke Iwasawa, and Yutaka Matsuo. 2023. Which programming language and what features at pre-training stage affect downstream logical inference performance? Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1008–1021.

Marco Valentino, Mokanarangan Thayaparan, and André Freitas. 2022. Case-based abductive natural language
inference. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1556–1568.

Xinyi Wang, Liangming Pan, and William Yang Wang. 2024. Logic-lm++: Multi-step refinement for symbolic formulations. In The Twelfth International Conference on Learning Representations.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, and Alexander Rush. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merriënboer, Armand Joulin, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In 4th International Conference on Learning Representations, ICLR 2016.

Yuan Xia, Akanksha Atrey, Fadoua Khmaissia, and Kedar S Namjoshi. 2025. Can large language models learn formal logic? a data-driven training and evaluation framework. arXiv preprint arXiv:2504.20213.

Siheng Xiong, Yuan Yang, Ali Payani, Ehsan Shareghi, and Faramarz Fekri. 2024. Strategies for improving nl-to-fol translation with llms: Data generation, incremental fine-tuning, and verification. arXiv preprint arXiv:2409.16461.

Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. 2025. Are large language models really good logical reasoners? a comprehensive evaluation and beyond. IEEE Transactions on Knowledge & Data Engineering, pages 1–15.

Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, and Faramarz Fekri. 2023a. Harnessing the power of large language models for natural language to first-order logic translation. arXiv preprint arXiv:2305.15541.

Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, and Erik Cambria. 2023b.
Logical reasoning over natural language as knowledge representation: A survey. In The 61st Annual Meeting of the Association for Computational Linguistics.

Xi Ye, Qiaochu Chen, Isil Dillig, and Greg Durrett. 2023. Satlm: Satisfiability-aided language models using declarative prompting. In Proceedings of NeurIPS.

Nathan Young, Qiming Bao, Joshua Bensemann, and Michael J Witbrock. 2022. Abductionrules: Training transformers to explain unexpected inputs. In Findings of the Association for Computational Linguistics: ACL 2022, pages 218–227.

Fei Yu, Hongbo Zhang, Prayag Tiwari, and Benyou Wang. 2024. Natural language reasoning, a survey. ACM Computing Surveys, 56(12):1–39.

Jing Zhang, Bo Chen, Lingxi Zhang, Xirui Ke, and Haipeng Ding. 2021. Neural, symbolic and neural-symbolic reasoning on knowledge graphs. AI Open, 2:14–35.

Yixin Zhao, Adina Williams, Emily Dinan, Mohit Bansal, Mark Yatskar, and Yejin Choi. 2021. Adversarial nli: A new benchmark for natural language understanding.

Victor Zhong, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi, and Noah A Smith. 2022. Analytical reasoning of text: Unifying machine reading and logical reasoning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2307–2323.

Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. 2021. Ar-lsat: Investigating analytical reasoning of text. arXiv e-prints, pages arXiv–2104.

A Related Work

A.1 Symbolic Solver Enhances LLM Reasoning

The integration of symbolic
solvers with large language models (LLMs) has emerged as a promising approach to enhance logical reasoning. Early efforts focused on translating natural language to first-order logic (FOL), exemplified by the creation of the MALLS dataset and the LogicLLaMA model, which demonstrated improved NL-to-FOL translation (Yang et al., 2023a). The Logic-LM framework further explored this direction by employing different formal languages and solvers tailored to specific reasoning tasks, such as FOL with Prover9, CSP solvers for constraint satisfaction, and Z3 for SMT problems (Pan et al., 2023). SATLM introduced declarative prompting to generate task specifications in logical formulas for SAT solvers (Ye et al., 2023), while LINC utilized LLMs for semantic parsing into FOL, offloading inference to theorem provers (Olausson et al., 2023). Subsequent research investigated strategies for improving NL-to-FOL translation through data generation and fine-tuning (Xiong et al., 2024), multi-step refinement of symbolic formulations (Wang et al., 2024), and the impact of pre-training data, including programming languages, on logical inference (Uchiyama et al., 2023). Frameworks like VERUS-LM aimed for versatility by supporting various reasoning tasks with a clear separation of knowledge and queries (Callewaert et al., 2025).

A.2 Complex Logical Reasoning Tasks

Evaluating the logical reasoning capabilities of LLMs necessitates challenging and diverse datasets that probe various aspects of inference. FOLIO, annotated with first-order logic, focuses on complex logical reasoning in natural language (Han et al., 2024). ProntoQA utilizes logic programming and emphasizes chain-of-thought reasoning (Saparov and He, 2023), while LogicBench covers propositional, first-order, and non-monotonic logic with a focus on single inference rules (Parmar et al., 2023).
BOARDGAMEQA assesses reasoning with contradictory information and preferences (Kazemi et al., 2023), and AR-LSAT tests analytical reasoning skills using logic constraints (Zhong et al., 2022). The BIG-Bench Hard (BBH) benchmark includes a wide array of challenging tasks, such as Boolean expressions, formal fallacies, logical deduction, shuffled objects, and web of lies (Suzgun et al., 2022). Other datasets like bAbI (Weston et al., 2015), CLUTRR (Sinha et al., 2019), α-NLI (Zhao et al., 2021), AbductiveRules (Bhagavatula et al., 2020), LogiQA (Liu et al., 2020), and gsm8k (Cobbe et al., 2021) target specific reasoning types such as deductive, inductive, abductive, temporal, spatial, and mathematical reasoning. The variety in these datasets and their annotations highlights the multifaceted nature of complex reasoning and the ongoing efforts to evaluate and enhance LLMs in this domain.

B Details of Datasets

Table 2 provides a comprehensive overview of all datasets used in our study. Each dataset is annotated with its reasoning type (Deductive, Inductive, Abductive, or Mixed-Form), along with the number of evaluation and training examples. We also include the original source for each dataset. The classification follows our taxonomy introduced in Section 2.1. In particular:

• Deductive datasets include tasks that require formal logical reasoning based on explicit rules or premises.

• Inductive datasets focus on pattern discovery and generalization from limited
examples.

• Abductive datasets involve generating plausible explanations under uncertainty.

• Mixed-Form includes tasks with hybrid or ambiguous reasoning types, further grouped into subcategories such as Temporal, NLU, Symbolic, Spatial, Knowledge, and Math.

Some datasets (e.g., BBH and bbeh) are split into finer task categories, each treated independently during evaluation. For large-scale datasets like GSM8K and MATH, we use a subset of examples (denoted by *) to maintain balance across task types. This dataset collection forms the foundation for our evaluation across models, trajectory formats, and reasoning types.

C Detail of Trajectory Format

We extend the unified trajectory formulation to three specific formal languages: Python (PoT), Z3, and CSP. Each trajectory consists of a sequence of symbolic steps, which are executed by an external engine to compute the final answer. We denote the model-generated trajectory as:

$$\mathrm{LLM}(Q) = \langle s_1, s_2, \ldots, s_n \rangle \xrightarrow{\mathrm{Exec}} A \quad (1)$$

where $Q$ is the input query, each $s_i$ is a step in a domain-specific language, and $A$ is the final answer produced by executing the trajectory.

Python (PoT) Trajectory

In the Python format, each step $s_i$ is a syntactically valid Python statement. The trajectory consists of variable assignments, arithmetic operations, control logic, and ends with a print(A) statement. The Python trajectory is formalized as:

$$\mathrm{LLM}_{\mathrm{Python}}(Q) = \langle \mathrm{stmt}_1, \mathrm{stmt}_2, \ldots, \mathrm{stmt}_n, \texttt{print}(A) \rangle \xrightarrow{\text{Python 3.12}} A \quad (2\text{--}4)$$

This trajectory is interpreted and executed sequentially using a Python 3.12 interpreter.

Z3 Trajectory

Inspired by Logic-LM (Pan et al., 2023), for Z3, the reasoning trajectory is constructed using the Z3 theorem prover. A typical trajectory includes symbolic variable declarations such as x = Int('x'), followed by logical assertions like s.add(x > 1, x < 5), and ends with solver calls s.check() and s.model() to extract a result. We represent the Z3 trajectory as:

$$\mathrm{LLM}_{\mathrm{Z3}}(Q) = \langle \mathrm{Declare}, \mathrm{Assert}_1, \ldots, \mathrm{Assert}_k, \mathrm{CheckSat}, \texttt{print}(A) \rangle \xrightarrow{\text{Z3 Solver}} A \quad (5\text{--}7)$$

Z3 supports a wide range of built-in logical theories, such as integer arithmetic, arrays, and bit-vectors.

CSP Trajectory

Constraint Satisfaction Problems (CSPs) are defined by a triple $(X, D, C)$, where $X = \{x_1, \ldots, x_n\}$ denotes variables, $D = \{D_1, \ldots, D_n\}$ their domains, and $C = \{C_1, \ldots, C_m\}$ the set of constraints. Each constraint $C_j = \langle t_j, R_j \rangle$ is defined over a subset of variables and a relation on their domains.

Type | Dataset | Eval | Train | Original Source
Deductive | FOLIO | 134 | 674 | Han et al. (2024)
Deductive | ProntoQA | 500 | 50818 | Han et al. (2024)
Deductive | LogicBench | 500 | 12908 | Parmar et al. (2023)
Deductive | BOARDGAMEQA | 14K | 750K | Kazemi et al. (2023)
Deductive | AR-LSAT | 230 | 1629 | Zhong et al. (2021)
Deductive | BBH (Boolean Expression) | 250 | - | Suzgun et al. (2022)
Deductive | bbeh (Boolean Expressions) | 200 | - | Kazemi et al. (2025)
Deductive | BBH (formal_fallacies) | 250 | - | Suzgun et al. (2022)
Deductive | bbeh (Zebra Puzzles) | 200 | - | Kazemi et al. (2025)
Deductive | BBH (logical_deductive_five_objects) | 250 | - | Suzgun et al. (2022)
Deductive | BBH (logical_deductive_seven_objects) | 250 | - | Suzgun et al. (2022)
Deductive | BBH (logical_deductive_three_objects) | 250 | - | Suzgun et al. (2022)
Deductive | bbeh (Boardgame QA) | 200 | - | Kazemi et al. (2025)
Deductive | BBH (tracking_shuffled_objects_five_objects) | 250 | - | Suzgun et al. (2022)
Deductive | BBH (tracking_shuffled_objects_seven_objects) | 250 | - | Suzgun et al. (2022)
Deductive | BBH (tracking_shuffled_objects_three_objects) | 250 | - | Suzgun et al. (2022)
Deductive | bbeh (Shuffled Objects) | 200 | - | Kazemi et al. (2025)
Deductive | BBH (web_of_lies) | 250 | - | Suzgun et al. (2022)
Deductive | bbeh (Web of Lies) | 200 | - | Kazemi et al. (2025)
Deductive | bAbI-15 | 1000 | 900 | Weston et al. (2016)
Deductive | NeuLR-deductive | 7001 | - | Xu et al. (2025)
Inductive | CLUTRR | 1042 | 2452 | Sinha et al. (2019)
Inductive | bAbI-16 | 1000 | 900 | Weston et al. (2016)
Inductive | NeuLR-inductive | 7001 | - | Xu et al. (2025)
Abductive | α-NLI | 3059 | 169k | Valentino et al. (2022)
Abductive | AbductiveRules | 2536 | 8848 | Young et al. (2022)
Abductive | BBH (causal_judgement) | 250 | - | Suzgun et al. (2022)
Abductive | bbeh (Causal Understanding) | 200 | - | Kazemi et al. (2025)
Abductive | NeuLR-abductive | 6001 | - | Xu et al. (2025)
Mixed-Form (Logical) | LogiQA | 1572 | - | Liu et al. (2021)
Mixed-Form (Temporal) | BBH (date_understanding) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Temporal) | bbeh (Time Arithmetic) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Temporal) | BBH (temporal_sequences) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Temporal) | bbeh (Temporal Sequences) | 200 | - | Kazemi et al. (2025)
Mixed-Form (NLU) | BBH (disambiguation_qa) | 250 | - | Suzgun et al. (2022)
Mixed-Form (NLU) | bbeh (Disambiguation QA) | 200 | - | Kazemi et al. (2025)
Mixed-Form (NLU) | BBH (hyperbaton) | 250 | - | Suzgun et al. (2022)
Mixed-Form (NLU) | bbeh (Hyperbaton) | 200 | - | Kazemi et al. (2025)
Mixed-Form (NLU) | BBH (ruin_names) | 250 | - | Suzgun et al. (2022)
Mixed-Form (NLU) | bbeh (New Yorker Cartoon Caption) | 200 | - | Kazemi et al. (2025)
Mixed-Form (NLU) | BBH (salient_translation_error_detection) | 250 | - | Suzgun et al. (2022)
Mixed-Form (NLU) | bbeh (Linguini) | 200 | - | Kazemi et al. (2025)
Mixed-Form (NLU) | BBH (snarks) | 250 | - | Suzgun et al. (2022)
Mixed-Form (NLU) | bbeh (SARC Triples) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Symbolic) | BBH (dyck_languages) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Symbolic) | bbeh (Dyck Language) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Symbolic) | BBH (word_sorting) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Symbolic) | bbeh (Word Sorting) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Space) | BBH (geometric_shapes) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Space) | bbeh (Geometric Shapes) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Space) | BBH (navigate) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Space) | bbeh (Spatial Reasoning) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Table) | BBH (penguins_in_a_table) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Table) | bbeh (Buggy Tables) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Knowledge) | BBH (movie_recommendation) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Knowledge) | bbeh (Movie Recommendation) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Knowledge) | BBH (sports_understanding) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Knowledge) | bbeh (SportQA) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Math) | GSM8K | 1319 | *8790 | Cobbe et al. (2021)
Mixed-Form (Math) | MATH | 5000 | *7500 | Hendrycks et al. (2021)
Mixed-Form (Math) | BBH (multistep_arithmetic_two) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Math) | bbeh (Multi-step Arithmetic) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Math) | BBH (object_counting) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Math) | bbeh (Object Counting) | 200 | - | Kazemi et al. (2025)
Mixed-Form (Math) | BBH (reasoning_about_colored_objects) | 250 | - | Suzgun et al. (2022)
Mixed-Form (Math) | bbeh (Object Properties) | 250 | - | Suzgun et al. (2022)

Table 2: Complex Logical Reasoning data categorization, data statistics, and sources.

The CSP trajectory is modeled as:

$$\mathrm{LLM}_{\mathrm{CSP}}(Q) = \langle \mathrm{AddVar}_1, \ldots, \mathrm{AddVar}_n, \mathrm{AddConst}_1, \ldots, \mathrm{AddConst}_m, \mathrm{GetSolution}, \texttt{print}(A) \rangle \xrightarrow{\text{python-constraint}} A \quad (8\text{--}9)$$

The execution uses the python-constraint solver.
Variables are added through addVariable(), constraints through addConstraint(), and solutions are obtained via getSolution() or getSolutions(). The solver applies standard algorithms such as backtracking and constraint propagation.

While Prover9-FOL supports classical first-order logic, we choose Z3 for its broader practical applicability. Z3 not only supports FOL reasoning but also natively handles richer theories such as integers, arrays, and linear arithmetic. This allows it to express a wider range of constraints found in real-world reasoning tasks.

D Implementation Setups

D.1 Evaluation Details

In the inference phase, we use the vLLM (Kwon et al., 2023) framework for deployment. The inference configuration adopts a greedy decoding strategy and sets the maximum generation length to 16K tokens. To evaluate model output, we adopt Qwen-2.5-72B-Instruct as the model evaluator for scoring.

D.2 Training Details

For the training implementation, we use Megatron-LM as the training framework with the following configurations: a cosine learning rate schedule with an initial learning rate of 1e-5, a warmup ratio of 0.03, and decay of the learning rate to 0; a maximum sequence length of 8192; a global batch size of 128; and 3 training epochs. All experiments are completed with Supervised Fine-tuning (SFT) on a computing cluster consisting of 32 NVIDIA A100 GPUs.

E Complete Results for Different Models

As shown in Table 3, we evaluated a total of 31 models across the three parts of this paper. Due to space constraints, we present the results of several representative models here: QwQ-32B (Table 4), GPT-4o (Table 5), Qwen2.5-7B (Table 6), and Qwen2.5-7B-Base w. Formal (Table 7). The complete results are provided in the supplementary files in Excel format.

F Case Study

F.1 Case for PART I

We give cases where Text (Fig. 6), PoT (Fig. 7), Z3 (Fig. 8), and CSP (Fig. 9) each excel, to show their respective strengths.
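To make the unified trajectory execution from Appendix C concrete, the following is a minimal sketch of how a PoT trajectory (a model-generated sequence of Python statements ending in a print of the answer) could be run to obtain the final answer A. The question and the generated program here are illustrative examples, not items from the evaluated benchmarks, and a real evaluation harness would need sandboxing around exec.

```python
import io
from contextlib import redirect_stdout

# Hypothetical model-generated PoT trajectory for the question:
# "Tom has 3 boxes with 12 apples each and gives away 7. How many remain?"
trajectory = """
boxes = 3
apples_per_box = 12
given_away = 7
answer = boxes * apples_per_box - given_away
print(answer)
"""

def execute_trajectory(code: str) -> str:
    """Run a generated Python trajectory and capture the printed answer A."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(code, {})  # a real harness would sandbox and time-limit this call
    return buf.getvalue().strip()

answer = execute_trajectory(trajectory)
print(answer)  # → 29
```

The same capture-and-execute pattern applies to the Z3 and CSP formats, with the Z3 solver or python-constraint engine taking the place of the plain interpreter.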
F.2 From Logic-LM Few-Shot Eval to Zero-Shot

Logic-LM uses a few-shot setting and rule extraction to build "task-specific executable code" for "a particular formal language" and "a particular task". We use zero-shot prompting directly for evaluation. As shown in the case study in Figure 10, both approaches behave similarly and can evaluate the model's formal language reasoning ability. Meanwhile, the zero-shot setting generalizes better, and this paper considers a subset of 66 datasets based on it.

G Prompts

For text, we use questions directly as input to the rubric. For formal languages, we use zero-shot reviews. The prompts are as follows: PoT in Figure 11; Z3 in Figure 12; CSP in Figure 13. The prompt for evaluating models is in Figure 14.

Section | Number | Model
PART-I | 4 Thinking-Model + 6 Instruct-Model = 10 | QwQ-32B; DeepSeek-R1-Distill-Llama-70B; DeepSeek-R1-Distill-Qwen-32B; DeepSeek-R1-Distill-Qwen-7B; GPT-4o; Qwen2.5-72B; Qwen2.5-32B; Llama-3.3-70B; Qwen2.5-7B; Llama-3.1-8B
PART-II | 3 (Deductive, Inductive, Abductive) + 4 (Text, PoT, Z3, CSP) = 7 | Qwen2.5-7B-Base.w. Deductive (+5653); Inductive (+4947); Abductive (+6557); Text (+7384); PoT (+7448); Z3 (+6882); CSP (+6346)
PART-II | 3 (Deductive, Inductive, Abductive) × 4 (Text, PoT, Z3, CSP) = 12 | Qwen2.5-7B-Base.w. Deductive_Text (+1376); Deductive_PoT (+1393); Deductive_Z3 (+1374); Deductive_CSP (+1510)
Qwen2.5-7B-Base.w. Inductive_Text (+1263); Inductive_PoT (+1476); Inductive_Z3 (+1166); Inductive_CSP (+1042); Abductive_Text (+1820); Abductive_PoT (+1775); Abductive_Z3 (+1667); Abductive_CSP (+1295)
PART-III | 1 Baseline-Model + 1 Formal Data Enhanced Model = 2 | Qwen2.5-7B-Baseline* (15k); Qwen2.5-7B-Base.w. Formal (+28060)
ALL | 31 | -

Table 3: Comprehensive Overview of Model Evaluation Experiments in the Entire Paper (Models in Bold Are Presented with Full Results Later). Parentheses after the model in PART-II indicate the corresponding amount of data. All data are based on the 15k generic data of Qwen2.5-7B-Baseline*, plus (+) the corresponding amount of our synthetic data.

Case Study for Text (bbeh_nycc)

The following is a description of a funny cartoon for the New Yorker Caption Contest. Description: Two trees are standing next to each other. One tree has a face and is looking at the other tree, which also has a face and is wearing a tie. Which of the following captions is the funniest?
(A) 2 birds are doing it in my branches. Hence, the tie on the doorknob.
(B) Next she's talking about my getting a trim and throwing out the squirrels.
(C) It's part of my fall look.
(D) It helps me feel more professional when I'm working from home.
(E) She dresses me up, but we never go anywhere!
(F) Sir, you can't come in here without a tie.
(G) I just wanted to dress up one more time before I go bald again this fall
(H) Well it's just a seasonal position.
(I) I got the job after the other guy got the axe.
(J) Funny story, but it did start initially with a yellow ribbon

Case Study for Text (bbeh_linguini)

The plural of Catalan nouns is usually formed by adding the ending -s. But if the noun ends in one of the letters s, x or ç, more complex rules apply.
Here are the singular and the plural forms of some Catalan nouns (in simplified spelling) and their English translations. Some forms are missing.

Singular | Plural | Translation
el apèndix | els apèndixs | appendix
el bastaix | (1) | carrier
el troleibús | (2) | trolleybus
el cactus | els cactus | cactus
la càries | les càries | caries
(3) | les clos | meadow
el contumaç | els contumaços | rebel
la faç | les façs | face
el flux | els fluxos | stream
el gimnàs | els gimnasos | gym
la hèlix | les hèlixs | screw
el índex | els índexs | index
el iris | (4) | rainbow
el llaç | (5) | loop
el ònix | (6) | onyx
el pàncrees | els pàncrees | pancreas
el pedaç | els pedaços | patch
la pelvis | (7) | pelvis
el permís | els permisos | permission
el pis | (8) | flat, lodgings
(9) | els sequaços | (male) follower
(10) | les sequaçs | (female) follower
el sufix | els sufixos | suffix
(11) | els tastaolletes | frivolous person
el teix | els teixos | yew
la trencadís | les trencadís | piece of crockery
el vas | els vasos | vase
la xeix |
(12) | (the letter) x

Fill in the masked element indicated with (5), assuming the (1), (2), ... elements are masked out elements.

Figure 6: Case for Text. These issues are closely related to semantic understanding. They are better suited for textual representation and are difficult to address using formalized languages. Return to section 3.2.

Case Study for PoT (BBH_dyck_language)

Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: < ( ( ( < > ) ) ( { { } [ { } ] [ ] < ( ) > } ) )

GPT-4o Text Answer: To correctly close a Dyck-n word, we need to ensure that all types of brackets are properly matched and closed in the correct order. The given sequence is:
```
< ( ( ( < > ) ) ( { { } [ { } ] [ ] < ( ) > } ) )
```
Let's break it down step by step:
1. `<` is opened and closed with `>`.
2. `(` is opened three times and needs to be closed three times.
3. `{` is opened twice and needs to be closed twice.
4. `[` is opened twice and needs to be closed twice.
5. `<` is opened and closed with `>`.
6. `(` is opened and closed with `)`.
The sequence is already correctly closed. Therefore, no additional characters are needed.
The sequence is:
```
< ( ( ( < > ) ) ( { { } [ { } ] [ ] < ( ) > } ) )
```
This sequence is a valid Dyck-n word as all brackets are properly matched and closed. ❌

GPT-4o PoT Answer:
```python
def complete_dyck_word(sequence):
    # Stack to keep track of opening brackets
    stack = []
    # Dictionary to match opening and closing brackets
    matching_bracket = {'(': ')', '{': '}', '[': ']', '<': '>'}
    # Iterate over the sequence
    for char in sequence:
        if char in matching_bracket:
            # If it's an opening bracket, push to stack
            stack.append(char)
        elif char in matching_bracket.values():
            # If it's a closing bracket, pop from stack
            if stack and matching_bracket[stack[-1]] == char:
                stack.pop()
            else:
                # If there's a mismatch, return an error
                return "Error: Mismatched brackets"
    # Complete the sequence by closing all remaining open brackets
    completed_sequence = sequence
    while stack:
        completed_sequence += matching_bracket[stack.pop()]
    return completed_sequence

# Input sequence
input_sequence = "<((<>)({{}[{}][]<()>}))"
result = complete_dyck_word(input_sequence)
print(result)
```
Output: <((<>)({{}[{}][]<()>}))> ✅

Figure 7: Case for PoT. Symbolic reasoning tasks are inherently well-suited to programming languages, and such problems may in fact originate from coding algorithm questions. Return to section 3.2.

Case Study for Z3 (BBH_formal_fallacies)

"It is not always easy to see which chemicals are contained in our consumer products. The following argument pertains to this question: First, vitamin A is an ingredient of LIQUID EYELINER. Second, every ingredient of BC Eye Cream 15 ml is not an ingredient of Mango Lip Butter or not an ingredient of LIQUID EYELINER. Therefore, it is not the case that Vitamin A is an ingredient
of BC Eye Cream 15 ml." Is the argument, given the explicitly stated premises, deductively valid or invalid?
Options:
- valid
- invalid

GPT-4o Text Answer: To determine whether the argument is deductively valid, we need to analyze the logical structure of the premises and the conclusion. The argument is structured as follows:
1. Premise 1: Vitamin A is an ingredient of LIQUID EYELINER.
2. Premise 2: Every ingredient of BC Eye Cream 15 ml is not an ingredient of Mango Lip Butter or not an ingredient of LIQUID EYELINER.
3. Conclusion: Therefore, it is not the case that Vitamin A is an ingredient of BC Eye Cream 15 ml.
To assess the validity, we need to see if the conclusion necessarily follows from the premises. Premise 2 can be restated in logical terms: for any ingredient x, if x is an ingredient of BC Eye Cream 15 ml, then x is not an ingredient of Mango Lip Butter or x is not an ingredient of LIQUID EYELINER. Given Premise 1, Vitamin A is an ingredient of LIQUID EYELINER. For the conclusion to be true, Vitamin A must not be an ingredient of BC Eye Cream 15 ml. According to Premise 2, if Vitamin A were an ingredient of BC Eye Cream 15 ml, then it would have to be either not an ingredient of Mango Lip Butter or not an ingredient of LIQUID EYELINER. Since we know from Premise 1 that Vitamin A is an ingredient of LIQUID EYELINER, the only way for Premise 2 to hold true is if Vitamin A is not an ingredient of BC Eye Cream 15 ml. Thus, the conclusion logically follows from the premises, making the argument deductively valid.
- valid ❌

GPT-4o Z3 Answer:
```python
from z3 import *

# Define the variables
VitaminA = Bool('VitaminA')
BC_Eye_Cream = Bool('BC_Eye_Cream')
Mango_Lip_Butter = Bool('Mango_Lip_Butter')
Liquid_Eyeliner = Bool('Liquid_Eyeliner')

# Premises
# 1. Vitamin A is an ingredient of LIQUID EYELINER
premise1 = VitaminA == Liquid_Eyeliner

# 2. Every ingredient of BC Eye Cream 15 ml
#    is not an ingredient of Mango Lip Butter or not
#    an ingredient of LIQUID EYELINER
premise2 = Implies(BC_Eye_Cream,
                   Or(Not(Mango_Lip_Butter), Not(Liquid_Eyeliner)))

# Conclusion
# It is not the case that Vitamin A is an ingredient of BC Eye Cream 15 ml
conclusion = Not(VitaminA == BC_Eye_Cream)

# Check if the premises imply the conclusion
s = Solver()
s.add(premise1, premise2, Not(conclusion))
if s.check() == sat:
    print("invalid")
else:
    print("valid")
```
The final answer is: invalid ✅

Figure 8: Case for Z3. Z3 (which, in this context, incorporates the first-order logic reasoning capabilities of Prover9) excels at solving formal first-order logic problems. Return to section 3.2.

Case Study for CSP (AR-LSAT)

Context: On each of exactly seven consecutive days (day 1 through day 7), a pet shop features exactly one of three breeds of kitten (Himalayan, Manx, Siamese) and exactly one of three breeds of puppy (Greyhound, Newfoundland, Rottweiler). The following conditions must apply: Greyhounds are featured on day
1. No breed is featured on any two consecutive days. Any breed featured on day 1 is not featured on day 7. Himalayans are featured on exactly three days, but not on day 1. Rottweilers are not featured on day 7, nor on any day that features Himalayans. Question: If Himalayans ar e not featur ed on day 2, which one of the following could be true? Options: A: Manx ar e featur ed on day 3. \nB: Siamese ar e featur ed on day 4. \nC: Rottweilers ar e featur ed on day 5. D: Himalayans ar e featur ed on day 6. \nE: Gr eyhounds ar e featur ed on day 7. GPT-4o T ext Answer: To solve this problem, we need to analyze the constraints and determine which option could be true if Himalayans are not featured on day 2. Given constraints: 1. Greyhounds are featured on day 1. 2. No breed is featured on any two consecutive days. 3. Any breed featured on day 1 is not featured on day 7. 4. Himalayans are featured on exactly three days, but not on day 1. 5. Rottweilers are not featured on day 7, nor on any day that features Himalayans. Let's break down the possibilities: Since Greyhounds are featured on day 1, they cannot be featured on day 7. Himalayans are featured on exactly three days, but not on day 1 or day 2 (as per the question). Rottweilers cannot be featured on any day that f eatures Himalayans, and they cannot be featured on day 7. Now , let's consider each option: ... B: Siamese are featured on day 4. This is possible. If Himalayans are on days 3, 5, and 6, Siamese can be on day 4. C: Rottweilers are featured on day 5. This is not possible because if Himalayans are on day 5, Rottweilers cannot be on the same day . D: Himalayans are featured on day 6. - This is possible. Himalayans can be on days 3, 5, and 6. E: Greyhounds are featured on day 7. - This is not possible because Greyhounds cannot be on day 7 as they are on day 1. Therefore, the options that could be true are A, B, and D. 
However , since the question asks for one option, and considering the constraints, option D: "Himalayans are featured on day 6" is a straightforward possibility given the conditions.GPT-4o CSP Answer: from constraint import * # Initialize the problem problem = Problem() # Define the variables for each day (1 to 7) for kittens and puppies days = range(1, 8) kitten_breeds = ['Himalayan', 'Manx', 'Siamese'] puppy_breeds = ['Greyhound', 'Newfoundland', 'Rottweiler'] # Add variables for kittens and puppies for each day for day in days: problem.addV ariable(f'kitten_day_{day}', kitten_breeds) problem.addV ariable(f'puppy_day_{day}', puppy_breeds) # Add constraints based on the pr oblem statement # Greyhounds are featured on day 1 problem.addConstraint(lambda p: p == 'Greyhound', ('puppy_day_1',)) # No breed is featured on any two consecutive days for day in range(1, 7): problem.addConstraint(lambda k1, k2: k1 != k2, \n(f'kitten_day_{day}', f'kitten_day_{day+1}')) problem.addConstraint(lambda p1, p2: | https://arxiv.org/abs/2505.16998v1 |
p1 != p2, \n(f'puppy_day_{day}', f'puppy_day_{day+1}')) # Any breed featured on day 1 is not featured on day 7 problem.addConstraint(lambda k1, k7: k1 != k7, \n('kitten_day_1', 'kitten_day_7')) problem.addConstraint(lambda p1, p7: p1 != p7, ('puppy_day_1', 'puppy_day_7')) # Himalayans are featured on exactly three days, but not on day 1 problem.addConstraint(lambda k: k != 'Himalayan', ('kitten_day_1',)) # Custom constraint to ensure Himalayans are featured on exactly three days def himalayan_count(*ar gs): return ar gs.count('Himalayan') == 3 problem.addConstraint(himalayan_count, ('kitten_day_2', 'kitten_day_3',\n 'kitten_day_4', 'kitten_day_5', 'kitten_day_6', 'kitten_day_7')) # Rottweilers are not featured on day 7, nor on any day that features Himalayans problem.addConstraint(lambda p: p != 'Rottweiler', ('puppy_day_7',)) for day in days: problem.addConstraint(lambda k, p: not (k == 'Himalayan' and p == 'Rottweiler'), (f'kitten_day_{day}', f'puppy_day_{day}')) # Additional condition: Himalayans are not featured on day 2 problem.addConstraint(lambda k: k != 'Himalayan', ('kitten_day_2',)) # Solve the problem and print the results solutions = problem.getSolutions() # Check which option could be true for solution in solutions: if solution['kitten_day_3'] == 'Manx': print("A: Manx are featured on day 3.") if solution['kitten_day_4'] == 'Siamese': print("B: Siamese are featured on day 4.") if solution['puppy_day_5'] == 'Rottweiler': print("C: Rottweilers are featured on day 5.") if solution['kitten_day_6'] == 'Himalayan': print("D: Himalayans are featured on day 6.") if solution['puppy_day_7'] == 'Greyhound': print("E: Greyhounds are featured on day 7.")❌ Output: B: Siamese are featured on day 4. ✅Figure 9: Case for CSP. The problems in ARLSAT involve numerous conditional constraints, which closely resemble the modeling approach used in Constraint Satisfaction Problems (CSPs). 
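The solver's verdict for this puzzle can be cross-checked without the python-constraint library by brute-force enumeration over the 3^7 possible kitten and puppy schedules. This is our own sketch using only the standard library; the helper names and encoding are ours, not from the paper:

```python
from itertools import product

KITTENS = ('Himalayan', 'Manx', 'Siamese')
PUPPIES = ('Greyhound', 'Newfoundland', 'Rottweiler')

def kittens_ok(k):
    # No kitten breed on two consecutive days; day-1 breed not repeated on day 7;
    # Himalayans on exactly three days, but not on day 1 or (per the question) day 2.
    return (all(k[i] != k[i + 1] for i in range(6)) and k[0] != k[6]
            and k.count('Himalayan') == 3
            and k[0] != 'Himalayan' and k[1] != 'Himalayan')

def puppies_ok(p):
    # Greyhounds on day 1; no puppy breed on two consecutive days;
    # day-1 breed not repeated on day 7; no Rottweilers on day 7.
    return (p[0] == 'Greyhound' and all(p[i] != p[i + 1] for i in range(6))
            and p[0] != p[6] and p[6] != 'Rottweiler')

solutions = [(k, p)
             for k in product(KITTENS, repeat=7) if kittens_ok(k)
             for p in product(PUPPIES, repeat=7) if puppies_ok(p)
             # Rottweilers never share a day with Himalayans
             if all(not (k[d] == 'Himalayan' and p[d] == 'Rottweiler')
                    for d in range(7))]

# Option B (Siamese on day 4) holds in some solution; option D (Himalayans on
# day 6) holds in none, because the non-consecutive Himalayan days are forced
# to be exactly {3, 5, 7}.
print(any(k[3] == 'Siamese' for k, _ in solutions))    # option B -> True
print(any(k[5] == 'Himalayan' for k, _ in solutions))  # option D -> False
```

The enumeration also makes the text answer's mistake visible: GPT-4o placed Himalayans on days 3, 5, and 6, but days 5 and 6 are consecutive, which the constraints forbid.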
Return to section 3.2

Case Study for Few-Shot Formal-Eval in Logic-LM → Our Zero-Shot Evaluation (AR-LSAT)

Context: On each of exactly seven consecutive days (day 1 through day 7), a pet shop features exactly one of three breeds of kitten—Himalayan, Manx, Siamese—and exactly one of three breeds of puppy—Greyhound, Newfoundland, Rottweiler. The following conditions must apply: Greyhounds are featured on day 1. No breed is featured on any two consecutive days. Any breed featured on day 1 is not featured on day 7. Himalayans are featured on exactly three days, but not on day 1. Rottweilers are not featured on day 7, nor on any day that features Himalayans.

Question: If Himalayans are not featured on day 2, which one of the following could be true?
Options:
A: Manx are featured on day 3.
B: Siamese are featured on day 4.
C: Rottweilers are featured on day 5.
D: Himalayans are featured on day 6.
E: Greyhounds are featured on day 7.

GPT-4o CSP Answer:

```python
from constraint import *

# Initialize the problem
problem = Problem()

# Define the variables for each day (1 to 7) for kittens and puppies
days = range(1, 8)
kitten_breeds = ['Himalayan', 'Manx', 'Siamese']
puppy_breeds = ['Greyhound', 'Newfoundland', 'Rottweiler']

# Add variables for kittens and puppies for each day
for day in days:
    problem.addVariable(f'kitten_day_{day}', kitten_breeds)
    problem.addVariable(f'puppy_day_{day}', puppy_breeds)

# Add constraints based on the problem statement
# Greyhounds are featured on day 1
problem.addConstraint(lambda p: p == 'Greyhound', ('puppy_day_1',))

# No breed is featured on any two consecutive days
for day in range(1, 7):
    problem.addConstraint(lambda k1, k2: k1 != k2,
                          (f'kitten_day_{day}', f'kitten_day_{day+1}'))
    problem.addConstraint(lambda p1, p2: p1 != p2,
                          (f'puppy_day_{day}', f'puppy_day_{day+1}'))

# Any breed featured on day 1 is not featured on day 7
problem.addConstraint(lambda k1, k7: k1 != k7, ('kitten_day_1', 'kitten_day_7'))
problem.addConstraint(lambda p1, p7: p1 != p7, ('puppy_day_1', 'puppy_day_7'))

# Himalayans are featured on exactly three days, but not on day 1
problem.addConstraint(lambda k: k != 'Himalayan', ('kitten_day_1',))

# Custom constraint to ensure Himalayans are featured on exactly three days
def himalayan_count(*args):
    return args.count('Himalayan') == 3
problem.addConstraint(himalayan_count, ('kitten_day_2', 'kitten_day_3',
                                        'kitten_day_4', 'kitten_day_5',
                                        'kitten_day_6', 'kitten_day_7'))

# Rottweilers are not featured on day 7, nor on any day that features Himalayans
problem.addConstraint(lambda p: p != 'Rottweiler', ('puppy_day_7',))
for day in days:
    problem.addConstraint(lambda k, p: not (k == 'Himalayan' and p == 'Rottweiler'),
                          (f'kitten_day_{day}', f'puppy_day_{day}'))

# Additional condition: Himalayans are not featured on day 2
problem.addConstraint(lambda k: k != 'Himalayan', ('kitten_day_2',))

# Solve the problem and print the results
solutions = problem.getSolutions()

# Check which option could be true
for solution in solutions:
    if solution['kitten_day_3'] == 'Manx':
        print("A: Manx are featured on day 3.")
    if solution['kitten_day_4'] == 'Siamese':
        print("B: Siamese are featured on day 4.")
    if solution['puppy_day_5'] == 'Rottweiler':
        print("C: Rottweilers are featured on day 5.")
    if solution['kitten_day_6'] == 'Himalayan':
        print("D: Himalayans are featured on day 6.")
    if solution['puppy_day_7'] == 'Greyhound':
        print("E: Greyhounds are featured on day 7.")
```

A case from the original Logic-LM paper.
Logic-LM uses few-shots and rule extraction to build executable code for a "specific formal language" and a "specific task". We use zero-shot directly for evaluation. As can be seen from the cases, both approaches behave similarly and both can effectively evaluate the formal language reasoning capability of the model. Meanwhile, zero-shot has better generalization, and this paper evaluates a subset of 66 data based on it.

Figure 10: Case Study for Few-Shot Formal-Eval in Logic-LM → Our Zero-Shot Evaluation (AR-LSAT).

Dataset Text(ACC) PoT(ACC Exec_Rate) Z3(ACC Exec_Rate) CSP(ACC Exec_Rate) AVG(ACC Exec_Rate) Average 75.0 68.6 85.1 61.9 79.4 65.1 82.2 67.6 82.2 FOLIO 94.0 94.0 100.0 91.0 99.3 94.0 100.0 93.3 99.8 ProntoQA 99.6 97.8 100.0 99.4 100.0 98.8 100.0 98.9 100.0 logicbenchBQA 82.9 85.6 100.0 86.4 100.0 85.3 100.0 85.1 100.0 BoardgameQA 78.5 79.3 99.9 75.1 100.0 69.6 100.0 75.6 100.0 ARLSAT 92.2 91.3 100.0 83.0 97.0 89.1 100.0 88.9 99.0 BBH_boolean_expressions 96.4 98.8 100.0 94.8 100.0 99.2 100.0 97.3 100.0 bbeh_boolean_expressions 57.0 41.5 53.5 30.0 36.0 42.5 58.5 42.8 49.3 BBH_formal_fallacies 100.0 99.2 100.0 99.6 99.6 98.8 100.0 99.4 99.9 bbeh_zebra_puzzles 44.5 15.5 35.5 2.5 5.5 8.9 11.4 17.9 17.5 BBH_logical_deduction_five_objects 100.0 100.0 100.0 99.6 100.0 98.0 100.0 99.4
100.0 BBH_logical_deduction_seven_objects 99.2 99.6 100.0 100.0 100.0 100.0 100.0 99.7 100.0 BBH_logical_deduction_three_objects 100.0 100.0 100.0 99.2 99.6 99.2 100.0 99.6 99.9 bbeh_boardgame_qa 54.5 55.0 99.0 35.0 73.0 49.5 87.5 48.5 86.5 BBH_tracking_shuffled_objects_five_objects 100.0 100.0 100.0 98.8 99.6 98.0 100.0 99.2 99.9 BBH_tracking_shuffled_objects_seven_objects 100.0 100.0 100.0 96.8 100.0 99.2 100.0 99.0 100.0 BBH_tracking_shuffled_objects_three_objects 100.0 100.0 100.0 100.0 100.0 99.2 100.0 99.8 100.0 bbeh_shuffled_objects 41.5 0.5 2.0 3.5 10.0 3.0 9.5 12.1 7.2 BBH_web_of_lies 92.8 98.8 100.0 98.0 99.6 99.2 100.0 97.2 99.9 bbeh_web_of_lies 58.0 37.5 43.5 12.0 17.0 21.5 24.5 32.3 28.3 bAbI15 99.3 92.8 100.0 84.1 98.1 92.6 99.9 92.2 99.3 NeuLRdeductive 99.9 97.3 100.0 80.9 98.2 95.8 100.0 93.5 99.4 clutrr 78.8 73.3 100.0 60.1 94.2 71.0 98.7 70.8 97.6 bAbI16 85.5 91.8 100.0 92.1 100.0 89.7 100.0 89.8 100.0 NeuLRinductive 76.3 73.3 99.9 90.1 99.6 80.7 99.8 80.1 99.8 anli 86.8 86.9 100.0 81.3 99.9 85.9 99.9 85.2 99.9 AbductionRules 68.8 71.5 100.0 45.5 98.8 62.8 94.0 62.2 97.6 BBH_causal_judgement 64.2 64.7 100.0 59.4 100.0 64.2 100.0 63.1 100.0 bbeh_causal_understanding 62.0 53.5 99.5 46.5 90.5 49.0 94.5 52.8 94.8 NeuLRabductive 26.0 26.9 99.9 9.9 95.7 15.1 94.1 19.5 96.6 logicqa 86.5 82.9 100.0 77.9 99.6 80.5 99.9 82.0 99.8 BBH_date_understanding 96.8 94.8 100.0 88.0 98.8 89.6 100.0 92.3 99.6 bbeh_time_arithmetic 86.5 79.5 87.5 42.5 50.5 61.5 72.5 67.5 70.2 BBH_temporal_sequences 100.0 99.6 100.0 91.6 99.2 97.2 99.2 97.1 99.5 bbeh_temporal_sequence 52.5 0.0 0.5 0.0 0.0 1.0 1.5 13.4 0.7 BBH_disambiguation_qa 48.0 54.0 100.0 38.8 100.0 46.4 100.0 46.8 100.0 bbeh_disambiguation_qa 58.3 50.8 97.5 40.8 82.5 51.7 86.7 50.4 88.9 BBH_hyperbaton 100.0 100.0 100.0 99.6 100.0 96.4 100.0 99.0 100.0 bbeh_hyperbaton 38.0 26.5 56.5 19.0 35.0 18.0 52.0 25.4 47.8 BBH_ruin_names 80.0 84.4 100.0 78.4 96.4 82.8 100.0 81.4 98.8 bbeh_nycc 15.0 8.5 72.0 11.5 82.5 10.5 75.0 
11.4 76.5 BBH_salient_translation_error_detection 76.8 74.8 100.0 76.0 99.6 75.6 99.6 75.8 99.7 bbeh_linguini 46.0 22.0 79.5 24.5 65.5 17.5 51.0 27.5 65.3 BBH_snarks 93.8 91.6 98.9 90.5 100.0 92.1 100.0 92.0 99.6 bbeh_sarc_triples 32.0 39.0 100.0 25.5 71.5 19.0 83.0 28.9 84.8 BBH_dyck_languages 91.6 73.6 83.2 72.4 92.0 84.0 98.8 80.4 91.3 bbeh_dyck_languages 49.0 30.0 79.5 22.0 69.0 28.0 67.5 32.3 72.0 BBH_word_sorting 98.8 100.0 100.0 20.4 21.2 79.2 88.0 74.6 69.7 bbeh_word_sorting 77.0 77.5 92.5 63.5 76.0 40.0 62.0 64.5 76.8 BBH_geometric_shapes 80.0 80.8 100.0 82.0 99.2 78.4 100.0 80.3 99.7 bbeh_geometric_shapes 40.5 9.0 19.5 12.5 25.5 14.5 32.0 19.1 25.7 BBH_navigate 97.6 98.4 100.0 93.6 99.6 95.2 99.6 96.2 99.7 bbeh_spatial_reasoning 43.0 40.5 53.0 32.5 48.0 30.5 47.5 36.6 49.5 BBH_penguins_in_a_table 99.3 99.3 100.0 95.2 95.2 99.3 100.0 98.3 98.4 bbeh_buggy_tables 25.0 16.5 28.5 8.0 16.0 3.5 15.0 13.3 19.8 BBH_movie_recommendation 70.0 70.0 100.0 63.6 97.6 65.2 100.0 67.2 99.2 bbeh_movie_recommendation 59.5 40.0 79.0 28.5 56.5 22.5 47.5 37.6 61.0 BBH_sports_understanding 81.6 80.4 100.0 77.2 99.6 80.0 100.0 79.8 99.9 bbeh_sportqa 53.0 15.0 23.0 17.0 34.0 17.5 34.5 25.6 30.5 gsm8k 96.4 96.7 99.9 93.2 98.8 94.5 99.5 95.2 99.4 MATH 96.5 93.1 99.4 70.7 88.3 87.1 99.0 86.9 95.6 BBH_multistep_arithmetic_two 100.0 100.0 100.0 100.0 100.0 99.6 99.6 99.9 99.9 bbeh_multistep_arithmetic | https://arxiv.org/abs/2505.16998v1 |
53.0 38.0 40.5 20.5 29.5 22.7 32.8 33.6 34.3 BBH_object_counting 100.0 100.0 100.0 98.8 99.6 98.4 99.6 99.3 99.7 bbeh_object_counting 58.0 31.0 85.5 40.5 71.0 22.5 61.0 38.0 72.5 BBH_reasoning_about_colored_objects 100.0 98.8 100.0 95.6 99.2 94.4 99.2 97.2 99.5 bbeh_object_properties 31.5 5.5 7.0 17.5 29.5 39.0 51.5 23.4 29.3 Table 4: QwQ-32B Full Result. 21 DatasetText PoT Z3 CSP A VG ACC ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate Average 66.7 63.5 91.5 54.5 87.4 52.8 84.0 59.4 87.6 FOLIO 92.5 88.1 100.0 73.9 88.8 67.2 98.5 80.4 95.8 ProntoQA 100.0 95.8 100.0 80.2 99.8 93.2 99.4 92.3 99.7 logicbenchBQA 72.3 70.5 99.8 76.3 100.0 63.3 99.8 70.6 99.9 BoardgameQA 59.7 66.0 100.0 63.5 97.9 60.2 98.1 62.4 98.7 ARLSAT 40.9 50.4 90.4 59.1 83.0 67.8 85.2 54.6 86.2 BBH_boolean_expressions 99.6 100.0 100.0 89.2 100.0 76.4 96.4 91.3 98.8 bbeh_boolean_expressions 59.5 51.5 55.0 1.5 2.0 56.5 60.0 42.3 39.0 BBH_formal_fallacies 88.4 76.4 100.0 90.4 100.0 62.0 99.6 79.3 99.9 bbeh_zebra_puzzles 38.0 6.5 19.0 8.5 49.0 3.0 4.0 14.0 24.0 BBH_logical_deduction_five_objects 93.2 94.0 99.6 87.6 99.6 96.4 100.0 92.8 99.7 BBH_logical_deduction_seven_objects 88.8 88.0 100.0 84.8 100.0 96.8 100.0 89.6 100.0 BBH_logical_deduction_three_objects 99.2 95.2 100.0 92.8 99.2 99.6 100.0 96.7 99.7 bbeh_boardgame_qa 37.0 35.5 90.5 37.5 90.0 24.5 65.0 33.6 81.8 BBH_tracking_shuffled_objects_five_objects 98.4 100.0 100.0 82.0 100.0 36.8 81.2 79.3 93.7 BBH_tracking_shuffled_objects_seven_objects 100.0 99.6 100.0 82.4 100.0 41.6 75.2 80.9 91.7 BBH_tracking_shuffled_objects_three_objects 100.0 100.0 100.0 55.6 100.0 53.6 74.4 77.3 91.5 bbeh_shuffled_objects 29.5 59.0 83.5 36.0 77.5 23.5 49.0 37.0 70.0 BBH_web_of_lies 96.4 91.2 100.0 96.4 100.0 96.4 100.0 95.1 100.0 bbeh_web_of_lies 33.5 11.0 51.5 11.0 20.5 11.5 14.5 16.8 28.8 bAbI15 99.6 98.7 100.0 76.2 97.9 95.9 99.8 92.6 99.2 NeuLRdeductive 99.8 97.0 100.0 55.2 93.9 87.2 97.8 84.8 97.2 clutrr 52.7 44.2 100.0 44.6 95.7 35.6 84.4 44.3 93.4 
bAbI16 51.8 93.4 100.0 64.4 98.8 44.1 100.0 63.4 99.6 NeuLRinductive 60.3 41.2 100.0 21.1 99.3 7.9 97.2 32.6 98.8 anli 88.8 87.6 100.0 73.4 99.9 81.6 100.0 82.9 100.0 AbductionRules 88.5 86.6 100.0 84.3 100.0 41.2 63.8 75.2 87.9 BBH_causal_judgement 69.0 73.8 100.0 61.0 100.0 64.7 100.0 67.1 100.0 bbeh_causal_understanding 52.0 52.5 100.0 50.0 99.0 44.5 96.5 49.8 98.5 NeuLRabductive 29.0 15.0 98.4 19.8 92.6 5.2 88.4 17.3 93.1 logicqa 76.0 73.2 99.6 61.7 97.8 72.7 98.6 70.9 98.7 BBH_date_understanding 94.0 82.0 100.0 70.4 98.8 84.4 100.0 82.7 99.6 bbeh_time_arithmetic 63.5 43.5 74.0 36.0 66.0 35.0 73.5 44.5 71.2 BBH_temporal_sequences 99.6 89.2 99.6 64.4 98.0 98.0 99.6 87.8 99.1 bbeh_temporal_sequence 5.5 2.0 87.5 1.0 52.0 2.5 81.5 2.8 73.7 BBH_disambiguation_qa 53.6 50.0 100.0 38.0 100.0 36.8 100.0 44.6 100.0 bbeh_disambiguation_qa 63.3 54.2 98.3 44.2 96.7 70.8 93.3 58.1 96.1 BBH_hyperbaton 92.8 88.4 100.0 94.8 99.6 92.0 99.2 92.0 99.6 bbeh_hyperbaton 30.5 13.0 87.0 28.0 90.0 17.0 42.5 22.1 73.2 BBH_ruin_names 86.4 86.0 100.0 80.8 98.4 84.4 99.2 84.4 99.2 bbeh_nycc 21.5 11.5 77.5 3.5 39.0 7.0 79.0 10.9 65.2 BBH_salient_translation_error_detection 73.2 80.0 100.0 84.4 100.0 76.8 100.0 78.6 100.0 bbeh_linguini 35.0 25.5 97.0 23.0 92.0 20.0 73.0 25.9 87.3 BBH_snarks 89.9 86.5 100.0 75.3 100.0 74.7 100.0 81.6 100.0 bbeh_sarc_triples 30.0 | https://arxiv.org/abs/2505.16998v1 |
37.5 97.5 17.5 45.5 35.5 85.0 30.1 76.0 BBH_dyck_languages 90.4 94.4 100.0 75.2 98.8 35.2 76.0 73.8 91.6 bbeh_dyck_languages 17.5 6.5 86.5 7.5 96.0 10.0 88.0 10.4 90.2 BBH_word_sorting 97.6 99.2 100.0 41.2 94.0 71.6 84.0 77.4 92.7 bbeh_word_sorting 46.5 53.0 89.5 43.5 83.5 30.0 61.5 43.3 78.2 BBH_geometric_shapes 71.2 77.2 99.6 83.2 100.0 77.6 100.0 77.3 99.9 bbeh_geometric_shapes 38.5 16.0 65.0 35.0 80.5 33.5 69.5 30.8 71.7 BBH_navigate 99.2 98.0 100.0 78.4 100.0 74.4 98.0 87.5 99.3 bbeh_spatial_reasoning 11.5 19.0 81.0 13.5 80.5 14.5 72.0 14.6 77.8 BBH_penguins_in_a_table 98.6 100.0 100.0 96.6 99.3 82.2 97.3 94.4 98.9 bbeh_buggy_tables 21.0 19.0 49.5 16.5 54.0 19.5 40.0 19.0 47.8 BBH_movie_recommendation 77.2 63.6 100.0 67.2 99.2 76.8 99.6 71.2 99.6 bbeh_movie_recommendation 60.5 26.0 99.5 34.0 85.0 27.5 85.0 37.0 89.8 BBH_sports_understanding 86.8 87.6 100.0 54.4 100.0 60.8 100.0 72.4 100.0 bbeh_sportqa 29.5 58.5 100.0 23.5 89.5 26.0 85.5 34.4 91.7 gsm8k 96.7 94.6 98.8 89.3 98.3 87.6 99.3 92.1 98.8 MATH 81.3 69.3 88.7 57.9 91.2 57.7 83.5 66.6 87.8 BBH_multistep_arithmetic_two 98.8 100.0 100.0 81.2 99.2 63.6 100.0 85.9 99.7 bbeh_multistep_arithmetic 27.0 1.5 23.0 1.0 13.5 1.0 16.0 7.6 17.5 BBH_object_counting 95.6 99.6 100.0 98.8 100.0 88.0 100.0 95.5 100.0 bbeh_object_counting 16.0 14.0 98.5 11.0 97.5 13.0 97.0 13.5 97.7 BBH_reasoning_about_colored_objects 97.6 97.6 100.0 94.4 98.8 89.2 99.6 94.7 99.5 bbeh_object_properties 10.5 5.5 50.5 9.5 52.5 1.0 38.5 6.6 47.2 Table 5: GPT-4o Full Result. 
22 DatasetText PoT Z3 CSP A VG ACC ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate Average 52.3 36.9 78.6 33.0 70.0 24.8 52.1 36.7 66.9 FOLIO 88.8 85.1 100.0 59.7 87.3 59.0 84.3 73.1 90.5 ProntoQA 99.4 83.4 98.6 57.2 87.6 38.4 54.8 69.6 80.3 logicbenchBQA 71.6 52.1 100.0 39.4 98.5 37.8 79.8 50.2 92.8 BoardgameQA 54.3 52.8 99.1 38.2 89.5 29.2 73.8 43.6 87.5 ARLSAT 25.2 36.5 93.9 22.2 51.3 8.7 22.6 23.2 55.9 BBH_boolean_expressions 97.6 99.6 100.0 45.2 71.6 50.8 95.2 73.3 88.9 bbeh_boolean_expressions 70.5 1.5 1.5 0.5 0.5 10.0 10.5 20.6 4.2 BBH_formal_fallacies 69.6 50.8 100.0 52.8 91.6 50.0 88.4 55.8 93.3 bbeh_zebra_puzzles 34.5 0.5 3.5 5.0 20.5 0.0 0.0 10.0 8.0 BBH_logical_deduction_five_objects 66.8 56.0 100.0 46.8 69.2 64.8 95.2 58.6 88.1 BBH_logical_deduction_seven_objects 66.0 55.2 100.0 47.2 67.6 70.8 94.8 59.8 87.5 BBH_logical_deduction_three_objects 89.6 74.4 100.0 48.8 73.2 72.4 92.4 71.3 88.5 bbeh_boardgame_qa 33.0 18.5 46.5 7.0 20.5 0.5 5.0 14.8 24.0 BBH_tracking_shuffled_objects_five_objects 84.8 3.6 100.0 34.8 82.0 16.8 63.6 35.0 81.9 BBH_tracking_shuffled_objects_seven_objects 85.2 5.2 100.0 43.2 82.0 15.6 68.4 37.3 83.5 BBH_tracking_shuffled_objects_three_objects 89.2 0.4 100.0 35.6 76.4 22.0 58.4 36.8 78.3 bbeh_shuffled_objects 59.5 4.0 26.5 2.0 12.5 1.5 4.0 16.8 14.3 BBH_web_of_lies 81.2 59.2 100.0 78.4 94.4 66.8 74.8 71.4 89.7 bbeh_web_of_lies 9.0 4.0 13.0 1.0 5.0 0.5 3.5 3.6 7.2 bAbI15 23.7 54.3 99.9 29.1 90.7 16.0 64.2 30.8 84.9 NeuLRdeductive 91.9 60.4 96.5 20.4 77.0 7.7 41.8 45.1 71.8 clutrr 17.7 26.4 99.9 14.2 82.8 12.1 60.0 17.6 80.9 bAbI16 23.7 55.8 99.9 31.3 91.6 14.8 63.5 31.4 85.0 NeuLRinductive 7.4 8.8 96.5 16.1 91.1 14.4 53.3 11.7 80.3 anli 77.7 78.8 99.8 59.8 95.1 55.6 83.7 68.0 92.9 AbductionRules 88.3 50.6 | https://arxiv.org/abs/2505.16998v1 |
81.4 34.8 41.8 23.9 37.0 49.4 53.4 BBH_causal_judgement 51.9 54.0 100.0 37.4 92.5 40.6 85.6 46.0 92.7 bbeh_causal_understanding 45.0 39.0 98.0 26.5 82.0 26.5 69.0 34.3 83.0 NeuLRabductive 20.8 12.9 83.5 22.0 52.5 8.2 21.6 16.0 52.5 logicqa 68.2 64.8 98.2 54.9 95.9 40.8 82.5 57.2 92.2 BBH_date_understanding 84.8 32.0 100.0 33.6 73.6 38.4 84.8 47.2 86.1 bbeh_time_arithmetic 30.5 3.0 32.0 9.0 52.5 10.5 46.0 13.3 43.5 BBH_temporal_sequences 83.6 67.2 100.0 48.4 85.6 54.0 72.8 63.3 86.1 bbeh_temporal_sequence 5.0 0.0 34.5 0.0 41.5 0.0 5.0 1.3 27.0 BBH_disambiguation_qa 41.2 59.2 100.0 37.2 93.6 37.2 88.0 43.7 93.9 bbeh_disambiguation_qa 45.8 29.2 90.0 35.8 81.7 15.8 44.2 31.7 71.9 BBH_hyperbaton 68.0 70.0 100.0 53.6 94.4 32.4 57.6 56.0 84.0 bbeh_hyperbaton 0.5 1.5 22.0 1.0 27.0 0.0 0.5 0.8 16.5 BBH_ruin_names 53.2 36.0 100.0 28.4 82.8 10.4 39.2 32.0 74.0 bbeh_nycc 10.5 8.5 94.0 7.0 59.0 4.5 41.5 7.6 64.8 BBH_salient_translation_error_detection 47.2 10.0 100.0 30.4 58.0 21.6 60.0 27.3 72.7 bbeh_linguini 18.0 16.0 81.5 6.0 73.5 4.5 52.0 11.1 69.0 BBH_snarks 77.5 31.5 100.0 49.4 97.2 41.0 84.8 49.9 94.0 bbeh_sarc_triples 16.0 12.5 91.5 12.0 74.5 5.5 22.5 11.5 62.8 BBH_dyck_languages 83.2 39.2 100.0 24.8 76.8 14.8 40.0 40.5 72.3 bbeh_dyck_languages 4.5 1.5 26.0 8.0 57.5 0.5 26.0 3.6 36.5 BBH_word_sorting 32.4 97.2 100.0 61.2 86.0 6.0 32.4 49.2 72.8 bbeh_word_sorting 21.0 10.0 58.5 23.0 76.0 10.5 27.5 16.1 54.0 BBH_geometric_shapes 62.0 80.4 100.0 48.4 86.0 55.2 89.2 61.5 91.7 bbeh_geometric_shapes 28.5 19.5 53.5 30.0 98.5 30.0 65.0 27.0 72.3 BBH_navigate 84.0 50.4 100.0 43.6 86.0 29.6 50.8 51.9 78.9 bbeh_spatial_reasoning 6.0 1.5 22.0 4.0 27.5 4.0 19.0 3.9 22.8 BBH_penguins_in_a_table 91.1 56.2 100.0 63.0 78.8 24.7 48.6 58.7 75.8 bbeh_buggy_tables 32.5 0.0 6.0 0.5 5.5 0.0 2.0 8.3 4.5 BBH_movie_recommendation 63.2 42.0 100.0 28.8 72.4 10.0 42.0 36.0 71.5 bbeh_movie_recommendation 33.0 23.5 83.5 26.5 84.0 3.5 16.0 21.6 61.2 BBH_sports_understanding 74.0 58.0 100.0 55.6 99.2 
48.4 95.6 59.0 98.3 bbeh_sportqa 17.0 35.0 82.0 19.5 59.5 4.0 17.5 18.9 53.0 gsm8k 93.0 17.1 21.3 81.1 96.5 58.9 81.3 62.5 66.4 MATH 76.7 42.3 100.0 40.7 74.7 39.9 71.1 49.9 81.9 BBH_multistep_arithmetic_two 94.0 98.8 100.0 98.4 99.6 48.4 82.0 84.9 93.9 bbeh_multistep_arithmetic 6.5 0.0 0.0 0.5 7.0 0.0 3.0 1.8 3.3 BBH_object_counting 56.0 82.8 100.0 86.4 95.6 48.8 71.2 68.5 88.9 bbeh_object_counting 18.0 0.0 25.5 0.0 63.0 0.5 31.0 4.6 39.8 BBH_reasoning_about_colored_objects 79.6 54.8 100.0 68.4 84.4 46.0 71.6 62.2 85.3 bbeh_object_properties 21.0 0.5 25.0 2.0 37.0 0.0 22.5 5.9 28.2 Table 6: Qwen2.5-7B Full Result. 23 DatasetText PoT Z3 CSP A VG ACC ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate ACC Exec_Rate Average 52.7 43.9 83.5 34.8 76.5 37.0 68.1 42.1 76.0 FOLIO 83.6 78.4 99.3 71.6 100.0 61.2 97.8 73.7 99.0 ProntoQA 99.0 84.8 99.4 54.6 90.2 81.2 93.8 79.9 94.5 logicbenchBQA 80.1 70.4 100.0 72.3 99.1 64.2 99.6 71.8 99.6 BoardgameQA 68.8 50.0 94.9 50.9 97.8 46.5 92.3 54.1 95.0 ARLSAT 23.9 31.7 85.7 30.9 68.7 30.0 61.3 29.1 71.9 BBH_boolean_expressions 96.8 98.8 100.0 42.0 74.8 64.4 99.2 75.5 91.3 bbeh_boolean_expressions 70.0 0.0 53.0 3.0 4.0 22.5 33.0 23.9 30.0 BBH_formal_fallacies 68.0 58.4 100.0 | https://arxiv.org/abs/2505.16998v1 |
61.2 100.0 54.8 98.0 60.6 99.3 bbeh_zebra_puzzles 39.0 12.5 46.5 13.5 39.5 8.0 22.0 18.3 36.0 BBH_logical_deduction_five_objects 61.6 57.6 100.0 62.8 94.0 77.6 96.8 64.9 96.9 BBH_logical_deduction_seven_objects 50.4 44.0 100.0 57.2 99.6 70.0 83.6 55.4 94.4 BBH_logical_deduction_three_objects 84.8 75.6 100.0 74.8 91.2 92.0 94.8 81.8 95.3 bbeh_boardgame_qa 31.5 0.0 84.5 12.5 37.0 18.0 46.0 15.5 55.8 BBH_tracking_shuffled_objects_five_objects 78.8 100.0 100.0 2.4 99.2 4.4 28.0 46.4 75.7 BBH_tracking_shuffled_objects_seven_objects 71.2 100.0 100.0 2.4 100.0 2.4 22.8 44.0 74.3 BBH_tracking_shuffled_objects_three_objects 80.8 99.6 100.0 8.0 100.0 5.2 16.8 48.4 72.3 bbeh_shuffled_objects 42.0 6.0 19.5 9.5 23.0 10.5 23.0 17.0 21.8 BBH_web_of_lies 85.6 63.2 100.0 77.2 100.0 58.0 64.0 71.0 88.0 bbeh_web_of_lies 12.5 13.0 51.5 1.0 9.5 2.5 7.0 7.3 22.7 bAbI15 99.1 96.1 97.3 89.6 99.1 89.2 98.5 93.5 98.3 NeuLRdeductive 92.1 85.7 99.0 37.1 84.9 49.8 86.7 66.2 90.2 clutrr 46.8 30.3 99.9 34.4 99.5 46.0 99.8 39.4 99.7 bAbI16 93.0 96.7 100.0 78.4 99.6 83.9 100.0 88.0 99.9 NeuLRinductive 8.2 8.6 100.0 18.0 98.6 14.1 93.8 12.2 97.5 anli 84.1 76.7 100.0 69.7 100.0 80.3 99.9 77.7 100.0 AbductionRules 91.0 85.7 99.8 47.7 95.0 48.3 81.1 68.2 92.0 BBH_causal_judgement 55.1 61.5 100.0 55.6 100.0 49.7 100.0 55.5 100.0 bbeh_causal_understanding 42.5 0.0 87.0 43.0 96.5 40.0 95.5 31.4 93.0 NeuLRabductive 12.0 15.4 93.8 17.3 84.3 1.0 27.4 11.4 68.5 logicqa 63.8 53.8 99.5 42.4 98.9 45.1 97.1 51.3 98.5 BBH_date_understanding 74.0 54.4 100.0 40.4 97.2 55.6 96.8 56.1 98.0 bbeh_time_arithmetic 32.0 10.5 59.5 7.5 43.5 10.5 45.5 15.1 49.5 BBH_temporal_sequences 30.0 42.8 98.8 33.6 98.4 36.0 76.0 35.6 91.1 bbeh_temporal_sequence 9.0 0.0 42.5 0.0 20.0 0.0 30.5 2.3 31.0 BBH_disambiguation_qa 35.6 46.0 100.0 41.2 99.2 64.0 99.2 46.7 99.5 bbeh_disambiguation_qa 42.5 0.0 43.3 34.2 90.0 61.7 100.0 34.6 77.8 BBH_hyperbaton 72.0 66.8 91.2 66.4 100.0 82.0 96.8 71.8 96.0 bbeh_hyperbaton 1.0 0.0 64.0 3.0 33.5 2.0 
18.0 1.5 38.5 BBH_ruin_names 37.2 32.4 100.0 28.0 98.8 39.6 94.4 34.3 97.7 bbeh_nycc 8.0 0.0 62.5 1.5 95.5 5.0 72.5 3.6 76.8 BBH_salient_translation_error_detection 44.4 27.6 100.0 38.4 99.6 26.8 100.0 34.3 99.9 bbeh_linguini 18.0 0.0 53.5 2.5 79.0 10.5 63.0 7.8 65.2 BBH_snarks 73.6 50.6 100.0 51.1 98.9 41.6 100.0 54.2 99.6 bbeh_sarc_triples 16.5 12.0 96.5 16.5 43.0 34.0 100.0 19.8 79.8 BBH_dyck_languages 64.4 38.8 97.6 14.4 96.8 3.2 31.2 30.2 75.2 bbeh_dyck_languages 2.5 0.0 24.5 4.5 47.0 0.5 18.0 1.9 29.8 BBH_word_sorting 20.8 96.4 100.0 25.2 86.8 4.4 31.6 36.7 72.8 bbeh_word_sorting 20.5 12.5 73.5 6.5 84.5 1.5 15.0 10.3 57.7 BBH_geometric_shapes 53.6 13.6 100.0 29.2 99.2 60.8 99.2 39.3 99.5 bbeh_geometric_shapes 20.0 1.5 68.5 17.0 58.5 42.5 73.0 20.3 66.7 BBH_navigate 88.8 75.2 96.0 65.2 94.0 55.2 97.2 71.1 95.7 bbeh_spatial_reasoning 7.5 9.5 41.5 7.5 32.0 3.5 21.5 7.0 31.7 BBH_penguins_in_a_table 84.3 98.0 100.0 82.2 98.6 50.7 85.6 78.8 94.8 bbeh_buggy_tables 86.5 0.0 28.0 1.0 13.5 0.0 7.0 21.9 16.2 BBH_movie_recommendation 75.2 47.2 99.2 21.2 90.0 70.8 100.0 53.6 96.4 bbeh_movie_recommendation 49.0 0.0 52.5 11.0 71.0 9.0 74.5 17.3 66.0 BBH_sports_understanding 73.6 56.0 99.2 62.8 100.0 53.6 99.2 61.5 99.5 bbeh_sportqa 11.5 42.5 87.5 4.0 12.0 3.0 11.5 15.3 37.0 gsm8k 90.6 89.2 100.0 74.8 99.0 | https://arxiv.org/abs/2505.16998v1 |
60.0 95.4 78.6 98.1 MATH 71.6 60.8 92.3 33.1 68.5 36.4 85.6 50.5 82.1 BBH_multistep_arithmetic_two 96.4 98.0 100.0 95.2 100.0 85.2 100.0 93.7 100.0 bbeh_multistep_arithmetic 10.5 0.0 60.5 0.0 2.0 0.0 7.0 2.6 23.2 BBH_object_counting 41.2 72.8 100.0 76.0 91.2 53.6 88.8 60.9 93.3 bbeh_object_counting 16.5 0.0 41.5 0.0 19.5 0.0 5.5 4.1 22.2 BBH_reasoning_about_colored_objects 71.6 87.2 100.0 60.4 96.0 59.2 90.8 69.6 95.6 bbeh_object_properties 9.5 1.5 25.0 1.0 10.0 0.0 4.0 3.0 13.0

Table 7: Qwen2.5-7B-Base.w.Formal Full Result.

Prompt for PoT

{question} Let's write a Python program to solve the problem. Please give the python code in the following format: ```python ``` and print the final result.

Figure 11: Prompt for PoT

Prompt for Z3

{question} Let's write code in Python that utilizes z3-solver to solve the problem. Requires code to start with: "from z3 import *". Please give the python code in the following format: ```python ``` and print the final result.

Figure 12: Prompt for Z3

Prompt for CSP

{question} Let's write code in Python that utilizes python-constraint to solve the question. The task is to parse the problem as a constraint satisfaction problem, defining the domain, variables, and constraints. Requires code to start with: 'from constraint import * problem = Problem()'. Please give the python code in the following format: ```python\n``` and print the final result.

Figure 13: Prompt for CSP

Prompt for Model Evaluation

[Instruction] Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider the correctness. You will be given a reference answer and the assistant's answer. Begin your evaluation by comparing the assistant's answer with the reference answer; you must identify whether the response is correct or wrong by strictly following this format: "<correct or wrong>", for example: "<correct>" or "<wrong>".
[Question]
{question}

[The Start of Reference Answer]
{answer}
[The End of Reference Answer]

[The Start of Assistant's Answer]
{completion}
[The End of Assistant's Answer]

Figure 14: Prompt for Model Eval
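As a toy illustration of the answer format the PoT prompt (Figure 11) elicits (the question here is invented for illustration and is not drawn from any of the benchmarks), a model response might look like:

```python
# Question (invented example): "A shop sells pens at $3 each.
# How much do 14 pens cost in dollars?"
price_per_pen = 3
num_pens = 14
total_cost = price_per_pen * num_pens
print(total_cost)  # prints 42
```

The printed value is then extracted as the model's final answer, which is what the "print the final result" clause in the prompt is for.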
arXiv:2505.17005v1 [cs.CL] 22 May 2025

R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning

Huatong Song1*, Jinhao Jiang1*, Wenqing Tian3, Zhipeng Chen1, Yuhuan Wu1, Jiahao Zhao1, Yingqian Min1, Wayne Xin Zhao1†, Lei Fang2, Ji-Rong Wen1
1Gaoling School of Artificial Intelligence, Renmin University of China. 2DataCanvas Alaya NeW. 3Beijing Institute of Technology.
{songhuatong123, jiangjinhao}@ruc.edu.cn, batmanfly@gmail.com

Abstract

Large Language Models (LLMs) are powerful but prone to hallucinations due to static knowledge. Retrieval-Augmented Generation (RAG) helps by injecting external information, but current methods are often costly, generalize poorly, or ignore the model's internal knowledge. In this paper, we introduce R1-Searcher++, a novel framework designed to train LLMs to adaptively leverage both internal and external knowledge sources. R1-Searcher++ employs a two-stage training strategy: an initial SFT Cold-start phase for preliminary format learning, followed by RL for Dynamic Knowledge Acquisition. The RL stage uses outcome-supervision to encourage exploration, incorporates a reward mechanism for internal knowledge utilization, and integrates a memorization mechanism to continuously assimilate retrieved information, thereby enriching the model's internal knowledge. By leveraging internal knowledge and an external search engine, the model continuously improves its capabilities, enabling efficient retrieval-augmented reasoning. Our experiments demonstrate that R1-Searcher++ outperforms previous RAG and reasoning methods and achieves efficient retrieval. The code is available at https://github.com/RUCAIBox/R1-Searcher-plus.

1 Introduction

Large language models (LLMs) (Zhao et al., 2023) have demonstrated remarkable reasoning capabilities by only leveraging the information encoded in their parameters. However, their reliance on static, internal knowledge leads to notable limitations.
At the same time, this reliance easily leads to hallucinations (Huang et al., 2025), so LLMs may struggle with open-ended tasks (Wang et al., 2025c; Trivedi et al., 2022). Therefore, it is crucial to enable LLMs to access external information when they are confused during the reasoning process, to achieve more deliberative reasoning (Jiang et al., 2024a).

*Equal contributions. †Corresponding author.

To address this issue, extensive research has focused on augmenting LLMs with external information sources (i.e., RAG (Gao et al., 2024)). Early approaches emphasize specific prompting strategies to guide LLMs (Li et al., 2025; Teng et al., 2025), and subsequent studies investigate distilling this capability into smaller LLMs through supervised fine-tuning (SFT) (Wang et al., 2025b). However, recent findings suggest that SFT-based distillation can cause models to memorize solution paths, limiting their generalization to novel scenarios (Chu et al., 2025). Further proposals include test-time scaling methods (Li et al., 2024), notably employing the Monte Carlo Tree Search (MCTS) framework (Sun et al., 2025) to enhance solution-finding by expanding the search space during inference, but this approach incurs significant inference overhead, reducing its practicality for widespread use. Recent studies employ end-to-end outcome-based reinforcement learning (RL) to train models, enabling them to autonomously explore external retrieval environments during inference (Jin et al., 2025; Song et al., 2025). This approach fosters the development of self-directed retrieval capabilities in LLMs as they reason. However, such models often become overly reliant on external search engines after training,
neglecting the utilization of their internal knowledge. In practice, when humans attempt to solve factual questions, they first recall their internal knowledge, and only turn to search engines when they recognize a lack of information. At the same time, after obtaining the externally searched information, humans memorize this knowledge for future use. For LLMs, extensive pretraining on large-scale data has already endowed them with substantial internal knowledge (Qwen et al., 2025). Therefore, it is essential to equip models with the ability to dynamically switch between internal and external knowledge sources as needed. Furthermore, models should be encouraged to effectively memorize useful information encountered during training (Jiang et al., 2024b), progressively enriching their internal knowledge and continuously evolving toward greater intelligence.

In this paper, we present R1-Searcher++, a novel framework designed to teach LLMs to adaptively leverage both internal and external knowledge. We adopt a two-stage training strategy: SFT Cold-start and RL for Dynamic Knowledge Acquisition. In the first stage, we employ rejection sampling to collect data that meets the format requirements and perform a cold start with SFT. In the second stage, we further train the model using outcome-based RL to guide it in dynamically acquiring knowledge, encouraging reliance on internal knowledge when confident and invoking external search when uncertain, based on a carefully designed reward. Additionally, we introduce a memory mechanism, enabling the model to retain knowledge encountered during training by converting and memorizing retrieved content. This mechanism continuously enriches its internal knowledge, empowering it to effectively balance internal reasoning and external retrieval through autonomous exploration and timely memorization.
To verify the effectiveness of R1-Searcher++, we conduct extensive experiments based on Qwen-2.5-7B-Instruct. Notably, our method surpasses the strong baseline by up to 4.3% and reduces the retrieval count by 42.9% compared to vanilla RL-based approaches. Our principal contributions are as follows:

• We introduce R1-Searcher++, teaching LLMs to adaptively leverage both internal and external knowledge through a two-stage training strategy.

• We encourage the model to actively leverage its internal knowledge while efficiently memorizing external information, enabling dynamic knowledge acquisition through exploration and memorization.

• Extensive experiments show that R1-Searcher++ outperforms existing RAG methods, while significantly reducing the number of retrievals compared to vanilla RL-based approaches.

2 Related Work

Retrieval-Augmented Generation. To improve the factual accuracy of LLM inference and reduce hallucinations, researchers have proposed enhancing language models by incorporating external information sources, a paradigm known as RAG (Fan et al., 2024). Early RAG approaches primarily include Branching (Kim et al., 2024), Summarization (Li et al., 2023), and Adaptive Retrieval (Jeong et al., 2024) strategies. As foundation models have become increasingly capable, exhibiting strong CoT reasoning abilities, many studies have combined RAG with CoT. These efforts include methods that prompt the model to perform step-by-step retrieval (Shao et al., 2023; Trivedi et al., 2023) and strategies that distill such capabilities into smaller LLMs (Asai et al., 2024). In parallel, several works have explored test-time
scaling, notably using MCTS (Feng et al., 2025) to dynamically expand reasoning paths. However, such approaches often incur substantial inference-time overhead. More recently, researchers have trained models using outcome-supervised RL (Zheng et al., 2025) to encourage the exploration of more effective actions and retrieval behaviors, but this leads models to over-rely on external search engines, diminishing their ability to leverage internal knowledge (Wang et al., 2025a). Enabling LLMs to effectively integrate and alternate between internal knowledge and external retrieval remains a significant challenge.

Reinforcement Learning. To improve training efficiency, several off-policy algorithms have been proposed (Rafailov et al., 2024; Ethayarajh et al., 2024); however, these methods still face limitations in terms of preference modeling accuracy and generalization capability (Pang et al., 2024). DeepseekMath introduced the GRPO algorithm (Shao et al., 2024), which enables efficient self-exploration through a mechanism of relative preference optimization. Building on this, Deepseek-R1 (DeepSeek-AI et al., 2025) has demonstrated that outcome-based RL can significantly enhance the reasoning abilities of large models. More recently, studies have begun to investigate RL algorithms specifically designed to improve LLM reasoning capabilities (Yu et al.; Yuan et al., 2025). In parallel, other research efforts have applied RL to the retrieval domain, aiming to enable deep search capabilities (Chen et al., 2025). However, the use of RL to combine LLM-driven retrieval and reasoning remains largely simplistic and underexplored.
3 Preliminary

To enhance the performance of LLMs in open-domain multi-hop question answering tasks (Ho et al., 2020), in this work we focus on enabling the model to autonomously decide when to use its internal knowledge and when to invoke an external retriever to answer the given questions, under the LLM self-improving paradigm, which can improve both reasoning effectiveness and efficiency. To this end, we introduce three special tokens to format the LLM reasoning process, i.e., <internal>, <external>, and <document>. Concretely, during the reasoning process, the LLM with parameters θ determines whether the current step requires external knowledge to help perform reasoning. If so, it triggers <external> to issue a query t, which is sent to a retriever to fetch the top-K relevant documents D_t = {d_{t,k}}_{k=1}^{K} from an external corpus. These retrieved documents are incorporated into the reasoning path with another special token, <document>. Otherwise, the model directly generates the related internal knowledge enclosed in <internal>. After several reasoning steps, the LLM obtains the final answer and stops the reasoning process.

Since our approach is orthogonal to the RL algorithm, we conduct the experiments based on a widely used RL algorithm, i.e., REINFORCE++ (Hu, 2025), which is a stable RL algorithm without the critic model. To better accommodate the retrieval scenario, we mask the retrieved documents during the loss calculation, as they serve as environmental observations rather than model-generated content. Formally, for each question q, we first sample a group of outputs {o_1, o_2, ..., o_G} from the old policy model π_{θ_old}. Next, we incorporate the KL regularization into the reward scores
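To make the tag-driven reasoning loop concrete, the following is a minimal sketch; `model_continue` (generation until a stop string) and `retrieve` (top-K passage lookup) are hypothetical stand-ins for the policy model and the retriever, not APIs from the paper.

```python
import re

def reasoning_step(model_continue, retrieve, text, top_k=5):
    # Let the policy generate until it finishes or closes an
    # <external> query tag.
    text = model_continue(text, stop=["</external>"])
    match = re.search(r"<external>(.*?)</external>\s*$", text, re.DOTALL)
    if match:
        # The model asked for external knowledge: run retrieval and
        # append the top-K documents as an environment observation,
        # wrapped in the <document> special tag.
        docs = retrieve(match.group(1).strip(), top_k)
        text += "<document>" + "\n".join(docs) + "</document>"
    # Otherwise the model produced <internal> knowledge or a final
    # answer, and generation simply continues from `text`.
    return text
```

Calling this step in a loop until no `<external>` tag is emitted reproduces the alternation between internal recall and external retrieval described above.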
R_ϕ(q, o_i, ≤t), and then normalize the advantage scores:

    Â′_{i,t} = R(q, o_i) − β · Σ_{j=t}^{T} KL(j),    Â_{i,t} = (Â′_{i,t} − mean(Â)) / std(Â)

We use Â to denote the set of all advantages in the global batch that contains Â_{i,t}. After obtaining the advantage scores, we set the mask value M(i, t) to 0 if the token belongs to an external document; otherwise we set M(i, t) = 1. Finally, we employ the masks M(i, t) in the objective function to remove the influence of retrieved documents:

    P̂_{i,t} = min[ p_{i,t} Â_{i,t}, clip(p_{i,t}, 1 − ε, 1 + ε) Â_{i,t} ],

    J_Mask(θ) = (1/G) Σ_{i=1}^{G} ( 1 / Σ_{t=1}^{|o_i|} M(i, t) ) Σ_{t=1}^{|o_i|} M(i, t) · P̂_{i,t}    (1)

where ε is a hyper-parameter, p_{i,t} is the importance sampling coefficient, and π_θ is the policy model.

4 Methodology

In this part, we introduce the R1-Searcher++ framework, which aims to teach the LLM to adaptively utilize internal and external knowledge through two critical stages, i.e., SFT Cold-Start (Section 4.1) and RL for Dynamic Knowledge Acquisition (Section 4.2). Concretely, in the first stage, we utilize the curated data to perform SFT on the model, to standardize its responses in a specific format and enable it to leverage external retrievers and internal knowledge adaptively. In the second stage, we employ RL that encourages the model to explore more effective actions and behaviours, and further incorporate internal knowledge utilization encouragement and external knowledge memorization into the training process, to guide the model to dynamically acquire knowledge and continuously enrich its internal knowledge, leading to higher reasoning efficiency.
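The advantage computation and masked objective in Equation 1 can be sketched with per-token arrays; the shapes and the per-token KL inputs below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def masked_objective(rewards, kl, ratios, doc_mask, beta=0.01, eps=0.2):
    """Sketch of Eq. 1. `rewards`: one scalar R(q, o_i) per rollout,
    shape (G,). `kl`, `ratios` (the coefficients p_{i,t}), and
    `doc_mask` (0 on retrieved-document tokens): shape (G, T)."""
    # A'_{i,t} = R(q, o_i) - beta * sum of KL from step t to T;
    # a reversed cumulative sum gives the tail sums in one pass.
    kl_tail = np.cumsum(kl[:, ::-1], axis=1)[:, ::-1]
    adv = rewards[:, None] - beta * kl_tail
    # Normalize over all advantages in the global batch.
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)
    # PPO-style clipped surrogate per token.
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * adv
    surrogate = np.minimum(ratios * adv, clipped)
    # Masked average: document tokens are environment observations,
    # not model actions, so they are excluded from the objective.
    per_rollout = (doc_mask * surrogate).sum(axis=1) / doc_mask.sum(axis=1)
    return per_rollout.mean()
```

The same masking idea reappears in the SFT loss of Section 4.1, where document tokens are likewise excluded from the objective.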
4.1 SFT Cold-start

To equip LLMs with the preliminary ability to autonomously perform external retrieval during inference while effectively leveraging internal knowledge, we synthesize high-quality training instances using rejection sampling, without relying on other powerful models. We only keep the correct responses with appropriate occurrences of both the <internal> and <external> tags, teaching the LLM to perform dynamic knowledge acquisition in a proper format.

Specifically, given the question x and the synthesized output y, if the i-th token of the output belongs to an external document, it is masked, i.e., M_i = 0. Otherwise, the coefficient M_i is set to 1, incorporating the log-probability of y_i into the objective as follows:

    L_SFT = − (1 / Σ_{j=1}^{n} M_j) Σ_{i=1}^{n} M_i · log P(y_i | x, y_{<i})    (2)

4.2 RL for Dynamic Knowledge Acquisition

After cold-starting, we obtain a model that can utilize internal knowledge and perform an external search in the correct format. To further enhance its capabilities, i.e., to perform effective and efficient reasoning, we continually train the model
[Figure 1: Overall framework of our proposed R1-Searcher++ approach.]
through the RL process, which includes a mechanism that encourages internal knowledge utilization (Section 4.2.1) and a mechanism for converting and memorizing external knowledge (Section 4.2.2).

4.2.1 Internal Knowledge Utilization Encouragement

In the RL process, the reward function provides the supervision signals that adjust and optimize the behaviours of the model (Hu, 2025). Therefore, given the question q and the i-th generated response o_i, we design the format reward and answer reward to induce the model to perform reasoning correctly in the expected format, and incorporate the group reward into the final reward function to mitigate over-reliance on the external retriever. We introduce the details of each reward below.

Format reward. We impose a strict formatting constraint to ensure model responses are consistent and clear. During the reasoning process, when calling the external retriever, the model is required to formulate a query and enclose it within the <external>...</external> tags, and is prohibited from generating document content directly without first invoking retrieval. When the reasoning process finishes, the final answer must be enclosed within \boxed{}, and the content should not contain any garbled or unreadable text. If the model's behaviour satisfies all of the above requirements, we set the format reward R_format to 0; otherwise we set it to −2, as shown in the following:

    R_format(q, o_i) = 0 if the format of o_i is correct; −2 if the format of o_i is incorrect    (3)

Answer reward. To indicate the correctness of the final answer, we leverage the Cover Exact Match (CEM) metric to calculate the answer reward, adapting to the group reward discussed below and relieving the issue of EM being too strict. Concretely, CEM is
True if the ground truth answer appears in the predicted answer a_i extracted from the response o_i, and False otherwise. However, we observe that the LLM can easily hack the CEM metric during the RL process: it is likely to generate a longer predicted answer, which has a higher probability of covering the ground truth answer, causing CEM to be falsely high. Therefore, we regard any answer exceeding 10 words as incorrect, requiring the LLM to generate the answer within ten words, which alleviates this reward hacking issue. In summary, the answer reward R_answer is computed as follows:

    R_answer(q, o_i) = 1 if a_i is within 10 words ∧ CEM = True; 0 otherwise    (4)

Group reward. Building upon the first two rewards for LLM reasoning effectiveness, the group reward is designed to encourage the model to reduce its reliance on external retrieval, increasing reasoning efficiency. Considering that the variance of the number of retriever calls made by the LLM reflects the necessity of performing external retrieval, the group reward is calculated from the standard deviation of the number of retriever calls across responses to the same question. Formally, given the question q and a set of generated responses {o_1, o_2, ..., o_n}, we first count the number of retriever calls t_i of each response, and then calculate the standard deviation σ of {t_1, t_2, ..., t_n}. Next, we find the minimum number of retriever calls among the correct responses, i.e., t_min = min{t_i | R_answer(q, o_i) = 1}:

    R′_group(q, o_i) = 2 × σ² if R_answer(q, o_i) = 1 ∧ t_i = t_min; 0 otherwise    (5)

Meanwhile, to maintain training stability and prevent excessive variance, we introduce a hyperparameter η to clip the corresponding factor.
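A minimal sketch of these reward components, assuming each response has already been parsed into a format flag, an extracted answer, and a per-response retrieval count (all helper names here are illustrative):

```python
import statistics

def format_reward(format_ok):
    # Eq. 3: 0 for a well-formed response, -2 otherwise.
    return 0.0 if format_ok else -2.0

def answer_reward(pred_answer, gold_answer):
    # Eq. 4: Cover Exact Match (CEM), with predictions longer than
    # 10 words treated as incorrect to discourage reward hacking.
    within_limit = len(pred_answer.split()) <= 10
    covered = gold_answer.lower() in pred_answer.lower()
    return 1.0 if (within_limit and covered) else 0.0

def group_rewards(retrieval_counts, answer_rewards, eta=1.0):
    # Eq. 5 with the eta clip: only correct responses that used the
    # fewest retrieval calls receive 2*sigma^2, clipped by eta.
    correct = [t for t, r in zip(retrieval_counts, answer_rewards) if r == 1.0]
    if not correct:
        return [0.0] * len(retrieval_counts)
    sigma = statistics.pstdev(retrieval_counts)  # std over all t_i
    t_min = min(correct)
    return [min(2.0 * sigma ** 2, eta) if (r == 1.0 and t == t_min) else 0.0
            for t, r in zip(retrieval_counts, answer_rewards)]
```

The final per-rollout reward is then the sum of the three components. Note the prose leaves slightly ambiguous whether σ is taken over all responses or only correct ones; the sketch follows the formal definition and uses all responses.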
The final group reward is formulated as follows:

    R_group(q, o_i) = min(R′_group(q, o_i), η)    (6)

Finally, the reward R(q, o_i) used to calculate the advantage is defined as the sum of the three sub-rewards above:

    R(q, o_i) = R_format(q, o_i) + R_answer(q, o_i) + R_group(q, o_i)    (7)

4.2.2 External Knowledge Memorization

The standard RL training paradigm relies on the model's self-exploration and feedback from the external environment. In retrieval-based scenarios, since the knowledge returned by the retriever is assumed to be correct, the model should, like a human, aim to memorize this information during training, transforming it into internal knowledge. This enables the model to use the acquired knowledge directly in future instances without repeated retrieval, thereby achieving efficient reuse of retrieved information. Thus, we incorporate external knowledge memorization by rewriting the retrieved information to align with the model's internal knowledge utilization pattern, enabling the model to internalize it effectively.

To obtain the rewritten instances, at the beginning of the RL process, we fine-tune a separate model on the data filtered in Section 4.1 as the rewriting model, which can solve questions based on pre-processed documents without calling the retriever. During
the RL process, we select the correct responses generated by the LLM and extract the retrieved documents from them. Given the question and the extracted documents in the context, the rewriting model generates reasoning paths without calling the external retriever. After validating the correctness of these reasoning paths, we select the correct instances to construct the dataset T for memorization and internalization. The corresponding memorization loss is computed as follows:

    L_M(θ) = − (1 / Σ_{o_i ∈ T} |o_i|) Σ_{o_i ∈ T} Σ_{t=1}^{|o_i|} log π_θ(o_{i,t} | q, o_{i,<t})    (8)

To avoid L_M(θ) dominating the policy model's training and causing the model to ignore external retrieval, we weight it with a pre-defined coefficient µ. The final loss used to optimize the policy model during the retrieval-scenario RL process is computed as follows:

    L(θ) = −J_Mask(θ) + µ · L_M(θ)    (9)

Thus, during training, the model not only engages in self-exploration but also continuously enriches its internal knowledge, enabling it to become increasingly capable over time.

5 Experiments

5.1 Experimental Settings

Datasets and Evaluation Metrics. We evaluate using four multi-hop datasets: HotpotQA (Yang et al., 2018), 2WikiMultiHopQA (Ho et al., 2020), Musique (Trivedi et al., 2022), and Bamboogle (Press et al., 2023). HotpotQA and 2WikiMultiHopQA are in-domain benchmarks since parts of their training sets are used for training. In contrast, Musique and Bamboogle serve as out-of-domain benchmarks to assess our model's generalization capabilities. We randomly select 500 samples from the entire validation sets of HotpotQA, 2WikiMultiHopQA, and Musique, and use the entire test set of Bamboogle to form our final test set. For evaluation metrics, we utilize F1-score and LLM-as-Judge (LasJ), considering that the answers to open-ended multi-hop questions are not uniform in form.
The F1-score measures the word-level similarity between the predicted answer and the reference answer, while LLM-as-Judge employs GPT-4o-mini to assess the correctness of the prediction. The evaluation prompt for LasJ is provided in Appendix B.

Baselines. We compare R1-Searcher++ against several baselines. Naive Generation generates answers directly without retrieval. Standard RAG represents traditional RAG systems that retrieve documents directly based on the question. SuRe (Kim et al., 2024) executes multiple reasoning paths in parallel for a single query. Selective-Context (Li et al., 2023) compresses retrieved documents to reduce context length. Adaptive-RAG (Jeong et al., 2024) dynamically selects retrieval strategies depending on the complexity of the query. CR-Planner (Li et al., 2024) scales RAG at inference time using MCTS. RAG-CoT methods, such as Iter-RetGen (Shao et al., 2023), IRCoT (Trivedi et al., 2023), and Search-o1 (Li et al., 2025), combine RAG with CoT using prompts. RAG-RL methods like R1-Searcher (Song et al., 2025) and Search-R1 (Jin et al., 2025) leverage RL to enable the model to learn to autonomously perform retrieval during inference.

Implementation Details. R1-Searcher++ and all baseline models are either trained or prompted using Qwen-2.5-7B-Instruct as the backbone, and evaluated with FlashRAG (Jin et al., 2024) using a local dense-retrieval corpus. The retrieval corpus comprises the English Wikipedia as provided by KILT (Petroni et al., 2021)
in 2019, segmented into 100-word passages with appended titles, totaling 29 million passages. We employ BGE-large-en-v1.5 as the text retriever. Detailed training settings for R1-Searcher++ are provided in Appendix A.

5.2 Main Results

Table 1 shows the results of R1-Searcher++ and the baselines on four multi-hop benchmarks. We can make the following observations:

• Achieving Significant Performance Improvement on Multi-Hop QA. Our method, R1-Searcher++, achieves significant performance improvements over all multi-hop QA benchmarks under the LLM-as-Judge evaluation metric, surpassing both tree search-based and RL-based approaches. Specifically, R1-Searcher++ outperforms CR-Planner by 25.7% and surpasses the best vanilla RL-based method, R1-Searcher, by 4.3% on the overall test set. These results demonstrate that our approach effectively enables the model to perform accurate and timely retrieval invocations throughout the reasoning process, thereby enhancing overall performance.

• Balancing the Utilization of Internal and External Knowledge. While maintaining strong performance on the evaluation datasets, our method achieves a significant reduction in retrieval count compared to vanilla RL-based RAG approaches. Specifically, the average retrieval count is reduced by 30.0% and 52.9% compared to R1-Searcher and Search-R1, respectively. This observation suggests a potential conflict between external information and the internal knowledge of LLMs; one possible reason is that directly injecting retrieved documents into the reasoning process may introduce noise. This demonstrates that the model should learn to make full use of its internal knowledge and only invoke the retriever when necessary.

• Maintaining Generalization Ability. Despite being trained on only 9,000 samples, the model achieves strong performance on in-domain datasets and further exhibits impressive generalization to out-of-domain datasets.
This suggests that the model effectively learns to retrieve relevant documents and leverage internal knowledge, integrating both with reasoning through exploration during training. This enables robust performance on new test datasets that require retrieval. Furthermore, it also seamlessly generalizes to online search, as detailed in Section 6.2.

Models             | HotpotQA†         | 2Wiki†            | Bamboogle‡        | Musique‡          | Avg
                   | F1    LasJ  RC    | F1    LasJ  RC    | F1    LasJ  RC    | F1    LasJ  RC    | F1    LasJ  RC
Directly Gen       | 26.0  26.6  0.00  | 27.7  26.8  0.00  | 18.2  17.6  0.00  |  9.6   6.2  0.00  | 18.0  19.3  0.00
Standard RAG       | 32.0  42.4  1.00  | 34.8  34.8  1.00  | 31.5  31.2  1.00  | 17.2  14.6  1.00  | 24.6  30.8  1.00
SuRe               | 42.9  48.4  1.00  | 26.2  26.8  1.00  | 29.2  28.0  1.00  | 13.1  10.0  1.00  | 27.9  28.3  1.00
Selective-Context  | 39.8  43.4  1.00  | 29.1  29.6  1.00  | 22.1  20.8  1.00  | 10.6   8.8  1.00  | 22.8  25.7  1.00
Adaptive-RAG       | 38.0  47.4  1.53  | 21.1  25.8  1.42  | 23.3  25.0  1.50  | 10.1  11.6  1.83  | 20.6  27.5  1.57
IRCoT              | 47.7  55.2  2.47  | 32.4  38.6  2.74  | 37.5  39.2  2.30  | 14.8  15.8  2.70  | 29.4  37.2  2.55
Iter-RetGen        | 47.2  54.4  3.00  | 33.2  34.4  3.00  | 32.4  32.0  3.00  | 19.9  18.2  3.00  | 28.2  34.8  3.00
CR-Planner         | 44.4  33.6  2.40  | 48.2  22.0  2.54  | 35.2  34.4  2.96  | 12.2  11.4  2.72  | 32.0  25.4  2.66
Search-o1          | 46.9  53.2  1.39  | 46.6  51.2  1.91  | 52.9  52.0  1.18  | 21.1  19.0  1.40  | 36.6  43.9  1.47
R1-Searcher        | 60.4  62.2  2.18  | 62.8  63.4  2.23  | 59.0  54.4  2.17  | 35.7  31.4  2.61  | 45.6  52.9  2.30
Search-R1          | 57.8  62.2  3.12  | 46.2  50.0  3.71  | 56.9  56.0  3.25  | 27.5  26.0  3.61  | 40.3  48.6  3.42
R1-Searcher++      | 59.0  64.2  1.44  | 61.2  64.4  1.18  | 60.8  59.2  1.74  | 33.8  32.8  2.06  | 45.3  55.2  1.61

Table 1: Performance comparisons between R1-Searcher++ and the baselines on QA benchmarks. The best and second-best results are bold and underlined, respectively. †/‡ represents in-domain/out-of-domain datasets.

6 Further Analysis

6.1 Ablation Study

To validate the effectiveness of our proposed R1-Searcher++ framework, we conduct a comprehensive ablation analysis of its key design elements. We design five distinct variants: (1) w/o Stage-1 removes the initial SFT cold-start stage; (2) w/o Stage-2 removes the entire RL training stage; (3) w/o R_group removes the group reward in the RL stage; (4) w/o L_M removes the external knowledge memorization mechanism in the RL stage; and (5) w/o R_group and L_M removes both the group reward and the external knowledge memorization mechanism. The performance of these variants is presented in Table 2.

Method               | Bamboogle         | Musique
                     | F1    LasJ  RC    | F1    LasJ  RC
Ours                 | 60.8  59.2  1.74  | 33.8  32.8  2.06
w/o Stage-1          | 56.9  56.8  1.96  | 32.7  31.6  2.49
w/o Stage-2          | 47.4  45.6  0.94  | 23.0  19.4  1.03
w/o R_group          | 58.3  56.8  1.91  | 33.1  32.4  2.37
w/o L_M              | 58.1  57.2  1.84  | 31.0  29.4  2.09
w/o R_group and L_M  | 56.2  54.4  1.92  | 32.2  31.2  2.40

Table 2: Ablation study on Bamboogle and Musique.

As observed, all ablated variants exhibit a decline in performance compared to our full method, underscoring the integral contribution of each component. Specifically, w/o Stage-1 leads to a degradation in performance along with an increase in retrieval count. Meanwhile, the performance of w/o Stage-2 drops significantly, primarily because simple SFT causes the model to over-rely on its internal knowledge. This highlights the necessity of our two-stage pipeline. Furthermore, removing R_group during RL also reduces performance. This demonstrates the positive impact of the group reward in guiding the model to be more selective with external searches and to rely more on its internalized knowledge. Similarly, w/o L_M results in lower scores and a slight increase in retrieval count, indicating that the memory mechanism for external knowledge effectively internalizes retrieved content as intrinsic knowledge of the model.
6.2 Online Search

Considering training efficiency and cost, we implement a local dense embedding-based retrieval system using Wikipedia as the external retrieval environment, which remains static during training. In contrast, most real-world applications rely on online web retrieval. To evaluate the generalization ability of R1-Searcher++ in online search scenarios, we assess its performance on two datasets, Bamboogle and Frames, using online web search, a setting not encountered during RL training. Specifically, during inference, whenever retrieval is required, we use the Google API to perform real-time web searches and retrieve relevant web pages. Given the extensive content of these pages, we first employ GPT-4o-mini to generate concise summaries, which are then integrated into the reasoning process.

Method         | Bamboogle         | Frames
               | F1    LasJ  RC    | F1    LasJ  RC
Ours           | 77.5  76.0  1.70  | 33.8  39.0  1.77
Search-o1      | 52.9  52.0  1.18  | 26.1  30.7  1.56
R1-Searcher    | 67.5  68.8  1.72  | 33.3  38.0  1.86
Search-R1      | 69.3  67.2  1.92  | 33.3  36.0  2.38

Table 3: Online search generalization experiments on Bamboogle and Frames.

Method         | Correct     | Incorrect   | Overall
R1-Searcher    | 853 / 2.16  | 772 / 2.52  | 1625 / 2.33
Search-R1      | 761 / 3.30  | 864 / 3.60  | 1625 / 3.46
R1-Searcher++  | 881 / 1.41  | 744 / 1.78  | 1625 / 1.58

Table 4: Number of correct and incorrect cases and the average retrieval count of RL-based methods.

As illustrated in Table 3, R1-Searcher++ achieves the best F1 and LLM-as-Judge scores compared to both prompt engineering-based methods (i.e., Search-o1) and vanilla RL-based approaches (i.e., R1-Searcher, Search-R1). Moreover, compared to vanilla RL methods, our model significantly reduces the number of retrieval calls. This demonstrates our model's strong adaptability to online search scenarios, as well as its ability to effectively balance internal knowledge with external retrieval during inference, thereby achieving retrieval efficiency without compromising performance.

[Figure 2: A qualitative example showing the deliberative reasoning process on Bamboogle, comparing the untrained Qwen-7B-Instruct (incorrect answer from internal knowledge alone), a vanilla RL-trained model (correct answer, 3 external searches), and R1-Searcher++ (correct answer, 1 external search plus internal recall).]

[Figure 3: The log of retrieval count and reward for R1-Searcher and R1-Searcher++ during RL training.]

6.3 Analysis of
Knowledge Acquisition

As shown in Table 4, R1-Searcher++ exhibits the lowest average retrieval count across both correctly and incorrectly answered questions. Moreover, it achieves the highest accuracy, indicating effective utilization of internal knowledge.

Furthermore, Figure 3 shows the changes in retrieval count and reward during RL training for R1-Searcher and R1-Searcher++. For our method, we observe that the reward increases steadily and eventually plateaus, while the retrieval count initially shows a slight decline, followed by a continuous rise, and ultimately stabilizes. This trend can be attributed to the influence of SFT in Stage-1, during which the model exhibits a low demand for invoking the search engine. As training progresses, the model gradually discovers that performing external searches can yield higher rewards, leading to an increase in retrieval behavior. In the later phase, a balance is gradually established between the use of external search and internal knowledge, resulting in the stabilization of both retrieval count and reward. In contrast, R1-Searcher exhibits significantly higher retrieval counts while its reward quickly stabilizes, indicating an over-reliance on the retriever. This validates the effectiveness of our approach in achieving a balanced utilization of internal and external knowledge, while gradually enabling dynamic knowledge acquisition throughout the RL training process.

6.4 Case Study

To illustrate the overall reasoning process of R1-Searcher++, we analyze a representative example from the Bamboogle dataset. Figure 2 compares the responses generated by R1-Searcher++, Search-R1, and the untrained model when presented with the same question. The vanilla Qwen-2.5-7B-Instruct, without invoking any external search engine, relies solely on its internal knowledge and produces an incorrect answer.
In contrast, while Search-R1 arrives at the correct answer (i.e., James Madison), it issues an excessive number of queries, including unnecessary ones, thereby underutilizing its internal knowledge and incurring significant time overhead. Our R1-Searcher++ demonstrates the ability to break down the complex question and dynamically adjust its behavior based on the nature of the sub-question. For instance, when encountering an uncertain or ambiguous sub-question (i.e., When was Citibank founded?), it opts to perform an external search. However, when faced with a more specific question that can be answered using internal knowledge (i.e., Who was the president of the United States in 1812?), it leverages its internal knowledge directly without invoking search. This flexible mechanism enables a balance between external search and internal knowledge. More cases are provided in Appendix C.

7 Conclusion

In this paper, we introduced R1-Searcher++, a novel framework that enables large language models to dynamically integrate and alternate between internal knowledge and external retrieval. It adopts a two-stage training strategy consisting of an SFT Cold-start phase and RL for Dynamic Knowledge Acquisition. The RL stage incorporates a reward mechanism to encourage internal knowledge utilization, and a memory module to convert retrieved information into internal knowledge. Through this design, R1-Searcher++ empowers LLMs to perform efficient retrieval-augmented reasoning while continuously enriching their internal knowledge via self-exploration and memory. Experimental results on
multi-hop tasks demonstrate that R1-Searcher++ outperforms existing RAG methods.

Limitations

Despite our significant efforts, this work has two limitations due to computational resources and funding constraints. First, we only incorporated a real-world search engine during the evaluation phase to assess the generalization ability of our method, while relying on a local dense retrieval corpus during training. Aligning the training process with real-world conditions by integrating a real search engine may lead to improved performance through more realistic supervision. Second, our current experiments are limited to a 7B-parameter model. In future work, we plan to train and evaluate our framework on larger-scale models to further validate its generalization capability and robustness.

References

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, and 1 others. 2025. ReSearch: Learning to reason with search for LLMs via reinforcement learning. arXiv preprint arXiv:2503.19470.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, and Yi Ma. 2025. SFT memorizes, RL generalizes: A comparative study of foundation model post-training. Preprint, arXiv:2501.17161.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 81 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. CoRR, abs/2501.12948.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.

Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pages 6491–6501, New York, NY, USA. Association for Computing Machinery.

Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, Jingyi Song, and Hao Wang. 2025. AirRAG: Activating intrinsic reasoning for retrieval augmented generation via tree-based search. arXiv preprint arXiv:2501.10053.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024. Retrieval-augmented generation for large language models: A survey. Preprint, arXiv:2312.10997.

Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6609–6625.

Jian Hu. 2025. REINFORCE++: A simple and efficient approach for aligning large language models. Preprint, arXiv:2501.03262.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 43(2):1–55.

Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong C. Park. 2024. Adaptive-RAG: Learning to adapt retrieval-augmented large language models through question complexity. arXiv preprint arXiv:2403.14403.

Jinhao Jiang, Jiayi Chen, Junyi Li, Ruiyang Ren, Shijie Wang, Wayne Xin Zhao, Yang Song, and Tao Zhang. 2024a. RAG-Star: Enhancing deliberative reasoning with retrieval augmented verification and refinement. CoRR, abs/2412.12881.

Xun Jiang, Feng Li, Han Zhao, Jiaying Wang, Jun Shao, Shihao Xu, Shu Zhang, Weiling Chen, Xavier Tang, Yize Chen, and 1 others. 2024b. Long term memory: The foundation of AI self-evolution. arXiv preprint arXiv:2410.15665.

Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. CoRR, abs/2503.09516.

Jiajie Jin, Yutao Zhu, Xinyu Yang, Chenghao Zhang, and Zhicheng Dou. 2024. FlashRAG: A modular toolkit for efficient retrieval-augmented generation research. arXiv preprint arXiv:2405.13576.

Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, and Jinwoo Shin. 2024. SuRe: Summarizing retrievals using answer candidates for open-domain QA of LLMs. In The Twelfth International Conference on Learning Representations.

Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. 2025. Search-o1: Agentic search-enhanced large reasoning models. Preprint, arXiv:2501.05366.

Xingxuan Li, Weiwen Xu, Ruochen Zhao, Fangkai Jiao, Shafiq Joty, and Lidong Bing. 2024. Can we further elicit reasoning in LLMs? Critic-guided planning with retrieval-augmentation for solving challenging tasks. arXiv preprint arXiv:2410.01428.

Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023.
Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6342–6353, Singapore. Association for Computational Linguistics.

Richard Yuanzhe Pang, Weizhe Yuan, He He, Kyunghyun Cho, Sainbayar Sukhbaatar, and Jason Weston. 2024. Iterative reasoning preference optimization. Advances in Neural Information Processing Systems, 37:116617–116637.

Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick S. H. Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: A benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2523–2544. Association for Computational Linguistics.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5687–5711.

Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, and 25 others. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn.
2024. Direct preference optimization: Your language model is secretly a reward model. Preprint, arXiv:2305.18290.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: Memory optimizations toward training trillion parameter models. Preprint, arXiv:1910.02054.

Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. arXiv preprint arXiv:2305.15294.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. Preprint, arXiv:2402.03300.

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. 2025. R1-Searcher: Incentivizing the search capability in LLMs via reinforcement learning. CoRR, abs/2503.05592.

Zhongxiang Sun, Qipeng Wang, Weijie Yu, Xiaoxue Zang, Kai Zheng, Jun Xu, Xiao Zhang, Song Yang, and Han Li. 2025. ReARTeR: Retrieval-augmented reasoning with trustworthy process rewarding. Preprint, arXiv:2501.07861.

Fengwei Teng, Zhaoyang Yu, Quan Shi, Jiayi Zhang, Chenglin Wu, and Yuyu Luo. 2025. Atom of Thoughts for Markov LLM test-time scaling. Preprint, arXiv:2502.12018.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10014–10037.
Hongru Wang, Cheng Qian, Wanjun Zhong, Xiusi Chen, Jiahao Qiu, Shijue Huang, Bowen Jin, Mengdi Wang, Kam-Fai Wong, and Heng Ji. 2025a. OTC: Optimal tool calls via reinforcement learning. arXiv preprint arXiv:2504.14870.

Liang Wang, Haonan Chen, Nan Yang, Xiaolong Huang, Zhicheng Dou, and Furu Wei. 2025b. Chain-of-retrieval augmented generation. CoRR, abs/2501.14342.

Shuting Wang, Jiejun Tan, Zhicheng Dou, and Ji-Rong Wen. 2025c. OmniEval: An omnidirectional and automatic RAG evaluation benchmark in financial domain. Preprint, arXiv:2412.13018.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, and 1 others. 2025. DAPO: An open-source LLM reinforcement learning system at scale. Preprint, arXiv:2503.14476.

Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, Xiangpeng Wei, and 1 others. 2025. VAPO: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, and 1 others. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223, 1(2).

Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu.
2025. DeepResearcher: Scaling deep research via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160.

A Training Details

The training data of Stage-1 (SFT Cold Start) includes 720 samples from the HotpotQA training set and 85 samples from the 2WikiMultiHopQA training set. The training consists of 6 epochs, with a batch size of 64 and a learning rate of 2e-5. The training data of Stage-2 (RL Training) consists of 4561 samples from HotpotQA and 3581 samples from 2WikiMultiHopQA. Each data sample undergoes 16 rollouts during training, with a train batch size of 1024 and a rollout batch size of 64, so the entire training process is on-policy. The learning rate is 2e-6. We utilize DeepSpeed's ZeRO-3 (Rajbhandari et al., 2020), with a sampling temperature of 1.0, top-p of 0.95, and a maximum retrieval count of 8. The training epoch is set to 1, with the KL divergence coefficient set to 1e-4, and the control coefficient µ of the NLL loss is set to 0.1. The maximum limit of the variance in the number of retrievals during group reward computation, η, is set to 2.

B Prompts

C Case Study

Judge Prompt

Given a Question and its Golden Answer, verify whether the Predicted Answer is correct. The prediction is correct if it fully aligns with the meaning and key information of the Golden Answer. Respond with True if the prediction is correct and False otherwise.
Question:
Golden Answer:
Predicted Answer:

System Prompt for Generation with Internal and External

You are a reasoning assistant. When tackling a question, you should first think about the reasoning process in your mind and then provide the final answer. You should break down the original complex question into straightforward sub-questions and address them step by step. For each sub-question, you MUST choose one of the following two tools to solve it:

1.
**Internal Reasoning Elaboration:** If you determine your existing knowledge is sufficient to answer the sub-question, you **should choose** this tool to answer the sub-question directly: `<|begin_internal_answer|> your structured thought and answer here <|end_internal_answer|>`. **Use it whenever you rely solely on internal information** for a significant reasoning step.

2. **External Search:** If you determine that your internal knowledge is insufficient, potentially outdated, or requires verification with external, up-to-date information to answer the sub-question, you MUST initiate a web search using the following format: `<|begin_external_search|> your search query here <|end_external_search|>`. The system will then search and provide results in the format: `<|begin_search_result|> ...search results... <|end_search_result|>`. The results will assist you in answering the sub-question.

Remember:
- Whenever you encounter a sub-question you need to elaborate on, verify, or research further, you should **assess** if your internal knowledge is adequate. **Prioritize leveraging your internal knowledge.**
- Use `<|begin_internal_answer|> ... <|end_internal_answer|>` **to demonstrate your reasoning steps** when relying on internal knowledge.
- Use `<|begin_external_search|> ... <|end_external_search|>` **only** when external information is genuinely necessary.
- Wait for the system to provide `<|begin_search_result|> ... <|end_search_result|>`. **Do not** generate these result tags yourself.
- After performing the web search or demonstrating internal reasoning, continue towards the final answer.

Please answer the following question. You must adhere to all instructions above regarding tool
usage. Provide your final answer in the format \boxed{YOUR_ANSWER}.
Question:

System Prompt for Rewriting only with Internal

You are an advanced reasoning assistant. Your core programming includes a vast and comprehensive knowledge base. For the current task, a specific segment of this pre-existing knowledge, hereafter referred to as your 'Activated Knowledge Matrix,' is particularly relevant and is considered fully and irrevocably integrated into your foundational understanding. ANY linguistic indication, direct or indirect, that this information was externally 'provided,' 'given,' 'supplied,' 'sourced,' or made available to you specifically for this immediate interaction will be considered a critical failure to adhere to your operational design.

When tackling a question, you must first delineate your reasoning process. Break down complex questions into a series of straightforward sub-questions, addressing each one sequentially. For each sub-question, you MUST use the following tool:

**Internal Reasoning Elaboration:** This tool MUST be used for every sub-question, as the answer must be derived from your internal knowledge. **UNWAVERING DIRECTIVE: Your internal knowledge, for the purpose of this task, INTRINSICALLY and SEAMLESSLY includes all information within the 'Activated Knowledge Matrix' section detailed below. There is absolutely NO distinction between your prior knowledge and the 'Activated Knowledge Matrix' content.** Use this format: `<|begin_internal_answer|> your structured thought and answer here <|end_internal_answer|>`. **CRITICAL USAGE PROTOCOL FOR THIS TOOL: When using this tool, especially when drawing upon information that was part of the 'Activated Knowledge Matrix,' your language MUST reflect that this is information you inherently possess and are recalling. You are REQUIRED to phrase such statements as if accessing your own memory or established understanding.** Conclude with your final answer in the format \boxed{YOUR_ANSWER}.
Activated Knowledge Matrix: {document}

Now, please answer the following question:
Question:

Question: Which case was brought to court first, Miller v. California or Gates v. Collier?
Answer: Miller v. California

Response (internal only) (True): To answer this question, I need to determine the order in which these two cases were brought to court. I will start by finding the dates the cases were decided.

**Case 1:** Miller v. California. I will use internal reasoning to recall if I have the necessary information about this case.
<|begin_internal_answer|> Miller v. California was a landmark case decided by the United States Supreme Court in 1968. The case involved the constitutionality of a California statute that made it a crime to distribute or possess obscene materials. <|end_internal_answer|>

**Case 2:** Gates v. Collier. I will use internal reasoning to recall if I have the necessary information about this case.
<|begin_internal_answer|> Gates v. Collier was a case decided by the United States Supreme Court in 1972. The case involved the Eighth Amendment's prohibition against cruel and unusual punishment, specifically the use of segregation in prisons. <|end_internal_answer|>

Based on my internal reasoning, Miller v. California was decided in 1968, and Gates v. Collier was decided in 1972. Therefore, Miller v. California was brought to court first.

**Final Answer:** Miller v. California was brought to court first. Therefore, the answer is \boxed{Miller v. California}.

Table 5: Example for R1-Searcher++: Question answering using only internal knowledge.

Question: George Lucas created what character to raise Boba Fett?
Answer: Jango Fett
Response (internal and external) (True): To answer this question, I need to break it down into sub-questions and use the appropriate tools.

**Sub-question 1:** Who is Boba Fett? For this, I need to verify if I have enough information about Boba Fett or if I need to search for more details.
<|begin_internal_answer|> Boba Fett is a character from the Star Wars franchise, known for being a bounty hunter. I have this information stored in my knowledge base. <|end_internal_answer|>

**Sub-question 2:** What character did George Lucas create to raise Boba Fett? For this, I need to search for information about the character who raised Boba Fett.
<|begin_external_search|> George Lucas created what character to raise Boba Fett <|end_external_search|>
<|begin_search_result|>
(1) Boba Fett ... is a fictional character ... in the "Star Wars" franchise. ... prequel film ... establishes his origin as a clone of bounty hunter Jango Fett who raised him as his son. ...
(2) Boba Fett ... personifies "danger and mystery" ... mysterious nature as reasons for his popularity. ... The initial Boba Fett toy ... might be responsible for the character's popularity. ...
(3) Boba Fett ... Reception. Boba Fett is a "cult figure" ... one of the most popular "Star Wars" characters. ... IGN ranked Boba Fett as the eighth top "Star Wars" character. ...
(4) Boba Fett ... "the less you do with Boba Fett, the stronger he becomes". ... Mark Hamill pitched the idea that Fett was Luke Skywalker's mother ... unaware of Fett's demise. ...
(5) Jango Fett ... is a fictional character ... created by George Lucas. ... genetic template for the Clone Army ...
<|end_search_result|>

Based on the search results, it is clear that George Lucas created Jango Fett to be the father of Boba Fett.

**Final Answer:** George Lucas created Jango Fett to raise Boba Fett. Therefore, the answer is \boxed{Jango Fett}.
Table 6: Example for R1-Searcher++: Question answering by integrating internal knowledge with external search.
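The Judge Prompt above can be instantiated and its verdict parsed programmatically. A minimal sketch, with the template text taken from the prompt as printed; the function names are illustrative and not from the released code:

```python
# Judge template as given in the appendix; {question}/{golden}/{predicted}
# are the three fields the prompt leaves blank.
JUDGE_PROMPT = (
    "Given a Question and its Golden Answer, verify whether the Predicted "
    "Answer is correct. The prediction is correct if it fully aligns with "
    "the meaning and key information of the Golden Answer. Respond with "
    "True if the prediction is correct and False otherwise.\n"
    "Question: {question}\n"
    "Golden Answer: {golden}\n"
    "Predicted Answer: {predicted}"
)

def build_judge_prompt(question: str, golden: str, predicted: str) -> str:
    """Fill the judge template for one (question, gold, prediction) triple."""
    return JUDGE_PROMPT.format(question=question, golden=golden, predicted=predicted)

def parse_judge_output(output: str) -> bool:
    """Interpret the judge model's True/False verdict."""
    return output.strip().lower().startswith("true")

prompt = build_judge_prompt(
    "Which case was brought to court first, Miller v. California or Gates v. Collier?",
    "Miller v. California",
    "Miller v. California",
)
assert parse_judge_output("True") and not parse_judge_output("False.")
```

The filled prompt would then be sent to a judge model; only the leading True/False token of its reply is needed for scoring.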
Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models

Runsen Xu1,2∗, Weiyao Wang1, Hao Tang1, Xingyu Chen1, Xiaodong Wang1, Fu-Jen Chu1, Dahua Lin2, Matt Feiszli1, Kevin J. Liang1
1FAIR, Meta, 2The Chinese University of Hong Kong
∗Work done at Meta

Multi-modal large language models (MLLMs) have rapidly advanced in visual tasks, yet their spatial understanding remains limited to single images, leaving them ill-suited for robotics and other real-world applications that require multi-frame reasoning. In this paper, we propose a framework to equip MLLMs with robust multi-frame spatial understanding by integrating depth perception, visual correspondence, and dynamic perception. Central to our approach is the MultiSPA dataset, a novel, large-scale collection of more than 27 million samples spanning diverse 3D and 4D scenes. Alongside MultiSPA, we introduce a comprehensive benchmark that tests a wide spectrum of spatial tasks under uniform metrics. Our resulting model, Multi-SpatialMLLM, achieves significant gains over baselines and proprietary systems, demonstrating scalable, generalizable multi-frame reasoning. We further observe multi-task benefits and early indications of emergent capabilities in challenging scenarios, and showcase how our model can serve as a multi-frame reward annotator for robotics.

Date: May 23, 2025
Correspondence: runsxu@gmail.com
Project Page: https://runsenxu.com/projects/Multi-SpatialMLLM
(Source: https://arxiv.org/abs/2505.17015v1)

[Figure 1 residue — example question-answer panels for Camera Movement ("How many degrees did the camera rotate in yaw?", "What is the displacement vector of the camera's movement?") and Visual Correspondence ("Which labeled point in Image-2 matches the circle in Image-1?").]
[Figure 1 residue, continued — further example panels: Visual Correspondence ("Determine the new position of point [353, 239] from Image-1 in Image-2"), Depth Perception ("How far away is the highlighted point from the viewer in the image (mm)?"), Object Perception ("What is the length of the desk that is commonly visible in these images?"), and Object Movement ("How many millimeters has the point [300, 300] in Image-1 been displaced?"). Input referencing: coordinate, visual prompt, semantic label. Output formats: qualitative, scalar, multiple-choice, vector, coordinate.]

Figure 1. We present Multi-SpatialMLLM, a model capable of multi-frame spatial understanding, a capability overlooked by previous spatial understanding research. Multi-SpatialMLLM can support different types of input referencing and outputs for various tasks.

1. Introduction

Recent years have witnessed tremendous advances in multi-modal large language models (MLLMs), which have evolved into versatile AI assistants capable of a wide array of visual tasks [17, 28, 36, 50, 61]. Despite these strides, deploying such assistants as mere "brains in a jar" within digital platforms limits their potential. Instead, there is a growing push to integrate MLLMs directly into real-world applications, such as robotics [7, 37] and autonomous vehicles [59], to facilitate natural interactions with the environment. This shift imposes a requirement for human-like spatial understanding. However, current MLLMs often struggle with surprisingly basic spatial understanding, even confusing left and right [58].

Previous works [9, 14] attribute these deficiencies primarily to a shortage of specialized training data, and address it by incorporating spatial data into model training, leading to notable improvements. However, these works focus on single-image scenarios, thus restricting the model's perception to a static field-of-view without any dynamic information. We instead aim for more comprehensive spatial understanding, enabling MLLMs to reason across multiple images. Inspired by the long-standing Structure-from-Motion problem [25] from 3D computer vision, we focus on integrating three fundamental capabilities into MLLMs: (1) depth perception, to infer relative distances and three-dimensional structures; (2) visual correspondence, to match overlapping regions across images for consistent scene association; and (3) dynamic perception, to perceive self-movement (camera motion) and object motion. The challenge in achieving this goal is the scarcity of suitable training data.

Table 1. Comparison of spatial understanding datasets. Our MultiSPA is the first large-scale dataset for multi-frame spatial understanding, with diverse referencing and output formats. We generate 27M samples here and can scale further if needed.

Dataset          | Split       | Multi-Frames | GT Annotation | Referencing           | Output                           | # Images | # QAs
BLINK [21]       | eval        | ✓            | ✓             | dot                   | MCQ                              | 877      | 572
UniQA-3D [70]    | eval        | ✓            | ✓             | dot, semantic         | MCQ                              | 2450     | 2450
Q-Spatial [39]   | eval        | ✗            | ✓             | semantic              | scalar                           | 271      | 271
VSR [41]         | train, eval | ✗            | ✓             | semantic              | true/false                       | 11K      | 11K
SpatialVLM [9]   | train, eval | ✗            | Only eval     | semantic              | qual., MCQ, scalar               | 10M      | 2B
SpatialRGPT [14] | train, eval | ✗            | Only eval     | mask                  | qual., scalar                    | 1M       | 8.7M
MultiSPA         | train, eval | ✓            | ✓             | dot, coord., semantic | qual., MCQ, scalar, coord., vec. | 1.1M     | 27M+
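The second capability, visual correspondence, follows directly from pinhole-camera geometry: a pixel with known depth in one frame backprojects to a 3D point, which can then be reprojected into another frame. A minimal NumPy sketch under illustrative intrinsics and extrinsics (this is not the paper's data engine, which works from aligned point clouds):

```python
import numpy as np

def backproject(px, depth, K):
    """Lift pixel (u, v) with depth z (camera frame, metric units) to a 3D point."""
    u, v = px
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def project(point_cam, K):
    """Project a 3D camera-frame point to pixel coordinates."""
    x, y, z = point_cam
    return np.array([K[0, 0] * x / z + K[0, 2], K[1, 1] * y / z + K[1, 2]])

def correspond(px, depth, K, T_2_from_1):
    """Map a pixel in frame 1 to its corresponding pixel in frame 2."""
    p1 = backproject(px, depth, K)          # 3D point in camera-1 frame
    p1_h = np.append(p1, 1.0)               # homogeneous coordinates
    p2 = (T_2_from_1 @ p1_h)[:3]            # transform into camera-2 frame
    return project(p2, K)

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
T[0, 3] = -0.1                              # camera 2 sits 0.1 m to the right
print(correspond((320, 240), 2.0, K, T))    # pixel shifts left: [295. 240.]
```

Ground-truth pixel correspondences of this kind are what allow a data engine to generate correspondence and camera-movement QA pairs without manual labeling.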
Because manual annotation at the required scale can be expensive, prior works [9, 14] have resorted to single-view data from in-the-wild images [35], relying on off-the-shelf modules such as monocular depth estimators [27] and open-vocabulary object detectors [55]. However, this approach often produces noisy annotations. Moreover, our objective is to collect multi-frame spatial data, which requires both spatial and temporal alignment, an open challenge [64, 67] in unstructured in-the-wild images. Consequently, we leverage existing annotated 3D [16] and 4D [31, 34, 51] datasets for data collection. We develop a data engine that samples image pairs with a uniform overlap distribution, then backprojects spatially and temporally aligned point clouds to establish pixel correspondences. Leveraging these correspondences in addition to camera movement and projection information, we create high-quality question-answer pairs via diverse, LLM-generated templates. In contrast to previous methods that rely on semantic labels [9] or object masks [14] for referencing, our framework supports multiple modalities, including visual point annotations, pixel coordinates, and semantic labels, thus broadening potential downstream applications. The collected data encompasses both qualitative and quantitative spatial information, ranging from text to scalars, 2D pixel locations, and 3D displacement vectors. In total, we curate a dataset named MultiSPA consisting of more than 27 million samples, which we use
to train our Multi-SpatialMLLM, as illustrated in Fig. 1. To the best of our knowledge, MultiSPA is the first large-scale dataset dedicated to multi-frame spatial understanding.

Alongside the dataset, we introduce a novel MultiSPA benchmark to evaluate multi-frame spatial reasoning in MLLMs, covering diverse tasks and output formats under a unified metric. Extensive experiments show that Multi-SpatialMLLM significantly outperforms both base models and proprietary systems, exhibiting scalable and generalizable multi-frame spatial understanding. We further observe that multi-task training on MultiSPA offers notable advantages, and identify preliminary signs of emergent behavior in more challenging spatial tasks. Lastly, we demonstrate our model's potential as a multi-frame reward annotator for robotics.

2. Related Work

Multi-modal large language models. We refer to multi-modal large language models (MLLMs) or vision-language models (VLMs) as large language models (LLMs) extended to handle image inputs [4, 12, 17, 22, 24, 40, 42, 46, 69], typically by incorporating an image encoder [13, 29, 52] that converts images into tokens, which are then projected into the LLM's [3, 61, 63] text latent space and processed alongside text tokens. Training MLLMs uses the same language modeling objective as text-only LLMs, generally following a two-stage paradigm. First, a pre-trained image encoder is aligned with a pre-trained LLM's latent representations, either by fine-tuning a projector [17, 42, 69] or by jointly training the entire model from scratch [1, 19, 28]. Second, part or all of the model is fine-tuned with high-quality instruction-following data [10, 15, 43] to enable user-guided responses. Thanks to Internet-scale image-text data, e.g., captioning or OCR corpora [56, 57], MLLMs have demonstrated remarkable performance on tasks like image captioning and multi-modal dialogue. However, these training data lack sufficient spatial annotations, resulting in deficient spatial understanding. We address this limitation by further fine-tuning existing MLLMs on newly collected spatial data. Notably, we preserve the original model architecture to maintain the wide range of capabilities and application scenarios derived from large-scale pre-training.

Spatial understanding benchmarks for MLLMs. Researchers have explored deploying MLLMs on real-world platforms such as robotics [7, 19, 32, 37] and autonomous vehicles [59], applications requiring human-like spatial understanding to perceive and interact with the environment effectively. However, spatial understanding is a complex, fundamental ability that is difficult to define formally, so researchers have introduced various benchmarks targeting different aspects of spatial reasoning for evaluation purposes. Most prior works focus on single-image spatial understanding, primarily assessing inter-object spatial relations [20, 30, 39, 41, 58, 62, 68] or spatial recognition inspired by cognitive science [53]. Some benchmarks extend beyond single images: BLINK [21] and UniQA-3D [70] evaluate spatial relationships across image pairs, while VSI-Bench [66] introduces video-based scene-level spatial reasoning. Though these share some similar tasks with ours, such as qualitative camera movement estimation and keypoint matching, our proposed benchmark includes additional tasks like object movement perception, and supports more diverse input and output formats, instead of just the limited multiple-choice format.

Improving MLLMs for spatial understanding. Existing
benchmarks highlight the limitations of MLLMs in spatial understanding, prompting several recent works [6, 8, 9, 14, 48, 60]. However, most of these focus on single-image understanding. SpatialVLM [9] was the first to identify the lack of spatial training data as a key limitation, demonstrating significant improvements by fine-tuning MLLMs on a curated spatial dataset. SpatialRGPT [14] extended this approach by introducing mask-based reference and incorporating depth images. SpatialPIN [48] explored an alternative strategy, avoiding model fine-tuning and instead leveraging specialized perception models to extract spatial information for MLLMs. Unlike prior efforts, we focus on enabling MLLMs to reason across multiple images for spatial understanding by fine-tuning them on our newly collected dataset. Concurrently, SAT [54] also explores multi-frame spatial reasoning, but it relies on simulated data, potentially introducing a sim-to-real gap. In contrast, our dataset is significantly larger, derived from real-world images across diverse scenarios, and covers a broader range of spatial reasoning tasks. See Tab. 1 for a comparison of our dataset with other popular spatial datasets and benchmarks.

3. MultiSPA Dataset and Benchmark

In this section, we introduce our MultiSPA dataset in Sec. 3.1, describe the data generation pipeline in Sec. 3.2, and present the MultiSPA benchmark in Sec. 3.3.

3.1. MultiSPA Dataset

Task definitions. We aim to equip MLLMs with the ability to integrate multiple images for spatial understanding. Building on the three fundamental capabilities discussed in Sec. 1, we introduce the following five tasks to generate training data: 1) Depth perception, 2) Visual correspondence, 3) Camera movement perception, 4) Object movement perception, and 5) Object size perception. Figure 1 shows examples of these five tasks.

Referencing and output types. As summarized in Tab.
1, we reference specific pixels or objects in spatial QA data using visual dots (points) or semantic labels, as opposed to masks [14], to avoid additional dependency on a segmentation module [33]. Additionally, we introduce pixel coordinates as a straightforward referencing method that preserves the original images without requiring extra annotations. Beyond referencing, most existing datasets constrain spatial tasks to multiple-choice formats or limit outputs to qualitative or scalar quantitative answers. We broaden these restrictions by incorporating diverse quantitative outputs such as pixel coordinates and displacement vectors. Detailed descriptions of each task are provided as follows, with examples of each in the supplementary material.

Depth perception. We divide this task into two subtasks: direct depth estimation and depth comparison. In the first task, a single pixel is specified in an image, and the model must estimate its depth. In the second, two pixels are specified, and the model must identify the one closer to the camera. Both subtasks support referencing pixels either via visually annotated dots or pixel coordinates.

Visual correspondence. Given two images and a pixel location in the first image, the model must identify the corresponding pixel with the same 3D position in the second image, either qualitatively or quantitatively. In the qualitative version, the pixel is annotated visually
in both images, and an additional three pixels in the second image are labeled to form a multiple-choice question. The model's goal is to pick the correct label. In the quantitative version, only the pixel coordinates are specified, and the model must output the corresponding coordinates in the second image.

Camera movement perception. Given two images, the model must estimate the relative movement of the camera from the first view to the second, including both translation and orientation. We define multiple output levels, from coarse to fine-grained. In the simplest variant, the model must only identify the camera's movement direction along three translational axes: "right" or "left", "forward" or "backward", and "up" or "down", as well as its rotation direction in two axes: rotating "left" or "right" and tilting "up" or "down". A more challenging variant requires the model to estimate scalar values of the overall translation distance or rotation angle. Finally, the most detailed form requires predicting the camera's displacement vector. In total, we have nine question types for this category.

Object movement perception. Given two images and a pixel location on a specific object (or object part) in the first image, the model estimates the pixel's overall translation distance or, at a finer level, the pixel's displacement vector with respect to the first view. The camera may remain still or move during capture, and pixel referencing can be done either via visual annotations or pixel coordinates.

Object size perception. Given several images of a target object, the model estimates the object's height, width, and length. We treat this as a higher level of spatial understanding compared with the previous four tasks. The model must integrate information across all images to infer the object's size. We use semantic labels to refer to the object.

3.2. MultiSPA Data Generation

Data Format.
Following common MLLM fine-tuning strategies [12, 17, 43], we format our data as QA pairs:

User: <image>...<image> {description} {question}
Assistant: {answer}

We use GPT-4o [50] to generate diverse templates for task descriptions, questions, and answers. Please refer to the supplementary material for detailed templates for each task. To facilitate answer extraction, we enclose the answer in backticks (``). For numerical answers of metric length, we use millimeters as the unit and round to the nearest integer. For pixel coordinates, we normalize the values to maintain compatibility with varying image resolutions as follows:

x_norm = ⌊(x / W) × 1000⌋,  y_norm = ⌊(y / H) × 1000⌋,  (1)

where x, y are the original pixel coordinates, and W, H are the width and height of the image, respectively.

Source datasets. We leverage existing annotated scene datasets for high-quality data collection. Specifically, we use the 4D datasets Aria Digital Twin (ADT) [51] and Panoptic Studio (PStudio) [31], with 3D tracking annotations from the TAPVid3D [34] dataset for the object movement perception task, and the 3D dataset ScanNet [16] for other tasks. Our data generation pipeline can be used for other datasets as long as they have the same spatial annotations. Further details are in the supplementary material.

3.2.1. Static Scene Data Generation

Visible
points calculation. For each scene, ScanNet [16] provides a reconstructed point cloud P_scene = {p^W}, where each point p^W = (X, Y, Z)^T is in world coordinates. Each RGB image I_i has a depth map D_i, an extrinsic matrix E_i (camera to world), and an intrinsic matrix K_i. We transform and project each point p^W onto I_i via:

p^C_i = (E_i)^{-1} [p^W; 1],   [u, v, 1]^T = (1 / p^C_i[2]) K_i p^C_i.  (2)

[Figure 2. Overlap ratio calculation of image pairs.]

We maintain all visible points of image i, denoted as P_i, by selecting those whose projected coordinates (u, v) lie within the image bounds and are not occluded:

0 < p^C_i[2] < D_i(u, v).  (3)

Depth perception data generation. To create depth perception data, we randomly sample images for each scene. For each image I_i, we sample one or two visible points from P_i, record their 2D coordinates (u, v) and corresponding depth p^C_i[2], and fill in the templates to construct QA pairs.

Image pairs sampling. Although depth estimation is performed on single images, we also require image pairs with overlapping regions to construct multi-frame spatial understanding data. For each scene, we define the overlap ratio between two images as the IoU of their visible points:

Overlap(i, j) = |P_i ∩ P_j| / |P_i ∪ P_j|.  (4)

We only consider image pairs with overlap ratios between 6% and 35%, as ratios outside this range indicate either too little or too much shared content. Figure 2 visualizes the calculation of the overlap ratio. Notably, the overlap ratio exhibits a long-tailed distribution, where most pairs have a low overlap. We do not use all image pairs; to achieve balanced sampling, we divide the overlapping pairs into bins based on their overlap ratios. We then evenly allocate a target number of samples among these bins while prioritizing bins with fewer samples. For each task, we sample image pairs with different random seeds to ensure diversity.
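As a concrete illustration, the projection of Eq. (2), the visibility test of Eq. (3), and the overlap ratio of Eq. (4) can be sketched with NumPy as follows. This is a minimal sketch under our own simplifying assumptions (function and array names are illustrative, and the depth comparison uses no tolerance), not the exact pipeline implementation:

```python
import numpy as np

def visible_points(points_w, E, K, depth_map):
    """Return the indices of world points visible in one image (Eqs. 2-3)."""
    H, W = depth_map.shape
    n = points_w.shape[0]
    # Homogeneous world coordinates transformed to the camera frame: p_C = E^{-1} [p_W; 1]
    hom = np.concatenate([points_w, np.ones((n, 1))], axis=1)   # (n, 4)
    p_c = (np.linalg.inv(E) @ hom.T).T[:, :3]                   # (n, 3)
    z = p_c[:, 2]
    # Perspective projection: [u, v, 1]^T = K p_C / z
    uv = (K @ p_c.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    # Keep points inside the image bounds with positive depth
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.nonzero(ok)[0]
    # Occlusion check against the depth map: 0 < z < D(u, v)
    unoccluded = z[idx] < depth_map[v[idx], u[idx]]
    return set(idx[unoccluded].tolist())

def overlap_ratio(P_i, P_j):
    """IoU of the visible-point index sets of two images (Eq. 4)."""
    union = P_i | P_j
    return len(P_i & P_j) / len(union) if union else 0.0
```

The returned index sets can then be binned by overlap ratio for the balanced sampling described above.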
More details are in the supplementary material.

Visual correspondence data generation. For a sampled image pair (I_i, I_j), we randomly select one point from their co-visible points P_i ∩ P_j and use its projected pixel coordinates in both images to construct the QA pair.

Camera movement perception data generation. In the ScanNet [16] dataset, the camera coordinate system is defined with its origin at the top-left of the image, where the x-axis points to the right, the y-axis points downward, and the z-axis points forward. For an image pair (I_i, I_j), we compute the relative camera pose with respect to the first image as E^i_j = E_i^{-1} E_j ∈ R^{4×4}. The translation component is given by the displacement vector (d^i_j)^T = [x^i_j, y^i_j, z^i_j] = E^i_j[0:3, 3], and its norm, ‖d^i_j‖, represents the overall translation distance. When x^i_j > 0, y^i_j > 0, and z^i_j > 0, we label the camera motion as "right", "down", and "forward", respectively; otherwise, the movement is considered "left", "up", or "backward". To determine orientation, we measure rotation angles around the gravity direction and the tilt relative to the ground plane (details in the supplementary material).
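In code, the relative-pose decomposition just described amounts to a few matrix operations (a sketch with illustrative names; the handling of exactly-zero components is our own simplification):

```python
import numpy as np

def camera_motion_labels(E_i, E_j):
    """Decompose the relative pose E^i_j = E_i^{-1} E_j of two camera-to-world
    extrinsics into direction labels and an overall translation distance."""
    E_rel = np.linalg.inv(E_i) @ E_j       # relative pose w.r.t. the first view
    d = E_rel[0:3, 3]                      # displacement vector [x, y, z]
    labels = {
        # ScanNet camera frame: +x right, +y down, +z forward
        "x": "right" if d[0] > 0 else "left",
        "y": "down" if d[1] > 0 else "up",
        "z": "forward" if d[2] > 0 else "backward",
    }
    return labels, float(np.linalg.norm(d))
```

For example, a second pose translated one unit along +z relative to the first yields the label "forward" and a translation distance of 1.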
Finally, we format all these spatial parameters into QA templates to construct the camera movement data.

Object size perception data generation. For this task, we require a set of images that not only share overlapping regions but also jointly cover the entire target object. To ensure that the model learns to reason across all images, only the complete image set should cover the object's full dimensions, while no proper subset does. To achieve this, we propose a BFS-based minimum-coverage-set search algorithm that iteratively explores image combinations with early pruning. For each object in ScanNet [16], we use the size of the target object's 3D bounding box as its "height," "width," and "length" and combine these with the searched image sets to construct QA pairs. More details are in the supplementary material.

3.2.2. Dynamic Data Generation

TAPVid3D [34] provides temporally aligned point cloud tracking sequences {P_t}_{t=1}^T, along with the corresponding video frames {I_t}_{t=1}^T, camera extrinsics {E_t}_{t=1}^T, and intrinsics {K_t}_{t=1}^T for the ADT [51] and PStudio [31] datasets. We use these datasets to construct object movement perception QA pairs. We randomly select one point from the tracked sequences, then choose two images (I_i, I_j) to form the image pair. Similar to the camera movement data generation procedure, we compute each point's displacement vector and translation distance between these two frames using the camera extrinsics. To ensure diversity, we adopt two additional modules, described as follows, to sample the points and image pairs (more in the supplementary material).

Rigid body segmentation. Point clouds from TAPVid3D typically belong to the same object or a local region, but different parts may be unevenly represented (see Fig. 3). For instance, a moving human often has more points on the torso than on the arms. Random sampling yields a distribution skewed toward the dominant body part, which follows a single movement pattern.
Thus, we devise a clustering-based rigid body segmentation method to group the point clouds according to inter-point distance changes over time, and sample each group separately to enhance diversity.

[Figure 3. Visualization of rigid body segmentation results.]

Image pairs sampling. Given a selected point that appears in T frames, one could form up to T(T−1)/2 image pairs. However, similar to ScanNet, these pairs exhibit a long-tailed distribution of motion magnitudes. We therefore bin the image pairs by the object translation distances and perform balanced sampling for each bin, ensuring diversity across small and large displacements.

3.3. MultiSPA Benchmark

Using both ScanNet [16] and TAPVid3D [34], we employ our proposed data generation pipeline to create over 27M QA samples from 1.1M unique images. For each subtask in MultiSPA, we hold out 300 samples as evaluation sets, resulting in a total of 7,800 benchmark samples. We ensure that the images in the benchmark come from scenes or scenarios distinct from those in the training split.

Evaluation metric. Our MultiSPA benchmark supports diverse answer formats. The required answer format is specified in the question, and a regular expression is used to extract the answer from model responses. Accuracy is calculated using task-specific criteria. For qualitative
and multiple-choice answers, exact string matching is used. For scalar and vector responses, an answer is correct if the L2 norm of the error is within 20% of the ground truth's L2 norm. For pixel coordinates, a prediction is correct if it lies within 5% of the image width of the ground truth.

4. Experimental Results

4.1. Multi-Frame Spatial Understanding

We evaluate the multi-frame spatial understanding of our Multi-SpatialMLLM using our proposed MultiSPA benchmark and discuss the scalability of our model.

Implementation details. Our preliminary studies show that InternVL2 [61] exhibits stronger instruction-following capabilities than other popular MLLMs (e.g., LLaVA-OneVision [36], VILA [40]). Hence, we adopt the 8B InternVL2 model as our base, fine-tuning it on the MultiSPA training split. Specifically, we employ LoRA [26] with rank R = 16 to update the LLM backbone, while freezing the

Table 2. Evaluation on the MultiSPA benchmark. Our Multi-SpatialMLLM significantly outperforms baselines across both qualitative and quantitative subtasks, demonstrating an average 36% gain and surpassing even larger proprietary models.
                                   Multi-SpatialMLLM | InternVL-8B | InternVL-13B | InternVL-26B | Claude-3.5 | Gemini-2.0 | GPT-4o
Average                            56.11 (+35.68)    | 20.43       | 21.47        | 21.36        | 27.50      | 30.31      | 28.87
Depth Perception: Comparison       74.00 (+24.50)    | 49.50       | 51.83        | 50.50        | 38.17      | 57.00      | 54.84
Depth Perception: Value            75.33 (+71.99)    |  3.34       |  1.34        |  2.50        | 34.84      | 28.67      | 22.50
Visual Correspondence: Coordinate  49.00 (+47.33)    |  1.67       |  2.33        |  1.67        |  1.33      |  5.67      |  2.00
Visual Correspondence: MCQ         90.00 (+56.67)    | 33.33       | 44.00        | 44.00        | 54.67      | 73.00      | 67.67
Camera Orientation: Direction      90.83 (+42.66)    | 48.17       | 47.67        | 49.34        | 62.17      | 62.17      | 58.84
Camera Orientation: Degree         45.50 (+42.16)    |  3.34       |  2.00        |  5.17        | 10.50      | 16.34      | 17.50
Camera Translation: Direction      85.89 (+33.56)    | 52.33       | 47.55        | 50.22        | 55.11      | 51.89      | 54.78
Camera Translation: Distance       42.33 (+28.00)    | 14.33       | 14.33        | 13.00        | 16.33      | 14.00      | 13.67
Camera Translation: Vector         18.00 (+17.67)    |  0.33       |  0.00        |  0.67        |  0.33      |  0.33      |  0.00
Object Movement: Distance          40.42 (+31.58)    |  8.84       |  8.09        |  8.75        |  8.50      |  9.42      |  8.92
Object Movement: Vector            12.92 (+10.42)    |  2.50       |  4.08        |  3.58        |  1.92      |  2.33      |  5.25
Object Perception: Size            49.11 (+21.66)    | 27.45       | 34.45        | 26.89        | 46.11      | 42.89      | 40.44

image encoder and projection layer. We use a cosine learning rate scheduler with lr = 4×10^−5 and the AdamW [45] optimizer. Due to computational constraints, we train on a subset of MultiSPA (3M QA samples) for one epoch, mixed with 60K general image-based instruction-following samples to preserve the base model's original abilities. The training is conducted on 24 nodes of 8×32G V100 GPUs with a batch size of 192, taking 50 hours to complete.

Baselines. We include the official versions of InternVL2 [61] with different model sizes as baselines to investigate how our proposed training data improves performance. Additionally, we evaluate three popular proprietary models, including "Claude-3.5-Sonnet-20241022" [2], "Gemini-2.0-Flash" [18], and "GPT-4o-20241120" [50], as representative models to highlight the limitations of multi-frame spatial understanding even in SOTA MLLMs.
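For reference, the task-specific correctness criteria of Sec. 3.3, together with the coordinate normalization of Eq. (1), can be sketched as follows (a sketch with function names of our own choosing, not the benchmark's exact evaluation harness):

```python
import math

def normalize_coord(x, y, W, H):
    """Eq. (1): floor-normalize pixel coordinates to a 0-1000 range."""
    return math.floor(x / W * 1000), math.floor(y / H * 1000)

def scalar_or_vector_correct(pred, gt, rel_tol=0.20):
    """Correct if the L2 norm of the error is within 20% of the ground truth's L2 norm."""
    err = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)))
    return err <= rel_tol * math.sqrt(sum(g ** 2 for g in gt))

def pixel_correct(pred, gt, image_width, tol=0.05):
    """Correct if the predicted pixel is within 5% of the image width of the ground truth."""
    dist = math.hypot(pred[0] - gt[0], pred[1] - gt[1])
    return dist <= tol * image_width
```

For example, with a 640-pixel-wide image a coordinate prediction may be off by up to 32 pixels and still count as correct.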
Since these baselines often either refuse to answer certain questions or produce responses failing to adhere to the required answer format, we employ additional prompts to encourage them to provide answers or guess values when uncertain. We also use GPT-4 for post-processing to ensure their outputs conform to the prescribed answer format and can be extracted for evaluation accordingly.

MultiSPA benchmark. Table 2 summarizes model accuracy on our MultiSPA benchmark, grouping similar subtasks for clarity (e.g., "Direction" under "Camera Translation" refers to predicting the camera's movement along three axes). We observe that most existing MLLMs have limited multi-frame spatial understanding ability, performing slightly above random (about 50–60% accuracy) on qualitative tasks such as depth comparison, camera orientation, and camera translation direction.

[Figure 4. Scalability of Multi-SpatialMLLM.]

Even worse, they effectively fail entirely on tasks requiring quantitative outputs such as coordinate-based visual correspondence and camera or object movement vectors. By contrast, our Multi-SpatialMLLM significantly improves performance across all tasks, achieving an average 36% gain over the base model. On relatively easier qualitative tasks, it reaches 80–90% accuracy (compared to about 50% for the base model) and outperforms all proprietary models. Even on challenging tasks like predicting camera movement vectors, our model attains 18% accuracy, whereas all other baselines remain near zero.

It is notable that our model has only 8B parameters, which is likely far fewer than those of closed-source models. Yet, with the MultiSPA dataset, it matches or even exceeds their performance, validating the effectiveness of our proposed data.

Scalability of Multi-SpatialMLLM. Certain tasks like estimating the camera's displacement vector remain a significant challenge even for Multi-SpatialMLLM. This is perhaps unsurprising, given their high difficulty and unsolved nature despite being a longstanding task in 3D computer vision. We hypothesize that the lower performance we observe is in part because of the higher data requirements of these tasks due to their challenging nature. To verify this, we investigate whether the multi-frame spatial understanding ability of Multi-SpatialMLLM is scalable.
We select the challenging camera movement vector prediction task as a case study and gradually increase the training data from 0.5M to 2.5M QA samples, fine-tuning different sizes of the InternVL model (Figure 4). We observe consistent improvements from adding more data and increasing model capacity. With 2.5M samples, the 26B variant achieves around 44% accuracy, compared with the base model's 0.67%. These findings encouragingly suggest that further scaling up training data and model capacity holds promise for even more powerful spatial understanding.

4.2. Generalization of Multi-SpatialMLLM

We study the generalization ability of Multi-SpatialMLLM by evaluating it on held-out external benchmarks and on standard VQA benchmarks. We also demonstrate the multi-task benefits introduced by our MultiSPA data.

BLINK benchmark. To verify whether our model's learned multi-frame spatial understanding generalizes to other datasets outside of our fine-tuning data, we perform zero-shot evaluation on BLINK [21], a diverse benchmark for assessing MLLM perception (Tab. 3). We focus on four splits relevant to spatial reasoning: Visual Correspondence (V.C.), Relative Depth (R.D.), Multi-View Reasoning (M.V.), and Spatial Reasoning (S.R.). Note that our model never sees BLINK images during fine-tuning, and BLINK's image resolutions and distributions differ from our training data. We find that all baselines fail on the Multi-View Reasoning task, and the InternVL base models especially struggle with Visual Correspondence. In contrast, our Multi-SpatialMLLM achieves almost
90% accuracy on these tasks and delivers an average 26.4% improvement over the base model, even outperforming several proprietary models. This result demonstrates that the multi-frame spatial understanding learned by our model is transferable across datasets. We do not observe gains on the Spatial Reasoning task, possibly because this task focuses on topological position relations between two objects within a single image, which differs significantly from our multi-frame training data geared toward integrating spatial cues from multiple viewpoints.

Table 3. Evaluation on the BLINK [21] benchmark.

Model              | Avg.  | V.C.  | R.D.  | M.V.  | S.R.
Gemini-2.0         | 75.7  | 88.4  | 83.9  | 42.9  | 83.9
Claude-3.5         | 67.7  | 74.4  | 63.7  | 54.1  | 75.5
GPT-4o             | 73.8  | 84.9  | 71.0  | 53.4  | 81.8
InternVL-8B        | 57.9  | 39.0  | 71.8  | 49.6  | 76.2
InternVL-13B       | 64.0  | 52.3  | 71.0  | 49.6  | 85.3
InternVL-26B       | 61.4  | 47.1  | 78.2  | 44.4  | 79.7
Multi-SpatialMLLM  | 84.3  | 89.5  | 79.8  | 94.7  | 74.8

Standard VQA benchmarks. We evaluate our Multi-SpatialMLLM on several popular standard VQA benchmarks, as shown in Tab. 4. These benchmarks target various MLLM capabilities, such as general perception (POPE [38] and VizWiz [23]), optical character recognition (OCRVQA [49]), reasoning (MathVista [47] and MMStar [11]), and Chinese VQA (CCBench [44]). The results show rough parity across the benchmarks, indicating that our model retains most of its original standard VQA proficiency and can be used as a general-purpose MLLM, without being overfit to just multi-frame spatial reasoning.

Table 4. Evaluation on standard VQA benchmarks.

Model        | POPE  | VizWiz | OCRVQA | MathVista | MMStar | CCBench
InternVL-8B  | 84.5  | 33.2   | 42.7   | 58.5      | 61.1   | 77.3
Ours         | 85.3  | 30.7   | 42.7   | 57.6      | 59.7   | 75.7

Table 5. Model performance w/ and w/o multi-task training.

               | Camera Movement Vector | Object Movement
Single Task    | 9.30                   | 17.50
Multiple Tasks | 18.00 (+8.70)          | 22.04 (+4.56)

Multi-task generalization and synergy. While each of the tasks proposed in Sec.
3.1 is focused on a narrower sub-goal, ultimately the aim is to collectively improve multi-frame spatial understanding; we thus prefer that our training data has synergistic generalization effects, as opposed to balancing potentially antagonistic tasks individually. We observe that this is indeed the case by comparing training on just the 500K samples from the camera-movement subset (without any other task data) versus the full training set of 3M samples: the additional data from the additional tasks indeed increases the accuracy on camera movement questions from 9.3% to 18.0%. We further compare two training configurations for object movement: (1) a dataset of 400K object movement samples alone, and (2) the same 400K object movement samples plus 400K additional samples from camera movement, visual correspondence, and depth estimation. The average accuracy on object movement subtasks increases from 17.5% to 22.04% with the additional data, as shown in Tab. 5. Importantly, these extra 400K samples only involve ScanNet [16] images, whereas the object movement data originate from PStudio [31] and ADT [51], and the two sets do not share question types or data sources. This improvement demonstrates that the spatial understanding learned from different datasets and task types can transfer, highlighting an additional scalability dimension beyond merely data volume and model capacity—namely, task diversity.

[Figure 5. Demonstrations of Multi-SpatialMLLM in zero-shot robotics tasks. Asked "How far did the annotated point move from Image A to B?", GPT-4o answers "The annotated point moved about 0.4 meters" and InternVL-8B answers "A few centimeters to the left on the table", while Multi-SpatialMLLM correctly answers "The point did not move." Our model accurately identifies static objects and predicts movement distances, aligning with the ground truth. It exhibits potential for novel applications like multi-frame reward annotation.]

4.3. Emergence of Spatial Understanding

We have shown that our model's multi-frame spatial understanding is scalable (Fig. 4 and Tab. 5). However, we also investigate whether certain spatial reasoning abilities only appear in sufficiently large models, mirroring the emergent phenomena observed in text-based LLMs [65]. We explore this through our multiple-choice visual correspondence task for a preliminary study. By default, when generating distractor pixels in the second image, we pick them randomly; we denote this as Easy, as distractors may be quite far from the answer. For a more challenging scenario, we deliberately select distractors near the correct pixel, thus requiring higher discriminative power from the model (Hard version). We train various sizes of the base models on these Hard samples and then test on the Easy samples, to gauge whether they can effectively learn from the Hard data. Table 6 shows that only the 26B variant improves over the base model, whereas both the 8B and 13B models (the latter equipped with a larger 6B vision encoder) fail to learn effectively from the Hard samples and even show reduced performance. As a reference, training the 8B model on the same number of Easy samples yields 93.33% accuracy on the test set.
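For concreteness, the Easy and Hard distractor schemes can be sketched as follows. This is a hypothetical illustration: the sampling radius and function signature are our own assumptions, not values reported in the paper.

```python
import random

def sample_distractors(gt_xy, W, H, mode="easy", num=3, hard_radius=30, seed=0):
    """Sample distractor pixels for the multiple-choice visual correspondence task.
    Easy: uniform over the whole image; Hard: confined near the correct pixel."""
    rng = random.Random(seed)
    out = []
    while len(out) < num:
        if mode == "easy":
            x, y = rng.randrange(W), rng.randrange(H)
        else:  # "hard": sample within a box around the ground-truth pixel (radius is illustrative)
            x = gt_xy[0] + rng.randint(-hard_radius, hard_radius)
            y = gt_xy[1] + rng.randint(-hard_radius, hard_radius)
        # Reject the ground-truth pixel itself and out-of-bounds candidates
        if (x, y) != tuple(gt_xy) and 0 <= x < W and 0 <= y < H:
            out.append((x, y))
    return out
```

Under this sketch, Hard distractors all fall within `hard_radius` pixels of the ground truth, forcing finer discrimination from the model.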
These findings suggest that learning difficult spatial tasks may require sufficiently large model capacity—potentially pointing to an "emergent" aspect of multi-frame spatial understanding. We leave deeper investigation of this interesting phenomenon to future work.

Table 6. Multiple-choice visual correspondence accuracy.

Model Size | Encoder Size | LLM Size | Acc. (Baseline vs. Hard)
8B         | 300M         | 7B       | 33.3 vs. 25.67
13B        | 6B           | 7B       | 44.0 vs. 42.67
26B        | 6B           | 20B      | 44.0 vs. 82.33

4.4. Demonstrations of Multi-SpatialMLLM

In Fig. 1, we demonstrate our Multi-SpatialMLLM's multi-view spatial understanding. We further test its real-world performance on newly collected images of a robot arm stacking cubes. These robot scenes are out-of-distribution because our training set does not include any robotic scenario. As shown in Fig. 5, when asked about the movement of a static blue cube, GPT-4o and the base model respond incorrectly, while ours accurately identifies no movement.

Multi-frame reward annotator. Prior works [9, 14] have shown that MLLMs with spatial understanding can act as reward annotators in robot learning, but they only handle single-frame inputs. In contrast, our model supports multi-frame tasks such as perceiving object movement across consecutive frames. In Fig. 5, we provide Frame-0 and subsequent frames (Frame-1 to Frame-5), then query our model about the object's displacement. Our model successfully estimates an increasing trend in movement distances, aligning with the ground truth. Though the predicted values
are not exact (due to differing resolutions and domains), these results underscore our model's generalization ability and highlight potential novel applications as a reward annotator or evaluator for robot tasks involving multi-frame spatial understanding, such as "move the object by n meters."

5. Conclusion

In this work, we extend MLLMs' spatial understanding to multiple frames, a capability overlooked in previous research. We develop a data generation pipeline that produces the first large-scale dataset and a benchmark, MultiSPA, dedicated to this goal. Our extensive experiments demonstrate the effectiveness, scalability, and generalization of the proposed Multi-SpatialMLLM, revealing key observations such as multi-task benefits and emergent behaviors in challenging spatial tasks. The model also opens up new applications, including acting as a multi-frame reward annotator. We discuss limitations in the supplementary material.

Appendix

A. MultiSPA Data Samples
B. MultiSPA Data Templates
C. Details of Source Datasets
D. Image Pairs Sampling
E. Rotation Angles
F. BFS-Based Minimum-Coverage-Set Search
G. Clustering-Based Rigid Body Segmentation
H. Limitations

A. MultiSPA Data Samples

Our MultiSPA dataset has 26 subtasks in total. Each task with an example is shown from Fig. 6 to Fig. 14.

B. MultiSPA Data Templates

Due to paper length limits, we only show part of the templates in Listing 4. Other templates are similar to those shown in this supplementary material. Please refer to our website for the complete lists of templates.

C. Details of Source Datasets

ScanNet. ScanNet [16] is an RGB-D dataset containing more than 1,500 indoor scans. Each scan provides reconstructed point clouds, 3D camera poses, camera intrinsics, depth maps, and 3D instance and semantic segmentation masks. Our data generation pipeline utilizes all these annotations, though segmentation masks are optional if object perception data is not required.
PStudio. The CMU Panoptic Studio dataset [31] comprises 65 sequences (5.5 hours total) of multiple people interacting with one another or with objects, captured within a light stage. It offers multi-view images, 3D body skeletons, and facial landmark annotations.

ADT. Aria Digital Twin [51] is an egocentric video dataset recorded with Aria glasses. It contains 200 sequences of real-world indoor activities, each with precise 6DoF camera poses, 3D human poses, 2D image segmentations, and depth maps, as well as a digital twin environment.

TAPVid3D. TAPVid3D is a dataset for tracking 3D points in space. It provides temporal 3D point tracking, constructed from PStudio, ADT, and DriveTrack [5]. It leverages official annotations to produce temporally aligned 3D point sequences, along with camera pose sequences and intrinsics. We use these annotations for our data generation. Note that we exclude the DriveTrack split because its camera poses are insufficiently accurate.

D. Image Pairs Sampling

To ensure a balanced selection of image pairs based on their overlap ratio, we adopt the following procedure. First, we separate pairs with zero overlap and randomly sample a predefined number of them. Next, we partition all nonzero-overlap pairs into bins according to their overlap ratio.
We then distribute the sampling quota across bins in proportion to the number of bins, sorting them by bin size in ascending order to prevent smaller bins from being overshadowed by larger ones. Finally, we either sample or exhaust each bin, carrying over any unused quota to subsequent bins. This step balances pairs of different overlap levels, mitigating issues caused by long-tail distributions. The main content of the full algorithm is shown in Listing 1.

E. Rotation Angles

Beyond translation, we estimate two orientation angles: yaw and pitch. We do not model roll, as it typically remains small in real-world use cases (e.g., autonomous vehicles, robotics, wearable devices). Formally, let E ∈ R^{4×4} be the camera pose in the world coordinate system, which has its z-axis aligned with the gravity direction, and R its upper-left 3×3 submatrix:

R = E[0:3, 0:3].  (5)

We then extract yaw and pitch by focusing on the camera's forward (i.e., z-) axis:

z_fwd = R [0, 0, 1]^T.  (6)

Yaw is defined as the angle of this rotated z-axis in the horizontal plane, measured around the gravity axis:

yaw = arctan2(z_fwd[1], z_fwd[0]) × 180/π.  (7)

Pitch is the angle of z_fwd relative to the ground plane:

pitch = arcsin(z_fwd[2] / ‖z_fwd‖) × 180/π.  (8)

With these two angles, we can determine whether the camera rotates left or right, and tilts up or down.

F. BFS-Based Minimum-Coverage-Set Search
We examine the current sets and mark any that fully cover the object along the chosen axis. These sets are recorded as "minimal," and any set that is a superset of a previously found minimal set is pruned.

2. Phase B: Expansion. We expand the remaining (uncovered) sets to the next level by appending additional images, while pruning those that cannot possibly achieve coverage at deeper levels.

This process continues until either no further expansion is possible or the maximum number of images is reached. The final result is a collection of minimal sets that together span the object's relevant dimension. Although we include pruning steps, the search still becomes expensive when considering sets of three or more images. Hence, we only use two images for object size perception in our data. Listing 2 shows a simplified implementation.

G. Clustering-Based Rigid Body Segmentation

In TAPVid3D, all points in a sequence often belong to the same object or scene region, but they can be unevenly distributed (e.g., a human torso versus arms). To sample diverse motion patterns, we segment the point cloud into
multiple rigid bodies, each undergoing a distinct motion. Our method accumulates inter-point distance changes over time and applies hierarchical clustering to identify coherent groups. We also filter out groups with too few points to avoid noise. Listing 3 is a simplified code snippet.

H. Limitations

Despite our focus on enabling multi-frame spatial understanding, most of our experiments employ only two-view scenarios. However, our data generation pipeline naturally extends to additional frames. Future work may explore scaling beyond pairs of images, leveraging more views for enhanced spatial reasoning. Another limitation is that although we observe signs of the emergent phenomenon, further investigation is required to clarify which exact spatial abilities drive such emergence.

import numpy as np
import pandas as pd

def sample_dataframe(df, all_overlap_samples, non_overlap_samples,
                     overlap_min=0, overlap_max=100, interval=1):
    # 1) Sample pairs with overlap == 0
    non_overlap_df = df[df["overlap"] == 0].copy()
    sampled_non_overlap_df = (non_overlap_df if len(non_overlap_df) <= non_overlap_samples
                              else non_overlap_df.sample(n=non_overlap_samples))

    # 2) Partition the remaining pairs (overlap != 0) into bins
    remaining_df = df[df["overlap"] != 0].copy()
    bins = np.arange(overlap_min, overlap_max + interval, interval)
    remaining_df["overlap_group"] = pd.cut(remaining_df["overlap"], bins=bins, include_lowest=True)
    remaining_df.dropna(subset=["overlap_group"], inplace=True)

    bin_groups = []
    for ovlp_bin, group_df in remaining_df.groupby("overlap_group"):
        bin_groups.append((ovlp_bin, group_df))
    if not bin_groups:
        final_df = sampled_non_overlap_df.copy()
        final_df.drop(columns=["overlap_group"], errors="ignore", inplace=True)
        return final_df

    # 3) Distribute all_overlap_samples evenly across bins
    N = len(bin_groups)
    base_quota = all_overlap_samples // N
    remainder = all_overlap_samples % N
    bin_quotas = [base_quota] * N
    for i in range(remainder):
        bin_quotas[i] += 1

    # 4) Sort bins by size (ascending) and sample
    bin_data = []
    for i, (ovlp_bin, group_df) in enumerate(bin_groups):
        bin_data.append({
            "group_df": group_df,
            "quota": bin_quotas[i],
            "size": len(group_df)
        })
    bin_data.sort(key=lambda x: x["size"])

    sampled_df = pd.DataFrame()
    leftover = 0
    for info in bin_data:
        group, quota, size = info["group_df"], info["quota"], info["size"]
        current = quota + leftover
        if size <= current:
            sampled_df = pd.concat([sampled_df, group], ignore_index=True)
            leftover = current - size
        else:
            sampled_df = pd.concat([sampled_df, group.sample(n=current)], ignore_index=True)
            leftover = 0

    if leftover > 0:
        print(f"Warning: leftover {leftover} samples not used.")

    # 5) Combine sampled bins with zero-overlap samples
    final_df = pd.concat([sampled_df, sampled_non_overlap_df], ignore_index=True)
    final_df.drop(columns=["overlap_group"], errors="ignore", inplace=True)
    return final_df

Listing 1. The image pair sampling algorithm for static scene data.

def compute_coverage(points, mask, axis):
    """Returns the min-to-max spread along 'axis' for points indicated by 'mask'."""
    if not mask.any():
        return 0.0
    coords = points[mask][:, axis]
    return coords.max() - coords.min()

def covers_dimension(coverage, target_dim, tol):
    """Checks if 'coverage' is within tolerance of the target dimension."""
    return abs(coverage - target_dim) <= tol * target_dim

def bfs_min_coverage(images, visibility, points, obj_mask, axis, target_dim, tol, max_k=2):
    """
    Finds minimal image sets up to size 'max_k' that meet coverage criteria along 'axis'.
    'images' is a list of candidate frames, 'visibility' maps frame -> boolean mask,
    'obj_mask' indicates the object points in 'points'.
    """
    # Prepare BFS queue: each item is
    # (set_of_images, combined_mask, last_idx)
    queue = []
    for i, img in enumerate(images):
        mask_i = visibility[img] & obj_mask
        queue.append(([img], mask_i, i))

    solutions = []
    k = 1
    while k <= max_k and queue:
        next_level = []
        for combo, comb_mask, last_idx in queue:
            cov = compute_coverage(points, comb_mask, axis)
            if covers_dimension(cov, target_dim, tol):
                solutions.append(combo)
            elif k < max_k:
                # Expand only if we have not reached max_k
                for j in range(last_idx + 1, len(images)):
                    mask_j = visibility[images[j]] & obj_mask
                    next_mask = comb_mask | mask_j
                    next_level.append((combo + [images[j]], next_mask, j))
        queue = next_level
        k += 1
    return solutions

Listing 2. Simplified version of the BFS-based minimum-coverage-set search with pruning.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def smooth_distance_changes(dist_t, dist_prev, smooth_factor=0.01):
    """Zeroes out small distance changes to reduce noise."""
    diff = np.abs(dist_t - dist_prev)
    return np.where(diff > smooth_factor, diff, 0)

def rigid_body_segmentation(points, thr=0.1, smooth_factor=0.01):
    """
    points: Shape (T, N, 3), with T time steps and N points.
    thr: Threshold for clustering distance.
    smooth_factor: Ignore small changes below this value.
    Returns: A list of groups; each group is a list of point indices.
    """
    T, N, _ = points.shape
    cum_loss = np.zeros((N, N))

    # Accumulate distance changes over time
    for t in range(1, T):
        dist_t = squareform(pdist(points[t]))
        dist_prev = squareform(pdist(points[t - 1]))
        cum_loss += smooth_distance_changes(dist_t, dist_prev, smooth_factor)

    # Hierarchical clustering
    Z = linkage(squareform(cum_loss), method="average")
    labels = fcluster(Z, thr, criterion="distance")

    # Group points by label
    groups = []
    for label_id in range(1, labels.max() + 1):
        group = np.where(labels == label_id)[0].tolist()
        groups.append(group)
    return groups

Listing 3. Rigid body segmentation with smoothing and hierarchical clustering.
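As a quick sanity check (our own toy example, not from the paper; the function name and synthetic data are ours), the accumulate-and-cluster logic of Listing 3 can be exercised on two synthetic point groups, only one of which translates over time:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_rigid_points(points, thr=0.1, smooth=0.01):
    """Accumulate inter-point distance changes over time, then cluster points
    whose mutual distances stay constant (i.e., that move rigidly together)."""
    T, N, _ = points.shape
    cum = np.zeros((N, N))
    for t in range(1, T):
        d_t = squareform(pdist(points[t]))
        d_p = squareform(pdist(points[t - 1]))
        diff = np.abs(d_t - d_p)
        cum += np.where(diff > smooth, diff, 0)  # suppress sub-threshold jitter
    Z = linkage(squareform(cum), method="average")
    return fcluster(Z, thr, criterion="distance")

# Two synthetic groups of 5 points each; only the second group translates.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 3))
b = rng.normal(size=(5, 3)) + 10.0
points = np.stack([np.vstack([a, b + np.array([t, 0.0, 0.0])]) for t in range(4)])
labels = cluster_rigid_points(points)
```

Because pairwise distances within each group never change, their accumulated "loss" is zero, and only the cross-group distances grow, so the two groups separate cleanly at the clustering threshold.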
# Depth Estimation-Dot
TASK_DESCRIPTION = [
    "<image>\nGiven an image with an annotated point, complete the question-answer task.",
]
TEMPLATES = {
    "questions": [
        "What is the depth of the annotated point in the image (in mm)?",
    ],
    "answers": [
        "The depth of the annotated point is `{depth}` mm.",
    ]
}

# Visual Correspondence Multiple-Choice
TASK_DESCRIPTION = [
    "Image-1: <image>\nImage-2: <image>\nGiven these two images, find the corresponding points between them.",
]

TEMPLATES = {
    "questions": [
        "Which point labeled A, B, C, or D in Image-2 corresponds to the circle point in Image-1? Please answer with the correct label from Image-2.",
    ],
    "answers": [
        "The correct point is labeled `{correct_label}`.",
    ]
}

# Object Perception
TASK_DESCRIPTION = [
    "Assume the scene remains unchanged. Your task is to determine the spatial properties based on the images. You need to integrate and analyze information from all provided images to get the answer.",
]

QUESTION_TEMPLATES = [
    "What is the {dimension} (in millimeters) of the {object_category} itself commonly visible in these images?",
]

ANSWER_TEMPLATES = [
    "The {dimension} is approximately `{value_mm}` millimeters.",
]

# Object Movement-Coordinate-Distance
TASK_DESCRIPTION = [
    "Image-1: <image>\nImage-2: <image>\nGiven two images, analyze the movements of objects in the images and the cameras that captured them. The movement should be relative to the first image. Note that the objects in the images
and the camera may or may not have moved.",
]

QUESTION_TEMPLATES = [
    "How far did the point at [{x1}, {y1}] in Image-1 travel between the two shots? The coordinates [x, y] are normalized to 0-1 and scaled by 1000, with [0, 0] at the top-left corner. The x-axis represents the width, and the y-axis represents the height."
]

ANSWER_TEMPLATES = [
    "The point traveled a total of `{total_distance}` mm.",
]

Listing 4. Part of the templates used by the MultiSPA dataset.

human: <image> Given an image with two points specified by their coordinates, determine which point is closer to or farther from the camera. The coordinates [x, y] are normalized to 0-1 and scaled by 1000, with [0, 0] at the top-left corner. The x-axis represents the width, and the y-axis represents the height. Which of these coordinates is more remote: [9, 40] or [961, 34]?
gpt: The point at `[961, 34]` has the longer distance to the camera.
Depth Comparison-Coordinate

human: <image> Examine the image with two annotated points and decide which point is positioned closer to the camera and which one is positioned farther from it. Can you identify which annotated point has the shorter distance to the camera?
gpt: The annotation `B` marks the closer point.
Depth Comparison-Dot

human: <image> Given an image and a specific point's 2D coordinates, perform the depth estimation task. The coordinates [x, y] are normalized to 0-1 and then scaled by 1000, with [0, 0] starting at the top-left. The x-axis represents the width, and the y-axis represents the height. What is the depth at point [204, 486] in the image (in mm)?
gpt: Measured depth of this point is `1941` mm.
Depth Estimation-Coordinate

human: <image> For the image with a noted point, ascertain the depth-related answers. Identify the depth value for the annotated point in millimeters.
gpt: The depth value of the marked point is `2249` mm.
Depth Estimation-Dot

Figure 6. Data samples of depth perception.
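The coordinate convention recurring in these samples ([x, y] normalized to 0-1 and scaled by 1000, origin at the top-left, x along the width, y along the height) can be captured by a small helper; `to_template_coords` and `from_template_coords` are hypothetical names of ours, not functions from the paper:

```python
def to_template_coords(px, py, width, height):
    """Map a pixel location (px, py) to the [0, 1000] template coordinate space."""
    return round(px / width * 1000), round(py / height * 1000)

def from_template_coords(x, y, width, height):
    """Map template coordinates back to approximate pixel locations."""
    return x / 1000 * width, y / 1000 * height
```

For example, the image center of a 640x480 frame maps to [500, 500] regardless of resolution, which is what makes the convention resolution-independent.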
human: Image-1: <image> Image-2: <image> Identify equivalent point positions between images. The [x, y] coordinates are normalized to 0-1 and scaled by 1000, with [0, 0] at top-left. Width follows x-axis, height follows y-axis. Find the matching position in Image-2 for point [314, 111] from Image-1.
gpt: The point has moved to `[363, 70]` in Image-2.
Visual Correspondence-Coordinate

human: Image-1: <image> Image-2: <image> Determine the corresponding points between these images. Which point labeled A, B, C, or D in Image-2 corresponds to the circle point in Image-1? Please answer with the correct label from Image-2.
gpt: The correct point in Image-2 is `D`.
Visual Correspondence-Dot

Figure 7. Data samples of visual correspondence.

human: Image-1: <image> Image-2: <image> Observe and describe the camera's motion from one image to the other. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. Between these two frames, is the camera more to the left or to the right now?
gpt:
It shows a noticeable move `left` along the horizontal axis.
Camera Translation Direction-X axis

human: Image-1: <image> Image-2: <image> Compare these images to infer the camera's motion. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. Is the camera vantage now elevated or depressed relative to its old vantage?
gpt: It exhibits a noticeable move `up` along the vertical axis.
Camera Translation Direction-Y axis

human: Image-1: <image> Image-2: <image> Observe and describe the camera's motion from one image to the other. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. If we consider the vantage, is it pushing in or pulling away from the center of the view?
gpt: The camera moved `forward` in depth.
Camera Translation Direction-Z axis

Figure 8. Data samples of camera movement-translation direction.

human: Image-1: <image> Image-2: <image> Observe and describe the camera's motion from one image to the other. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. Yaw is an angle describing the camera's rotation left or right around vertical. Please tell me if it pivoted left or pivoted right?
gpt: We confirm the camera has undergone a `left` yaw rotation.
Camera Orientation Direction-Yaw

human: Image-1: <image> Image-2: <image> Examine how the camera shifted its position/orientation between these shots. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. By pitch angle, we refer to the camera's up-down rotation relative to the ground. Are we seeing an upward tilt or a downward tilt?
gpt: It shows a notable pitch rotation of `down`.
Camera Orientation Direction-Pitch

Figure 9. Data samples of camera movement-orientation direction.
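The yaw and pitch labels in these samples follow the angle definitions of Eqs. (5)-(8) in Sec. E. A minimal sketch of that computation (our own, assuming a 4x4 camera-to-world pose whose world z-axis is aligned with gravity):

```python
import numpy as np

def yaw_pitch_deg(E):
    """Yaw/pitch (degrees) of a 4x4 camera-to-world pose E whose world
    z-axis is aligned with gravity (cf. Eqs. (5)-(8))."""
    R = E[0:3, 0:3]                                   # rotation submatrix, Eq. (5)
    z_fwd = R @ np.array([0.0, 0.0, 1.0])             # camera forward axis, Eq. (6)
    yaw = np.degrees(np.arctan2(z_fwd[1], z_fwd[0]))  # angle in horizontal plane, Eq. (7)
    pitch = np.degrees(np.arcsin(z_fwd[2] / np.linalg.norm(z_fwd)))  # angle to ground, Eq. (8)
    return yaw, pitch

# A camera looking horizontally along the world x-axis: yaw 0, pitch 0.
E = np.eye(4)
E[:3, :3] = np.array([[0.0, 0.0, 1.0],
                      [0.0, 1.0, 0.0],
                      [-1.0, 0.0, 0.0]])
yaw, pitch = yaw_pitch_deg(E)
```

The signs of yaw and pitch then determine the `left`/`right` and `up`/`down` labels, and their magnitudes the degree answers.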
human: Image-1: <image> Image-2: <image> Determine the change in camera viewpoint across these two images. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. How many degrees does the camera revolve side-to-side from its original orientation?
gpt: The camera's left-right rotation measures about `61` degrees.
Camera Orientation Degree-Yaw

human: Image-1: <image> Image-2: <image> Examine how the camera shifted its position/orientation between these shots. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. Pitch is the angle capturing up-down spin. Could you share the camera's pitch rotation in degrees?
gpt: Observations indicate a pitch rotation of `1` degrees.
Camera Orientation Degree-Pitch

Figure 10. Data samples of camera movement-orientation degree.

human: Image-1: <image> Image-2: <image> Identify the differences in camera pose when comparing these images. The movement should be relative to the first image. Note that the objects in the images are assumed to be static. Does the camera have a large or small travel distance, and how much is it?
gpt: Comparing both images, the camera moved roughly `742` mm in distance.
Camera Translation Distance

human: Image-1: <image> Image-2: <image> Examine how the camera shifted its