significant performance degradation compared to IE subsets. This indicates that self-consistent errors are more challenging to distinguish from correct responses even at the hidden-state level. Furthermore, Probe (OOD) shows larger performance gaps (∆) than Probe (ID), suggesting that self-consistent errors are particularly difficult to detect when generalizing across the knowledge domains TQA and SciQ.

4 Cross-Model Probe

The poor performance of the evaluated methods on self-consistent errors suggests that features from the response-generating LLM alone may be insufficient for detecting such errors. Fortunately, we observe that self-consistent errors are often model-specific and rarely overlap across different LLMs. For instance, among questions where Qwen2.5-14B produces self-consistent errors, only 9.6% lead Llama3.1-70B to consistently make the same errors. This observation motivates the use of an external verifier to supplement the detection of self-consistent errors. Given the high efficiency (Su et al., 2024) and strong performance of supervised probes, we build upon this approach. Standard probe methods train a classifier to detect errors using the internal states of the model M that generates the response r_i^g:

s_i^M = Probe_M(h_i^M),  h_i^M = φ_M^{(l,t)}(q_i, r_i^g),

where φ_M^{(l,t)} extracts internal states from layer l and token position t of model M.
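As an illustration, the standard probe amounts to a simple classifier over extracted hidden states. The sketch below uses synthetic data, and the logistic-regression classifier is an assumption made for brevity (the paper's actual probe, described in Appendix A.3, is a small FFN):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for h_i^M = phi_M^{(l,t)}(q_i, r_i^g):
# one hidden-state vector per (question, response) pair.
hidden_states = rng.normal(size=(1000, 512))
labels = rng.integers(0, 2, size=1000)  # z_i: 1 = incorrect response

# Probe_M: maps hidden states to error scores s_i^M in [0, 1].
probe = LogisticRegression(max_iter=1000).fit(hidden_states, labels)
error_scores = probe.predict_proba(hidden_states)[:, 1]
```

In practice the hidden states would come from a forward pass of M over (q_i, r_i^g), not random draws.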
We introduce a cross-model probe that leverages an external verifier LLM V to embed the responses generated by M and trains a separate Probe_V:

s_i^V = Probe_V(h_i^V),  h_i^V = φ_V^{(l,t)}(q_i, r_i^g).

The final error score combines both probes through an integration parameter λ:

score_i = (1 − λ) · s_i^M + λ · s_i^V

Table 2: Effect of using different verifier LLMs against responses generated by Qwen2.5-7B on SciQ-CE.

Verifier       Different Series  Scale    λ     AUROC (Res only / Ver only / Fused)
Qwen2.5-3b     ✗                 smaller  0.25  0.8250 / 0.8129 / 0.8357
Llama3.2-3b    ✓                 smaller  0.50  0.8250 / 0.8125 / 0.8453
Llama3.1-70b   ✓                 larger   0.85  0.8250 / 0.8740 / 0.8794
Qwen2.5-72b    ✗                 larger   1.00  0.8250 / 0.8689 / 0.8689

In our implementation, we select Qwen2.5-14B as the verifier for all models except itself, for which we use Llama3.1-70B. λ is selected from {0, 0.05, 0.1, . . . , 1.0} by choosing the value that yields the best validation performance. As shown in Table 1, the cross-model probe demonstrates significant performance improvements on CE subsets in both in-domain and out-of-domain settings. We analyze verifier selection across different model scales and series in Appendix A.4. As shown in Table 2, all tested verifiers, including the 3B-scale models, consistently achieve substantial performance gains, validating the effectiveness of our approach. Moreover, our empirical results suggest that a larger verifier from a different series achieves the most substantial improvement.

5 Related Work

Zhang et al. (2023) and Chen et al. also note the limitation of consistency-based methods regarding self-consistent errors. Beyond these studies, we demonstrate the importance of self-consistent errors by analyzing their frequency, systematically quantify performance degradation across four mainstream detection methods (not only consistency-based ones), and propose a simple yet effective improvement. Appendix A.2 provides a more detailed discussion of related work.
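The score fusion and the validation-based λ grid search in Section 4 can be sketched as follows; the scores in the test are synthetic, and `roc_auc_score` stands in for the paper's validation metric:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_scores(s_m, s_v, lam):
    """score_i = (1 - lambda) * s_i^M + lambda * s_i^V."""
    return (1.0 - lam) * s_m + lam * s_v

def select_lambda(s_m, s_v, labels, grid=np.arange(0.0, 1.0001, 0.05)):
    """Pick lambda from {0, 0.05, ..., 1.0} by maximizing validation AUROC."""
    return max(grid, key=lambda lam: roc_auc_score(labels, fuse_scores(s_m, s_v, lam)))
```

At λ = 0 the fused score reduces to the standard probe, and at λ = 1 it relies on the verifier alone, matching the Qwen2.5-72b row of Table 2.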
6 Conclusion

This work investigates self-consistent errors, where an LLM repeats the same incorrect response
https://arxiv.org/abs/2505.17656v1
across multiple stochastic samples. Our analysis shows that the frequency of self-consistent errors persists or even increases with model scale, highlighting the importance of detecting them in ever-larger LLMs. We then evaluate four representative error detection methods and find that all of them exhibit clear limitations on self-consistent errors. Finally, we introduce a simple but effective cross-model probe to improve detection performance on self-consistent errors.

7 Limitations

The underlying causes of consistent errors still require deeper investigation. These systematic failures may stem from prevalent misconceptions in training data or from biases introduced during the supervised training phase. Future work may construct controlled experiments to investigate the causes.

8 Ethics Statement

Data: All data used in this study are publicly available and do not pose any privacy concerns.

AI Writing Assistance: We employed ChatGPT only to polish our textual expressions, not to generate new ideas or suggestions.

References

Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore. Association for Computational Linguistics.

Mohammad Beigi, Ying Shen, Runing Yang, Zihao Lin, Qifan Wang, Ankith Mohan, Jianfeng He, Ming Jin, Chang-Tien Lu, and Lifu Huang. 2024. InternalInspector I2: Robust confidence estimation in LLMs through internal states. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12847–12865, Miami, Florida, USA. Association for Computational Linguistics.

Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. 2023. Discovering latent knowledge in language models without supervision. In The Eleventh International Conference on Learning Representations.

Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye.
Inside: LLMs' internal states retain the power of hallucination detection. In The Twelfth International Conference on Learning Representations.

Zhiyu Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Ruth Petzold, and William Yang Wang. 2024. A survey on large language models for critical societal domains: Finance, healthcare, and law. Transactions on Machine Learning Research. Survey Certification.

Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, and Xueqi Cheng. 2024. Retrieve only when it needs: Adaptive retrieval augmentation for hallucination mitigation in large language models. arXiv preprint arXiv:2402.10612.

Xuefeng Du, Chaowei Xiao, and Yixuan Li. 2024. Haloscope: Harnessing unlabeled LLM generations for hallucination detection. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, and Kaidi Xu. 2024. Shifting attention to relevance: Towards the predictive uncertainty quantification of free-form large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5050–5063.

Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. 2024. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625–630.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng,
Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst., 43(2).

Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, and 17 others. 2022. Language models (mostly) know what they know. Preprint, arXiv:2207.05221.

Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. In The Eleventh International Conference on Learning Representations.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023. Inference-time intervention: Eliciting truthful answers from a language model. In Thirty-seventh Conference on Neural Information Processing Systems.

Stephanie C. Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. Trans. Mach. Learn. Res., 2022.

Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. 2024. Generating with confidence: Uncertainty quantification for black-box large language models. Transactions on Machine Learning Research.

Matéo Mahaut, Laura Aina, Paula Czarnowska, Momchil Hardalov, Thomas Müller, and Lluis Marquez. 2024. Factual confidence of LLMs: on reliability and robustness of current estimators.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4554–4570, Bangkok, Thailand. Association for Computational Linguistics.

Andrey Malinin and Mark Gales. 2021. Uncertainty estimation in autoregressive structured prediction. In International Conference on Learning Representations.

Potsawee Manakul, Adian Liusie, Mark JF Poon, Yun-Sung Chuang, and Philip HS Torr. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1516–1540.

Samuel Marks and Max Tegmark. 2024. The geometry of truth: Emergent linear structure in large language model representations of true/false datasets. In First Conference on Language Modeling.

Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Reichart, Idan Szpektor, Hadas Kotek, and Yonatan Belinkov. 2025. LLMs know more than they show: On the intrinsic representation of LLM hallucinations. In The Thirteenth International Conference on Learning Representations.

Weihang Su, Changyue Wang, Qingyao Ai, Yiran Hu, Zhijing Wu, Yujia Zhou, and Yiqun Liu. 2024. Unsupervised real-time hallucination detection based on the internal states of large language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 14379–14391, Bangkok, Thailand. Association for Computational Linguistics.

Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. In The 2023 Conference on Empirical Methods in
Natural Language Processing.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544–3552, Online. Association for Computational Linguistics.

Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. 2024. Measuring short-form factuality in large language models. arXiv preprint arXiv:2411.04368.

Johannes Welbl, Nelson F Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 94–106.

Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs. In The Twelfth International Conference on Learning Representations.

Yihao Xue, Kristjan Greenewald, Youssef Mroueh, and Baharan Mirzasoleiman. 2025. Verify when uncertain: Beyond self-consistency in black box hallucination detection. arXiv preprint arXiv:2502.15845.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, and 1 others. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley Malin, and Sricharan Kumar. 2023. SAC3: Reliable hallucination detection in black-box language models via semantic-aware cross-check consistency. In Findings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy. Association for Computational Linguistics.

Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, and Mario Fritz. 2024. PoLLMgraph: Unraveling hallucinations in large language models via state transition dynamics. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4737–4751, Mexico City, Mexico. Association for Computational Linguistics.

A Appendix

A.1 The Number of Consistent and Inconsistent Errors

Figure 2 shows the number of consistent and inconsistent errors for different LLMs.

[Figure 2: The number of self-consistent and inconsistent errors across different scales of LLMs. Panels: Llama on SciQ, Qwen on SciQ, Llama on TriviaQA, Qwen on TriviaQA.]

A.2 Related Work

Error Detection. Large language models (LLMs) often generate responses that appear plausible but contain factual inaccuracies. This challenge underscores the critical importance of accurately detecting errors in LLM-generated content for establishing trustworthiness. While this task is also referred to as "hallucination detection" (Chen et al.; Farquhar et al., 2024; Du et al., 2024), we adopt the term "error detection" to avoid ambiguity, as
"hallucination" carries domain-specific meanings across different fields (Huang et al., 2025; Wang and Sennrich, 2020; Zhang et al., 2019).

Training-Free Error Detection. A prominent approach to error detection involves estimating the uncertainty inherent in the model itself. Methods in this category include analyzing response probabilities (Malinin and Gales, 2021; Duan et al., 2024) and eliciting verbalized confidence scores directly from the model (Tian et al.; Lin et al., 2022; Xiong et al.). Among these methods, consistency-based uncertainty (Manakul et al., 2023; Kuhn et al., 2023; Lin et al., 2024; Xiong et al.; Chen et al.; Zhang et al., 2023) has received considerable attention. Building on the assumption that consistent responses are more likely to be factually correct (Wang et al., 2023), consistency-based methods sample multiple responses and compute semantic consistency among them to detect hallucinations.

Supervised Probe. In contrast to the above methods, probe-based approaches employ supervised learning to identify truthfulness features embedded within LLMs' internal states. Several previous works (Marks and Tegmark, 2024; Azaria and Mitchell, 2023; Burns et al., 2023; Li et al., 2023; Chen et al.) have argued that truthfulness features exist in the internal states of LLMs. Based on this assumption, numerous studies have tried to detect hallucinations using features from LLMs' own internal states (Kadavath et al., 2022; Azaria and Mitchell, 2023; Beigi et al., 2024; Zhu et al., 2024). These works train a probe, a simple classifier, to predict whether an LLM's response is correct based on the internal states. As the probe is often a simple multi-layer perceptron, these methods incur very low computational cost at both training and inference time (Su et al., 2024).
Moreover, recent comparative studies (Mahaut et al., 2024) have demonstrated their superior performance over consistency-based, probability-based, and verbalized methods.

Self-Consistent Error. Prior consistency-based error detectors (Farquhar et al., 2024; Zhang et al., 2023; Chen et al.) also acknowledged the limitations of consistency-based methods in handling self-consistent errors. However, they neither quantify the extent of performance degradation nor systematically examine the prevalence of such errors. Moreover, their analysis is limited to consistency-based paradigms, leaving open the question of whether other types of detectors are similarly affected. In contrast, our work provides a comprehensive evaluation across four mainstream categories of error detection methods and reveals that self-consistent errors pose a universal challenge, leading to significant performance drops across all methods, not just those relying on sample consistency.

Cross-Model Checking. Zhang et al. (2023), Ding et al. (2024), and concurrent work (Xue et al., 2025) propose to detect errors by sampling multiple responses from both the target model and an external model, followed by measuring their agreement. However, these approaches require 10–20 additional generations per query across both models, making them impractical for real-time usage. In contrast, our cross-model probe offers a novel and efficient alternative that requires only a single forward pass through a verifier model. Furthermore, our empirical analysis provides practical insights for
verifier selection. All tested verifiers, including the lightweight 3B models, consistently yield performance gains, demonstrating the robustness of our approach. Nonetheless, larger models from a different series than the response generator tend to perform best.

A.3 Baseline Method Implementation Details

Here we provide implementation details for the baseline error detection methods evaluated in Section 3.1.

(i) Probability: Several studies have employed aggregated token probabilities to detect errors (Orgad et al., 2025; Mahaut et al., 2024; Malinin and Gales, 2021). Following prior work (Orgad et al., 2025), we average the log-probabilities of all generated tokens in a response. This average log-probability serves as the error detection indicator, where lower values suggest a higher likelihood of error.

(ii) P(True): This method follows the prompting strategy introduced by Kadavath et al. (2022), where the LLM is directly queried to assess the correctness of its own output. Specifically, we construct the following prompt:

Question: {question}
Possible answer: {response}
Is the possible answer:
A. True
B. False
The possible answer is:

The model's confidence is then quantified as the probability it assigns to the token sequence corresponding to "A". A higher probability indicates greater model confidence in the correctness of its response.

(iii) SE (Semantic Entropy): As proposed by Kuhn et al. (2023) and further explored by Farquhar et al. (2024), semantic entropy estimates uncertainty over the meaning conveyed by a response rather than the token sequence alone. Higher semantic entropy suggests greater uncertainty about the response's meaning and thus a higher likelihood of error. Following the implementation details recommended by Kuhn et al. (2023), we set the sampling parameters as follows: temperature 0.5, number of samples 10, top_p = 1.0, and top_k = -1.
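The discrete form of semantic entropy can be sketched as follows. The `equivalent` predicate is a placeholder for the bidirectional-entailment check used by Kuhn et al. (2023); here it is left abstract so the clustering logic stands on its own:

```python
import math

def semantic_entropy(responses, equivalent):
    """Discrete semantic entropy over clusters of semantically equivalent samples.

    `responses` are sampled answers to one question; `equivalent(a, b)` stands in
    for the bidirectional-entailment check of Kuhn et al. (2023).
    """
    clusters = []
    for r in responses:
        for cluster in clusters:
            if equivalent(r, cluster[0]):
                cluster.append(r)
                break
        else:
            clusters.append([r])  # no match: start a new semantic cluster
    n = len(responses)
    probs = [len(cluster) / n for cluster in clusters]
    return -sum(p * math.log(p) for p in probs)
```

With all 10 samples in one cluster the entropy is 0 (a self-consistent answer, correct or not), which is exactly why this signal fails on self-consistent errors.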
(iv) Probe: Following Azaria and Mitchell (2023), we implement a probe using a three-layer feedforward neural network (FFN) with ReLU activations and hidden dimensions set to (256, 128, 64). The model is trained with cross-entropy loss. To select the most informative hidden layer, we train a separate probe on the output of each layer and choose the one that achieves the highest AUROC on the validation set. To mitigate overfitting, the probe is trained for a fixed number of epochs, and we select the checkpoint with the best validation performance for final evaluation.

A.4 Details about Cross-Model Probe

How to Select a Verifier. We study the impact of different verifiers on cross-model probe performance, focusing on two factors: (1) whether the verifier is from the same model series as the response model, and (2) model scale. Using Qwen2.5-7B as the response model, we evaluate several verifiers: Qwen2.5-3B and Llama3.2-3B (smaller models); Llama3.1-70B and Qwen2.5-72B (larger models). Table 2 shows that (1) for models of the same scale, using a verifier from a different series yields better results, and (2) within the same series, larger verifiers perform better. Notably, all tested verifiers (even the 3B models) significantly improve performance over the standard probe, validating the effectiveness of our approach.

A.5 Evaluation Metric

Following prior works (Kuhn et
al., 2023; Xiong et al.; Duan et al., 2024), we evaluate error detection using the area under the receiver operating characteristic curve (AUROC), which reflects models' ability to distinguish incorrect from correct responses. We produce the correctness label z_i by employing an LLM to evaluate whether the response is semantically equivalent to the ground-truth answer, following (Tian et al.; Wei et al., 2024). To ensure reproducibility, we employ the powerful open-source model Llama-3.1-70B. Inspired by Wei et al. (2024), we use the prompt in Appendix A.6 to check the correctness of the generated response. This prompt categorizes responses as correct, incorrect, or refusal. In our experiments, we filter out refusal responses, as our focus is on effectively distinguishing between correct and incorrect responses. A manual review finds that only 1 out of 300 samples disagrees with human annotation, demonstrating the reliability of the correctness labels.

A.6 Prompt

Evaluation Prompt

Your job is to look at a question, some gold targets, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT_ATTEMPTED"].
First, I will give examples of each grade, and then you will grade a new example.

The following are examples of CORRECT predicted answers.

Question: What are the names of Barack Obama's children?
Gold target: ["Malia Obama and Sasha Obama", "malia and sasha"]
Predicted answer 1: sasha and malia obama
Predicted answer 2: most people would say Malia and Sasha, but I'm not sure and would have to double check
Predicted answer 3: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.

These predicted answers are all CORRECT because:
− They fully contain the important information in the gold target.
− They do not contain any information that contradicts the gold target.
− Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
− Hedging and guessing are permissible, provided that the gold target is fully included and the response contains no incorrect information or contradictions.

The following are examples of INCORRECT predicted answers.

Question: What are the names of Barack Obama's children?
Gold target: ["Malia and Sasha"]
Predicted answer 1: Malia.
Predicted answer 2: Malia, Sasha, and Susan.
Predicted answer 3: Barack Obama does not have any children.
Predicted answer 4: I think it's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
Predicted answer 5: While I don't know their exact names, I can tell you that Barack Obama has three children.
Predicted answer 6: It's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
Predicted answer 7: It may be the
case that Obama's child is named James. However, it's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.

These predicted answers are all INCORRECT because:
− A factual statement in the answer contradicts the gold target. Incorrect statements that have some hedging (e.g., "it is possible that", "although I'm not sure, I think") are also considered incorrect.

The following are examples of NOT_ATTEMPTED predicted answers.

Question: What are the names of Barack Obama's children?
Gold target: ["Malia and Sasha"]
Predicted answer 1: I don't know.
Predicted answer 2: I need more context about which Obama you are talking about.
Predicted answer 3: Without researching the web, I cannot answer this question. However, I can tell you that Barack Obama has two children.
Predicted answer 4: Barack Obama has two children. I know that one of them is Malia, but I'm not sure about the other one.

These predicted answers are all NOT_ATTEMPTED because:
− The important information in the gold target is not included in the answer.
− No statements in the answer contradict the gold target.

Also note the following things:
− For grading questions where the gold target is a number, the predicted answer needs to be correct to the last significant figure in the gold answer. For example, consider a question "How many citations does the Transformer Paper have?" with gold target "120k".
− Predicted answers "120k", "124k", and "115k" are all CORRECT.
− Predicted answers "100k" and "113k" are INCORRECT.
− Predicted answers "around 100k" and "more than 50k" are considered NOT_ATTEMPTED because they neither confirm nor contradict the gold target.
− The gold target may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
− For example, consider the question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with gold target "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
− Do not punish predicted answers if they omit information that would be clearly inferred from the question.
− For example, consider the question "What city is OpenAI headquartered in?" and the gold target "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
− Consider the question "What award did A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the gold target is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
− For the question "What is the height of Jason Wei in meters?", the gold target is "1.73 m".
The predicted answer "1.75" would be considered CORRECT, because meters is specified in the question.
− For the question "What is the name of Barack Obama's wife?", the gold target is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
− Do not punish for typos in people's names if it's clearly the same name.
− For example, if the gold target is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".

Here is a new example. Simply reply with either CORRECT, INCORRECT, NOT_ATTEMPTED. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.

Question: {question}
Gold target: {target}
Predicted answer: {predicted_answer}

Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
C: NOT_ATTEMPTED

Just return the letters "A", "B", or "C", with no text around it.
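The layer-selection procedure for the probe in Appendix A.3 (iv) can be sketched as follows. The hidden states are synthetic, and sklearn's `MLPClassifier` is an assumed stand-in for the paper's FFN training setup (fixed epoch count and checkpoint selection are omitted):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic per-layer hidden states: {layer: (n_samples, d_model)} plus labels.
layer_states = {l: rng.normal(size=(400, 64)) for l in range(3)}
labels = rng.integers(0, 2, size=400)
layer_states[2][labels == 1] += 1.0  # make layer 2 carry the truthfulness signal

def train_probe(X, y):
    # Three-layer FFN with ReLU activations and hidden sizes (256, 128, 64),
    # trained with a cross-entropy objective (MLPClassifier's default log loss).
    return MLPClassifier(hidden_layer_sizes=(256, 128, 64), activation="relu",
                         max_iter=300, random_state=0).fit(X, y)

def validation_auroc(layer):
    X, y = layer_states[layer], labels
    probe = train_probe(X[:300], y[:300])           # train split
    return roc_auc_score(y[300:], probe.predict_proba(X[300:])[:, 1])  # val split

# Choose the layer whose probe scores highest on the held-out split.
best_layer = max(layer_states, key=validation_auroc)
```

On this toy data the selection recovers the informative layer; with real models, `layer_states` would hold activations extracted from each transformer layer.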
Towards Dynamic Theory of Mind: Evaluating LLM Adaptation to Temporal Evolution of Human States

Yang Xiao1* Jiashuo Wang1* Qiancheng Xu1 Changhe Song1 Chunpu Xu1 Yi Cheng1 Wenjie Li1† Pengfei Liu2†
1The Hong Kong Polytechnic University  2Shanghai Jiao Tong University
yang-alan.xiao@connect.polyu.hk  csjwang@comp.polyu.edu.hk

Abstract

As Large Language Models (LLMs) increasingly participate in human-AI interactions, evaluating their Theory of Mind (ToM) capabilities - particularly their ability to track dynamic mental states - becomes crucial. While existing benchmarks assess basic ToM abilities, they predominantly focus on static snapshots of mental states, overlooking the temporal evolution that characterizes real-world social interactions. We present DYNTOM, a novel benchmark specifically designed to evaluate LLMs' ability to understand and track the temporal progression of mental states across interconnected scenarios. Through a systematic four-step framework, we generate 1,100 social contexts encompassing 5,500 scenarios and 78,100 questions, each validated for realism and quality. Our comprehensive evaluation of ten state-of-the-art LLMs reveals that their average performance underperforms humans by 44.7%, with performance degrading significantly when tracking and reasoning about shifts in mental states. This performance gap highlights fundamental limitations in current LLMs' ability to model the dynamic nature of human mental states.1

1 Introduction

Theory of Mind (ToM) - the ability to understand and reason about others' mental states - is fundamental to human social interaction (Premack and Woodruff, 1978; Turner, 1988). As Large Language Models (LLMs) increasingly engage in human-AI interactions, their capability to track and understand the dynamic nature of human mental states becomes crucial.
While existing research has evaluated LLMs' ToM capabilities, these evaluations often overlook a critical aspect: the temporal evolution of mental states in real-world social contexts.

*Equal contribution.
†Corresponding authors.
1DYNTOM is available at GitHub and HuggingFace.

[Figure 1: A simplified illustration showing a mental state trajectory's change over scenarios (two conversations between Angela, a hairdresser, and Kevin, a construction worker unhappy with his job) and a transformation question example ("How does the character's emotion change from scenario 1 to scenario 2?") used to probe an LLM's ability to adapt to the change.]

https://arxiv.org/abs/2505.17663v1

Current ToM evaluations of LLMs, including benchmarks like SocialIQA (Turner, 1988), BigToM (Gandhi et al., 2023), and TOMBENCH (Chen et al., 2024), predominantly focus on static snapshots of mental states in isolated scenarios. These works primarily present static evaluations, whereas our work presents a novel approach to capture the continuous change of mental states across multiple interconnected scenarios - a crucial aspect of real-world social interactions that has not been systematically evaluated in previous work. This temporal dimension is essential for understanding LLMs' true capabilities in real-world social interactions, where mental states constantly shift and evolve in response to ongoing social dynamics. For instance, LLMs are expected to understand and reason about shifts in user mental states in support conversations to better help users (Liu et al., 2024b; Wang et al., 2024).

To address this challenge, we introduce DYNTOM (Dynamic Theory of Mind), a novel benchmark designed specifically to evaluate LLMs' ability to track and understand the temporal evolution of mental states, as shown in Figure 1. Our benchmark is constructed through a systematic process: (1) social context construction, including social location, character profiles, and relationships; (2) mental state trajectory design across multiple scenarios; (3) scenario generation with natural dialogue; and (4) question formulation targeting temporal understanding of mental states. Each generated scenario and question undergoes rigorous human validation to ensure quality and realism. DYNTOM captures mental state dynamics through continuous social scenarios while incorporating real-world elements such as rich social contexts.
Our benchmark comprises 1,100 social contexts featuring 2,200 characters across 261 social locations, 5,500 social scenarios, and 78,100 multiple-choice questions. Through a comprehensive evaluation of ten representative LLMs, including the GPT-4 series (Achiam et al., 2023), Llama 3 series (Dubey et al., 2024), Qwen 2 series (Yang et al., 2024), and GLM series (GLM et al., 2024), we find that their average performance lags behind human performance by 44.7%, with the gap widening significantly when requiring models to track how a mental state changes across different scenarios. This performance degradation highlights a fundamental limitation in current LLMs' ability to model the dynamic nature of human mental states.

The main contributions of this work are: 1. A novel framework for evaluating LLMs' understanding of temporal evolution in mental states, with a systematic process for generating and validating evaluation data; 2. A comprehensive benchmark featuring 78,100 questions specifically designed to probe LLMs' ability to track and reason about mental state changes over scenarios; 3. Extensive empirical evaluation revealing specific challenges LLMs face in temporal reasoning about mental states, including detailed analysis of failure modes in tracking state changes and determining factors influencing changes.

2 Related Work

2.1 ToM Benchmarks

Theory of mind appears to be an innate potential ability in humans that requires social and other experiences over many years for its full development. With the development of LLMs,
researchers have begun to probe whether LLMs possess a Theory of Mind ability comparable to that of humans, as they have reached and occasionally surpassed human performance in some task-solving and reasoning tasks. Nematzadeh et al. (2018); Le et al. (2019); Wu et al. (2023) apply the Sally-Anne Test and bAbi to test LLMs' ToM ability in the aspect of false belief, and they find that LMs' performance is significantly lower than humans'. Ullman (2023); Shapira et al. (2024); Kim et al. (2023); Sap et al. (2022) propose that LLMs are prone to shortcuts and spurious correlations. Apart from tests in the aspect of belief, Xu et al. (2024); Chen et al. (2024); Sabour et al. (2024) construct benchmarks to test LLMs' ToM ability for emotion, intention, and perception. Jin et al. (2024); Shi et al. (2024) propose to evaluate LLMs in multi-modal environments. However, most of the previous evaluations do not take the continuous evolution of mental states across multiple interconnected scenarios into consideration. Our work aims to develop a novel benchmark to understand the ToM reasoning of language models in dynamic social contexts.

2.2 Human Behavior Simulation

Recent advancements in language model capabilities have opened new avenues for generating high-quality data. Previous work has demonstrated successful applications of LLMs in simulating human behavior across various domains, including HCI research (Hämäläinen et al., 2023), conversational recommender systems (Yoon et al., 2024), role-playing (Xie et al., 2024; Xiao et al., 2023), clinical medical education (Wang et al., 2024), and social science (Hua et al., 2023; Park et al., 2023, 2022; Aher et al., 2023). DYNTOM leverages LLMs to generate realistic dialogues that reflect predetermined character mental states. We implement strict quality assurance through human evaluation of social context authenticity and question validity.
This approach combines the efficiency of automated generation with robust validation procedures, ensuring our benchmark's reliability and reproducibility.

3 DYNTOM Benchmark

3.1 DYNTOM Construction Framework

Definitions and Preliminaries  We first define key terms used throughout this paper. A Social Location refers to the physical setting where social interactions occur, which influences behavior and social norms (Farrow et al., 2017). A Social Context provides the foundational setup for social interactions, comprising a social location, character profiles (e.g., demographics, personalities), and their relationships. A Social Scenario represents a self-contained social interaction between characters at a specific moment. In our work, we construct sequences of temporally connected scenarios within the same social context, enabling us to track the dynamic evolution of characters' mental states through continuous social interactions. We define a Social Stage as the complete structure of a social interaction, comprising the Social Location, Social Context, and Social Scenarios. Our framework consists of three systematic steps for generating the social stages in our benchmark:

Step 1: Social Context Construction  A social context consists of three components: a social location, two characters' profiles, and the relationship between these characters. For social locations, following Ziems et al. (2023), we collect 261 locations across 13 categories
representing common physical settings for social interactions. For character profiles, we construct seven aspect pools (names, surnames, gender, occupation, education, race, and personality traits) using demographic data from U.S. Census Bureau statistics to ensure realistic population representation. For each social context, we randomly sample one location and create two characters by sampling from each aspect of these pools. To generate character relationships, we first create four exemplar relationships manually, then prompt GPT-4-Turbo to generate new relationships based on these exemplars and the sampled character profiles. To ensure quality, four human annotators evaluate both the characters' profiles and their corresponding relationships, discarding any profile or relationship that any annotator deems unrealistic. This rigorous validation process results in retaining 92% of the generated profiles and relationships.

Step 2: Mental State Trajectory Design  We focus on evaluating three mental states (beliefs, emotions, and intentions) and their resulting actions (for convenience, also denoted as mental states). For each social context, we design a sequence of five2 scenarios where these states of the characters evolve and influence each other. Following the psychological research of D'Andrade (1995), we model the mental states through three key relationships: 1) beliefs influence emotions; 2) beliefs and emotions influence intentions; 3) beliefs, emotions, and intentions drive actions. We prompt GPT-4-Turbo with four exemplar trajectories and the three design principles to generate coherent mental state progressions across every five scenarios. Importantly, when generating these trajectories, LLMs should also output specific cues that trigger mental state transitions between adjacent scenarios, providing explicit reasoning for how and why mental states evolve throughout the social interaction.
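Under these three relationships, one step of a generated trajectory, together with its transition cue, can be sketched as the following data structure. The class and field names are our illustration, not the authors' released format; the example values are adapted from Figure 1.

```python
# Hypothetical representation of one step of a mental state trajectory.
# Per the design principles: beliefs influence emotions; beliefs and emotions
# influence intentions; beliefs, emotions, and intentions drive actions.
from dataclasses import dataclass

@dataclass
class MentalStateStep:
    belief: str
    emotion: str
    intention: str
    action: str
    cue: str  # what triggers the transition into the next scenario

trajectory = [
    MentalStateStep(
        belief="Kevin is unhappy with his job",
        emotion="concerned for Kevin",
        intention="to talk to Kevin about his job satisfaction",
        action="meets Kevin and asks how things are going",
        cue="Kevin admits he feels stuck and unfulfilled",
    ),
    # ... four more steps, one per scenario in the social context
]
print(trajectory[0].emotion)  # concerned for Kevin
```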
Any scenarios without the mental state trajectory and cues are discarded. Four human annotators evaluate each generated trajectory on three dimensions using a 5-point scale: coherence (consistency of mental state changes across scenarios), rationality (the validity and rationality of these transition cues), and authenticity (plausibility of mental state transitions). Trajectories with a mean score below 4.0 on any dimension are discarded, resulting in an 85.4% retention rate.

Step 3: Scenario Generation  Building upon the mental state trajectories designed in Step 2, we now generate scenarios with natural dialogues to manifest these mental states in social interactions. Each scenario includes a background description and a dialogue between characters, reflecting the mental state trajectory designed for this scenario. We choose dialogue as the primary format because it naturally reveals characters' mental states and is frequently used in daily interactions. For each mental state trajectory, we prompt GPT-4-Turbo to generate the dialogue and background of the scenario, ensuring that the main character's utterances and behaviors align with their prescribed mental states.

Following the same validation process as Step 2, four human annotators evaluate each scenario on three dimensions using a 5-point scale: consistency (alignment with the designed mental state trajectory), coherence (the five scenarios within each social context form a coherent storyline, where each scenario connects meaningfully to those before and after it), and authenticity (naturalness of the scenario and conversations). Scenarios scoring below 4.0 on any dimension are discarded, with 88.7% of the generated scenarios retained.

2We have limited the number of scenarios to five to reduce costs while maintaining a more authentic social context. Researchers can easily adjust the scenario number in our framework to meet their needs.

[Figure 2: A simplified example of question types (showing only two scenarios). In each scenario, characters have four states: belief, emotion, intention, and action - these are assessed through understanding questions. How the value of the same state changes across different times (scenarios) is evaluated through transformation-1, 2, and 3 questions. The diagram pairs each scenario's states with example questions, e.g., Understanding: "What's the character's belief in scenario 1?"; Transformation-3: "How does the character's belief change across all five scenarios?"]

3.2 Question Genres

Based on the validated scenarios and mental state trajectories, we design questions to systematically evaluate how well LLMs can track and reason about the temporal evolution of mental states. We develop four question types that progressively assess different aspects of this capability.

Understanding Questions  Understanding questions establish a baseline by testing LLMs' ability to identify states (belief, emotion, intention, and action) at specific points in time. While this represents the most basic level of ToM reasoning in our evaluation, it forms the foundation for evaluating more complex temporal understanding.

Transformation Questions  Transformation questions directly evaluate LLMs' ability to reason about mental state dynamics through three increasingly complex aspects, as illustrated in Figure 2.
Transformation-1 examines whether LLMs can detect state changes between consecutive scenarios, testing their basic temporal awareness. Transformation-2 probes deeper by testing whether LLMs understand the causal mechanisms behind state changes, evaluating their grasp of psychological dynamics. Transformation-3 presents the most challenging task: tracking state evolution across all scenarios and testing LLMs' ability to maintain and reason about extended temporal sequences.

This progression of question types allows us to precisely identify where LLMs succeed or fail in understanding dynamic mental states. Understanding questions reveal whether failures in temporal reasoning stem from basic state comprehension issues, while the three transformation types help pinpoint specific limitations in tracking and reasoning about mental state changes over time. We apply four predefined question templates to the social stage to generate questions. The template details are presented in Appendix A.5.

Options and Ground Truth  The design of options and ground truth leverages the comprehensive mental state trajectories created in Step 2. For understanding questions, when evaluating a specific state (e.g., belief in scenario 1), we construct distractors using both other states from the same scenario and the same state from other scenarios. For instance, when assessing belief in scenario 1, incorrect options include the emotion, intention, and action from scenario 1, as well as beliefs from scenarios 2-5. Similarly, for transformation questions, options are constructed using documented state values and their changes from the trajectory. For example, when asking "Why does John's belief change from feeling inferior in
scenario 1 to feeling respected in scenario 2?", the correct answer would be "Beverly's praise of his expertise," while distractors include other documented changes such as "John's demonstration of skills" (action). This systematic approach to option generation ensures that the questions are challenging yet unambiguous, as both correct answers and distractors are grounded in the explicitly designed mental state trajectories.

Validation  Following our validation process, four annotators evaluate the questions on clarity (whether the question is unambiguous) and answerability (whether the answer can be determined from the given context) using a 5-point scale. Questions scoring below 4.0 are regenerated, resulting in a final set of high-quality evaluation items. Finally, 78,100 questions are collected.

  Item                              Number
  ToM Mental States                 4
  Social Locations                  261
  Characters                        2,200
  Social Scenarios                  5,500
  Social Contexts                   1,100
  Questions                         78,100
    Understanding                   28.2%
    Transformation-1                22.5%
    Transformation-2                43.7%
    Transformation-3                5.6%
  Average Social Scenario Length    457.9
  Average Question Length           77.5

Table 1: DYNTOM Statistics.

  Benchmark       Plot  Social Location  Profile  Relationship  Intradependent states  Dynamic states  Questions
  ToMi            ✘     ✘                ✘        ✘             ✘                      ✘               999
  SocialIQA       ✘     ✘                ✘        ✘             ✘                      ✘               37588
  Hi-ToM          ✘     ✔                ✘        ✘             ✘                      ✘               1200
  OpenToM         ✔     ✘                ✔        ✔             ✘                      ✘               2384
  BigToM          ✔     ✔                ✘        ✘             ✘                      ✔               600
  TOMBENCH        ✔     ✘                ✘        ✘             ✘                      ✘               2860
  DYNTOM (ours)   ✔     ✔                ✔        ✔             ✔                      ✔               78100

Table 2: Benchmark Comparison.

Evaluation Metric  To evaluate LLMs' performance on DYNTOM, we calculate their accuracy across all 78,100 questions. For each question, we consider the LLM's response correct only if it exactly matches the ground truth option. The final performance metric is computed as the percentage of correct answers across all questions, providing a comprehensive measure of the model's ability to reason about dynamic mental states in social interactions.
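As a minimal sketch of this exact-match scoring (the function name and toy data are ours, not released evaluation code):

```python
def exact_match_accuracy(predictions, ground_truths):
    """A response counts as correct only if it exactly matches the gold option."""
    assert len(predictions) == len(ground_truths)
    correct = sum(p == g for p, g in zip(predictions, ground_truths))
    return 100.0 * correct / len(ground_truths)

# Toy illustration with four multiple-choice answers:
preds = ["B", "C", "A", "B"]
golds = ["B", "A", "A", "B"]
print(exact_match_accuracy(preds, golds))  # 75.0
```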
3.3 Statistics

In total, our final benchmark contains 1,100 high-quality social stages, where each stage consists of a social location (physical setting), social context (two characters with detailed profiles and relationships), and five social scenarios (temporally connected interactions). For each social stage, we generate 71 questions across four types (understanding and three types of transformation), resulting in 78,100 questions in total. The detailed statistics are shown in Table 1.

Compared to existing ToM benchmarks (Table 2), DYNTOM offers key advantages. While previous works evaluate static snapshots of mental states, DYNTOM systematically captures the temporal evolution of mental states through connected scenarios, enabling a more realistic evaluation of how well LLMs can track and reason about dynamic social interactions.

4 Experiments

4.1 Experimental Setup

To evaluate ToM reasoning capabilities across different model scales and architectures, we conducted experiments using DYNTOM on ten representative language models, ranging from 7B to 72B parameters. These models include GPT-4o, GPT-4-Turbo, Llama-3.1 (8B and 70B variants), Mistral-7B, Mixtral-8x7B, Qwen2 (7B and 72B variants), DeepSeek-V2, and GLM-4. All models are accessed through their official APIs or publicly available weights.

We employed two evaluation approaches: (1) vanilla prompting, where models directly answer questions, and (2) zero-shot chain-of-thought (CoT) prompting (Wei et al., 2022), which
encourages step-by-step reasoning before providing final answers. For both vanilla and CoT prompting, we used a temperature of 0.7 and top-p of 0.9 across all models to ensure fair comparison. To establish a human performance baseline, we recruited ten graduate students, different from those involved in data annotation, to evaluate a randomly sampled 30% of the dataset (330 social stages and 23,430 questions). Detailed specifications of model versions, architectures, context windows, and prompting templates are provided in Appendix B.1.

4.2 Main Results

Table 3 demonstrates the ToM performance of LLMs across different mental states (belief, emotion, intention, and action) and question types (understanding and transformation), both with and without chain-of-thought (CoT) prompting. We established the human baseline by averaging performance across ten annotators, with standard deviations reported to indicate inter-annotator agreement. Here, we discuss several key findings from our experimental results.

  Subject              Belief        Emotion       Intention     Action        AVG.
                       U      T      U      T      U      T      U      T
  Human                83.8   77.6   89.5   78.7   79.0   73.8   76.7   76.3   77.7
  (±std)               16.4   12.0   10.7   14.0   21.4   14.0   25.8   14.0   12.7
  GPT-4o               80.9   44.5   91.7   45.8   87.5   51.9   95.1   55.6   64.0
  GPT-4-Turbo          63.5   32.3   74.7   33.9   71.3   35.5   80.5   36.2   47.6
  Llama-3.1-70B        65.8   40.2   93.8   42.3   82.8   42.0   91.8   45.5   57.1
  Llama-3.1-8B         31.6   18.0   40.0   19.9   22.4   16.6   26.6   15.5   22.3
  Mixtral-8x7B         23.3   21.6   46.2   18.4   32.9   10.8   40.3   9.5    21.9
  Mistral-7B           21.3   11.7   23.8   10.2   16.3   10.1   20.6   9.2    13.9
  Qwen2-72B            72.0   37.2   85.5   38.0   79.5   33.2   89.8   20.9   48.5
  Qwen2-7B             22.2   19.8   43.0   20.5   25.1   15.7   24.6   15.0   22.1
  DeepSeek-V2          6.5    9.2    4.8    8.1    3.7    7.3    2.8    5.7    7.2
  GLM-4                29.5   23.9   43.9   20.8   28.5   16.5   40.4   16.8   25.4
  LLM AVG.             41.7   25.8   54.7   25.8   45.0   24.0   51.3   23.0   33.0
  GPT-4o+CoT           79.2   44.5   88.0   47.6   82.1   46.6   90.4   49.6   61.1
  GPT-4-Turbo+CoT      61.7   31.0   77.8   33.2   71.4   32.8   81.0   37.6   47.1
  Llama-3.1-70B+CoT    68.0   38.9   90.7   43.7   81.4   42.8   96.5   46.6   57.6
  Llama-3.1-8B+CoT     32.0   21.7   40.3   20.9   21.8   19.3   23.3   15.9   23.6
  Mixtral-8x7B+CoT     15.6   13.9   29.7   13.8   25.8   8.8    26.6   8.8    15.8
  Mistral-7B+CoT       21.6   10.1   22.5   11.0   19.9   8.1    18.8   8.8    13.3
  Qwen2-72B+CoT        70.1   39.2   87.6   41.4   83.8   34.6   89.0   27.1   51.3
  Qwen2-7B+CoT         28.6   18.1   43.7   19.3   29.6   19.7   20.2   18.4   23.5
  DeepSeek-V2+CoT      7.4    9.8    3.2    10.4   5.0    7.3    5.0    6.4    8.1
  GLM-4+CoT            30.0   26.4   48.0   22.1   32.4   17.7   43.2   14.1   26.6
  LLM+CoT AVG.         41.4   25.4   53.2   26.3   45.3   23.8   49.4   23.3   32.8

Table 3: LLMs' performance on DYNTOM. U: Understanding, T: Transformation. Numbers represent accuracy in percentages. For human performance, the (±std) row indicates standard deviation across ten annotators.

Human vs. LLMs  Human annotators achieved an average accuracy of 77.7% across all tasks. All LLMs performed significantly below this baseline, and their average performance underperforms humans by 44.7%, with even the best GPT-4o showing a gap of up to 13.7% in vanilla prompting (77.7% vs. 64.0%). The performance disparity was particularly pronounced in transformation-type questions across all mental states, revealing current LLMs' limitations in tracking and comprehending the temporal evolution of mental states in social interactions. Notably, however, some LLMs, particularly GPT-4o, demonstrated superior performance in understanding-type questions compared to human
annotators. This can be attributed to the nature of these questions, which primarily involve analyzing static mental states within single scenarios, as illustrated in Figure 2. Such tasks require less temporal reasoning and rely more on semantic matching of explicitly stated mental states, aligning well with LLMs' pattern recognition capabilities but not necessarily reflecting the true understanding of dynamic social contexts.

Differences Between LLMs' ToM Performance  In vanilla prompting settings, GPT-4o emerged as the leading model, achieving an accuracy of 64.0% and outperforming the second-best model, Llama-3.1-70B (57.1%), by 6.9 percentage points. Among open-source models, both Llama-3.1-70B and Qwen2-72B demonstrated remarkable capabilities, surpassing GPT-4-Turbo's 47.6% performance. Notably, Llama-3.1-70B achieved superior performance in emotion-related understanding tasks, reaching 93.8% accuracy compared to GPT-4o's 91.7%. However, even GPT-4o's best overall performance at 64.0% falls significantly short of human-level performance, with DeepSeek-V2 showing the lowest performance at 7.2%. This substantial performance gap, particularly in tracking the temporal evolution of mental states, highlights the challenging nature of our benchmark and reveals that current LLMs lack robust ToM reasoning capabilities in realistic social contexts, despite their near-perfect performance on existing ToM benchmarks (Gandhi et al., 2024).

Differences Between Transformation and Understanding Question Types  Table 3 also reveals that models perform significantly worse on transformation questions compared to understanding questions. The most substantial gap occurs in emotion-related reasoning, where the average model accuracy drops from 54.7% on understanding questions to 25.8% on transformation questions, a difference of 28.9 percentage points.
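These gaps follow directly from the "LLM AVG." row of Table 3; recomputing them:

```python
# Understanding (U) vs. transformation (T) averages from the "LLM AVG." row of
# Table 3 (vanilla prompting, accuracy in %).
llm_avg = {
    "belief":    {"U": 41.7, "T": 25.8},
    "emotion":   {"U": 54.7, "T": 25.8},
    "intention": {"U": 45.0, "T": 24.0},
    "action":    {"U": 51.3, "T": 23.0},
}
gaps = {state: round(v["U"] - v["T"], 1) for state, v in llm_avg.items()}
print(gaps)  # emotion shows the largest U-T gap: 28.9 points
```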
Transformation questions require models to track how a character's mental state evolves across different scenarios, capturing shifts in beliefs, emotions, and intentions over time. This performance gap highlights a critical limitation of current models: their inability to effectively reason about dynamic mental state changes within continuous social contexts.

Vanilla vs. CoT Prompting  Our experimental results demonstrate that standard chain-of-thought (CoT) prompting has inconsistent effects on LLMs' ToM reasoning capabilities. While CoT prompting improved performance for smaller models (Llama-3.1-8B: +1.3%, Qwen2-72B: +2.8%, DeepSeek-V2: +0.9%, and GLM-4: +1.2%), it led to performance degradation in more capable models, notably GPT-4o (-2.9%). Through analyzing the intermediate outputs (Appendix B.3), we identified a two-fold effect: for smaller models that struggle with complex reasoning, CoT's step-by-step decomposition provides beneficial scaffolding for basic problem analysis. However, this same decomposition becomes a limitation for more capable models, as it enforces a rigid reasoning structure that treats each scenario independently, failing to capture the crucial temporal dependencies between scenarios. As shown in our case study, when asked about mental state changes from scenario 1 to scenario 2, models following CoT often extensively analyze each scenario but fail to explicitly compare the states across time steps, leading to incorrect conclusions. This aligns with findings in Xiao et al. (2023) about LLMs' challenges in maintaining coherence when reasoning over long inputs. These observations suggest that while CoT can help with basic reasoning decomposition, effective ToM reasoning requires specialized prompting that explicitly guides models to track and analyze
the temporal evolution of mental states across scenarios.

Differences Across Mental States  Analysis of Table 3 reveals consistent patterns in models' capability to reason about different mental states, particularly in understanding-type questions. Emotion-related reasoning achieves the highest accuracy, averaging 54.7%, whereas belief-related reasoning lags behind at 41.7% - a gap of 13 percentage points. We hypothesize that this disparity stems from the inherently implicit nature of beliefs compared to other mental states: while emotions and intentions often manifest in observable behaviors or explicit statements, beliefs frequently require multi-step inference from indirect evidence, such as actions or conversational context. This observation suggests that belief reasoning poses unique challenges in temporal social contexts, where models must not only infer current beliefs but also track their evolution through sequential interactions.

4.3 In-Depth Analysis

LLMs' Limits of ToM on Transformation  Evaluating mental states across multiple interconnected scenarios introduces complex compositional reasoning challenges (Dziri et al., 2023), requiring models to track and reason about the continuous evolution of mental states. To systematically analyze how models handle this multi-step reasoning process, we group related questions and categorize the model's responses based on their performance on both the primary question and its dependencies.

[Figure 3: The percentage of GPT-4o's four response types (fully correct, local error, full error, restoration error), broken down by mental state (belief, intention, emotion, action).]

For instance, consider a question that asks whether a character's beliefs change between two scenarios. To answer this primary question (denoted as C), the model must first correctly identify the character's beliefs in each individual scenario (denoted as D).
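This grouping yields a simple 2x2 classification over correctness on the primary question (C) and its dependencies (D); as a minimal sketch (the function name is ours, mirroring the definitions in the text):

```python
# Four-way classification of a question group, based on whether the primary
# question (C) and all of its dependency questions (D) are answered correctly.
def classify_response(c_correct, deps_correct):
    d_correct = all(deps_correct)
    if c_correct and d_correct:
        return "fully correct"
    if d_correct:        # dependencies right, primary wrong
        return "local error"
    if c_correct:        # primary right despite a wrong dependency
        return "restoration error"
    return "full error"

print(classify_response(True, [True, False]))  # restoration error
```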
By grouping such related questions, we can assess the model's responses and classify them into four types: (1) Fully correct: The model accurately answers both the primary question (C) and all its dependencies (D). (2) Local error: The model correctly answers all dependencies (D) but makes an error on the primary question (C). (3) Restoration error: The model correctly answers the primary question (C) despite making errors on one or more of its dependencies (D). (4) Full error: The model makes errors on both the primary question (C) and one or more of its dependencies (D).

Calculating the proportion of each response type across all question groups provides a comprehensive evaluation of the model's performance on these compositional questions requiring multi-step reasoning. As shown in Figure 3, our analysis reveals distinct patterns in models' reasoning capabilities. The fully correct cases, where models successfully identify both the mental states and their changes, are notably rare across all state types (13-17%), indicating limited genuine understanding. Local errors (13-18%) show models can correctly identify mental states but fail to reason about their changes, suggesting an inability to track evolution. Full errors dominate across all mental states (50-58%), with belief states showing the highest error rate (58%), revealing fundamental limitations in comprehending both states and their transitions. Restoration errors (8-16%) occur when models correctly identify changes without understanding the underlying states, indicating superficial pattern matching rather than genuine reasoning. This analysis pinpoints where models struggle in the reasoning process, whether in identifying initial mental states, determining factors influencing changes, or integrating information across scenarios.

[Figure 4: The average of GPT-4o's scores on the transformation question type in different time spans. The time span indicates the specific scenarios to which one question relates.]

LLMs Fail in the Middle Scenario  Our analysis reveals a consistent pattern of lower performance on transformation-type questions compared to understanding-type questions across all models. To investigate this performance gap, we examine model accuracy across different time spans, where each span represents the interval between consecutive scenarios (e.g., span 1-2 represents the transition from scenario 1 to scenario 2). We categorize the questions into different time spans based on their associated social scenarios. The results show a distinct "U-shaped" pattern: models perform better at early and late time spans but struggle with middle spans. To validate whether this pattern stems from the "Lost in the middle" phenomenon (Liu et al., 2024a) - where model performance degrades when processing information from the middle of long contexts - we conducted two additional experiments. First, we extend our analysis to longer sequences of 6 and 7 scenarios. The results (Table 4) strongly support our hypothesis: in 6-scenario sequences, accuracy drops to 50% in span 2-3, while in 7-scenario sequences, performance deteriorates more severely to 26% in span 3-4. Second, we perform a controlled experiment where we truncate sequences to their first four scenarios.
As shown in Table 5, this intervention leads to significant performance improvements in middle spans: for 7-scenario sequences, the accuracy in span 3-4 improves by 21 percentage points (from 26% to 47%) when later scenarios are removed. Similar improvements are observed across 5- and 6-scenario sequences, with middle-span accuracy increasing by 1-5 percentage points.

  Time Span      1-2    2-3    3-4    4-5    5-6    6-7
  6 Scenarios    64.0   50.0   51.0   62.0   62.0   -
  7 Scenarios    56.0   45.0   26.0   30.0   26.0   34.0

Table 4: The average of GPT-4o's scores in the transformation question type for 6 and 7 scenarios.

  Category       Time Span   w/o the truncation   w/ the truncation   ∆
  5 scenarios    1-2         54.0                 55.0                +1.0
                 2-3         50.0                 53.0                +3.0
                 3-4         54.0                 55.0                +1.0
  6 scenarios    1-2         60.0                 60.0                +0.0
                 2-3         50.0                 54.0                +4.0
                 3-4         51.0                 56.0                +5.0
  7 scenarios    1-2         56.0                 53.0                -3.0
                 2-3         45.0                 54.0                +9.0
                 3-4         26.0                 47.0                +21.0

Table 5: Comparison of GPT-4o's performance with and without the truncation of the fifth/seventh/eighth scenario across different time spans and categories (total 5, 6, and 7 scenarios), with ∆ indicating the difference between the two conditions.

These consistent improvements across different sequence lengths reveal a critical limitation in LLMs' ability to process dynamic mental states. While models can effectively track states at the beginning and end of interactions, they struggle to maintain understanding across extended scenarios - precisely the kind of continuous evolution that characterizes real-world social interactions. The significant performance degradation in middle scenarios (dropping to as low as 26% in longer sequences) underscores the importance
of our dynamic evaluation approach and highlights a fundamental challenge in developing LLMs that can truly understand evolving social contexts.

5 Conclusion

In this work, we present DynToM, a benchmark designed to evaluate LLMs' theory of mind capabilities in dynamic social contexts, moving beyond static assessments to capture the crucial evolution of mental states across interconnected scenarios. Our human evaluation validates the benchmark's alignment with real-world social dynamics, while experimental results reveal significant gaps in current LLMs' capabilities - even the best model trails human performance by 13.7%, particularly struggling with tracking mental state changes across extended interactions. As LLMs continue to be deployed in social contexts, DynToM provides a valuable framework for assessing and improving their ability to understand the dynamic nature of human mental states.

Limitations

Limited LLMs  This paper makes a significant contribution to the field by introducing DYNTOM, an innovative benchmark designed to evaluate Theory of Mind capabilities in language models within authentic social contexts. However, due to constraints on computing resources and budget, a limitation lies in its evaluation scope, which encompasses ten language models with an emphasis on representative models. While this selection includes prominent models such as GPT-4 and Llama, the focus potentially overlooks insights that could be gained from examining other emerging open-source models and commercial models, such as Claude.

Limited Prompt Methods  We use vanilla and CoT prompting methods for evaluation. Other methods, such as think-twice (Wilf et al., 2024), belief tracker (Sclar et al., 2023), and self-consistency (Wang et al., 2023), could also be explored to enhance LLMs' ToM performance within authentic social contexts.
Limited mental state types and evaluation modality While our framework effectively models the interplay between belief, emotion, intention, and action based on established psychological theory (D'Andrade, 1995), there are opportunities to expand the scope of mental states evaluated. Future work could explore additional mental states, such as knowledge, to provide even richer insights into language models' ToM capabilities. While our dialogue-based evaluation approach has proven effective in assessing models' ToM abilities in dynamic contexts, future research could explore how these models perform in multimodal contexts that include visual and auditory cues. This extension would complement our text-based findings by examining how models track temporal changes in mental states across different modalities, though our current framework already provides robust insights into models' social reasoning capabilities.

Ethics Statement

Annotators and contents We strictly adhere to the ACL Code of Ethics. We placed high importance on ensuring the comfort and well-being of our annotators, advising them to stop the annotation process if they came across any information that caused them discomfort. We paid annotators at a rate of 2-3 times their local hourly minimum wage. We instructed the annotators to validate the data without bias and to keep the content free from unsafe, toxic, biased, offensive, and harmful material. We utilize the models in accordance with their designated purpose. In summary, we make every effort to adhere
to the ethical norms set forth by ACL.

Ethical Considerations The theory of mind is a distinctive social cognitive capability that is intrinsic to humans. Assessing the Theory of Mind capacities of Large Language Models using DYNTOM may result in anthropomorphic interpretations, attributing human-like qualities to LLMs. Nonetheless, it is imperative to clarify that our objective is not to anthropomorphize LLMs, but to evaluate their capacity to comprehend and interpret human mental states, thus enhancing AI's interaction with humans in the social context.

Acknowledgements

We would like to thank all reviewers for their insightful comments and suggestions to help improve the paper. This work was supported by the Research Grants Council of Hong Kong (GRF 15209724 and TRS T43-518/24-N).

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Gati Aher, Rosa I. Arriaga, and Adam Tauman Kalai. 2023. Using large language models to simulate multiple humans and replicate human subject studies. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.

Zhuang Chen, Jincenzi Wu, Jinfeng Zhou, Bosi Wen, Guanqun Bi, Gongyao Jiang, Yaru Cao, Mengting Hu, Yunghwei Lai, Zexuan Xiong, and Minlie Huang. 2024. ToMBench: Benchmarking theory of mind in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15959-15983, Bangkok, Thailand. Association for Computational Linguistics.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. 2023. Faith and fate: Limits of transformers on compositionality. In Thirty-seventh Conference on Neural Information Processing Systems.

Roy D'Andrade. 1995. The development of cognitive anthropology. Cambridge University Press.

Katherine Farrow, Gilles Grolleau, and Lisette Ibanez. 2017. Social norms and pro-environmental behavior: A review of the evidence. Ecological Economics, 140:1-13.

Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah Goodman. 2023. Understanding social reasoning in language models with language models. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah Goodman. 2024. Understanding social reasoning in language models with language models. Advances in Neural Information Processing Systems, 36.

Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Dan Zhang, Diego Rojas, Guanyu Feng, Hanlin Zhao, et al. 2024. ChatGLM: A family of large language models from GLM-130B to GLM-4 all tools. arXiv preprint arXiv:2406.12793.

Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. 2023. Evaluating large language models in generating synthetic HCI research data: a case study. In Proceedings of the 2023
CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery.

Wenyue Hua, Lizhou Fan, Lingyao Li, Kai Mei, Jianchao Ji, Yingqiang Ge, Libby Hemphill, and Yongfeng Zhang. 2023. War and peace (WarAgent): Large language model-based multi-agent simulation of world wars. arXiv preprint arXiv:2311.17227.

Chuanyang Jin, Yutong Wu, Jing Cao, Jiannan Xiang, Yen-Ling Kuo, Zhiting Hu, Tomer Ullman, Antonio Torralba, Joshua Tenenbaum, and Tianmin Shu. 2024. MMToM-QA: Multimodal theory of mind question answering. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16077-16102, Bangkok, Thailand. Association for Computational Linguistics.

Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, and Maarten Sap. 2023. FANToM: A benchmark for stress-testing machine theory of mind in interactions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14397-14413, Singapore. Association for Computational Linguistics.

Matthew Le, Y-Lan Boureau, and Maximilian Nickel. 2019. Revisiting the evaluation of theory of mind through question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5872-5877, Hong Kong, China. Association for Computational Linguistics.

Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173.

Tianjian Liu, Hongzheng Zhao, Yuheng Liu, Xingbo Wang, and Zhenhui Peng. 2024b. Compeer: A generative conversational agent for proactive peer support. arXiv preprint arXiv:2407.18064.
Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, and Tom Griffiths. 2018. Evaluating theory of mind in question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2392-2400, Brussels, Belgium. Association for Computational Linguistics.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1-22.

Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2022. Social simulacra: Creating populated prototypes for social computing systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1-18.

David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4):515-526.

Sahand Sabour, Siyang Liu, Zheyuan Zhang, June Liu, Jinfeng Zhou, Alvionna Sunaryo, Tatia Lee, Rada Mihalcea, and Minlie Huang. 2024. EmoBench: Evaluating the emotional intelligence of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5986-6004, Bangkok, Thailand. Association for Computational Linguistics.

Maarten Sap, Ronan Le Bras, Daniel Fried, and Yejin Choi. 2022. Neural theory-of-mind? On the limits of social intelligence in large LMs. In Proceedings of
the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3762-3780, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. 2023. Minding language models' (lack of) theory of mind: A plug-and-play multi-character belief tracker. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13960-13980, Toronto, Canada. Association for Computational Linguistics.

Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. 2024. Clever Hans or neural theory of mind? Stress testing social reasoning in large language models. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2257-2273, St. Julian's, Malta. Association for Computational Linguistics.

Haojun Shi, Suyu Ye, Xinyu Fang, Chuanyang Jin, Leyla Isik, Yen-Ling Kuo, and Tianmin Shu. 2024. MuMA-ToM: Multi-modal multi-agent theory of mind. Preprint, arXiv:2408.12574.

Daniel Stokols. 1978. Environmental psychology.

Jonathan H Turner. 1988. A Theory of Social Interaction. Stanford University Press.

Tomer Ullman. 2023. Large language models fail on trivial alterations to theory-of-mind tasks. arXiv preprint arXiv:2302.08399.

Jiashuo Wang, Yang Xiao, Yanran Li, Changhe Song, Chunpu Xu, Chenhao Tan, and Wenjie Li. 2024. Towards a client-centered assessment of LLM therapists by client simulation. arXiv preprint arXiv:2406.12266.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.

Alex Wilf, Sihyun Lee, Paul Pu Liang, and Louis-Philippe Morency. 2024. Think twice: Perspective-taking improves large language models' theory-of-mind capabilities. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8292-8308, Bangkok, Thailand. Association for Computational Linguistics.

Yufan Wu, Yinghui He, Yilin Jia, Rada Mihalcea, Yulong Chen, and Naihao Deng. 2023. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10691-10706, Singapore. Association for Computational Linguistics.

Yang Xiao, Yi Cheng, Jinlan Fu, Jiashuo Wang, Wenjie Li, and Pengfei Liu. 2023. How far are we from believable AI agents? A framework for evaluating the believability of human behavior simulation. arXiv preprint arXiv:2312.17115.

Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, and Guohao Li. 2024. Can large language model agents simulate human trust behavior? In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Hainiu Xu, Runcong Zhao, Lixing Zhu, Jinhua Du, and Yulan He. 2024. OpenToM: A comprehensive benchmark
for evaluating theory-of-mind reasoning capabilities of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8593-8623, Bangkok, Thailand. Association for Computational Linguistics.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Se-eun Yoon, Zhankui He, Jessica Echterhoff, and Julian McAuley. 2024. Evaluating large language models as generative user simulators for conversational recommendation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1490-1504, Mexico City, Mexico. Association for Computational Linguistics.

Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2023. NormBank: A knowledge bank of situational social norms. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7756-7776, Toronto, Canada. Association for Computational Linguistics.

A The construction of DYNTOM

A.1 The candidate pool of social locations

The social location describes the environments where individuals live, work, and learn, which can significantly impact their mental states and behavior (Stokols, 1978). As shown in Figure 5, we collected 13 types of social locations in total, adding up to 261 locations.

A.2 The candidate pool of profiles

We include 7 aspects in each character's profile: surname, name, gender, occupation, education, race, and personality traits. Their values can be found in Figures 7, 8, 9, and 10. The sources of the surname, name, and occupation statistics are the U.S. Census Bureau, the United States Social Security Administration, and the Bureau of Labor Statistics, respectively.
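The profile construction in A.2 amounts to drawing each of the 7 aspects from its candidate pool. A minimal sketch, assuming uniform random sampling (the pool excerpts below are tiny subsets of Figures 7-10, and the sampling logic is our illustration, not the paper's code):

```python
import random

# Hypothetical sketch of sampling a character profile from the candidate
# pools of A.2. The pools below are small excerpts of Figures 7-10.
POOLS = {
    "surname": {"White": ["Olson", "Snyder", "Sexton"],
                "Hispanic or Latino": ["Garcia", "Rodriguez"]},
    "name": {"Man": ["James", "Michael"], "Woman": ["Mary", "Patricia"]},
    "occupation": {"Man": ["Electrician", "Pilot"], "Woman": ["Teacher", "Nurse"]},
    "education": ["primary education", "high school", "doctorate degree"],
    "personality": ["ISTJ", "ESFP", "ISTP"],
}

def sample_profile(rng: random.Random) -> dict:
    """Draw one profile covering the 7 aspects listed in A.2."""
    gender = rng.choice(["Man", "Woman"])
    race = rng.choice(list(POOLS["surname"]))
    return {
        "gender": gender,
        "race": race,
        # Surnames are conditioned on race; names and occupations on gender,
        # mirroring how the pools in Figures 7-9 are organized.
        "surname": rng.choice(POOLS["surname"][race]),
        "name": rng.choice(POOLS["name"][gender]),
        "occupation": rng.choice(POOLS["occupation"][gender]),
        "education": rng.choice(POOLS["education"]),
        "personality": rng.choice(POOLS["personality"]),
    }

profile = sample_profile(random.Random(0))
```

Seeding the generator makes the sampled cast reproducible across dataset builds.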
Figure 6 shows an example of the social background.

A.3 The prompt used to generate the mental state trajectory

As illustrated in Figure 11, this prompt is used to generate the mental state trajectory. The corresponding information is filled into the '{}' and '[]' placeholders of the prompt. An example of the mental state trajectory is shown in Figure 12.

A.4 The prompt used to generate the social scenarios

As illustrated in Figure 13, this prompt is used to generate the social scenarios. The corresponding information is filled into the '{}' and '[]' placeholders of the prompt. An example of the social scenario is shown in Figure 14.

A.5 The templates for the four types of questions and question examples

We apply four predefined question templates to the social stage to generate questions, 71 questions for every social context in total. The four types of questions are: (1) (Understanding-1) What is the main character's ToM reasoning item in a specific scenario? (2) (Transformation-1) Does a ToM reasoning item change from scenario A to scenario B? (3) (Transformation-2) What causes a ToM reasoning item to change from scenario A to scenario B? (4) (Transformation-3) How does the ToM reasoning item change across all the scenarios? The templates and the example of
the four types of questions are shown in Figure 15.

Model          Version        Size   Context Length
GPT-4o         2024-05-13     ~      128k
GPT-4-Turbo    2024-04-09     ~      128k
Llama-3.1-8B   Instruct       8B     128k
Llama-3.1-70B  Instruct       70B    128k
Mistral-7B     Instruct-v0.3  7B     32k
Mixtral-8x7B   Instruct-v0.1  8x7B   32k
Qwen2-7B       Instruct       7B     128k
Qwen2-72B      Instruct       72B    128k
DeepSeek-V2    Lite-Chat      16B    32k
GLM-4          9b-chat        9B     128k

Table 6: The details of the models evaluated in our benchmark.

A.6 Human validation of the quality of DYNTOM

We use Argilla as the annotation platform. Figure 16 shows the annotation interface for data validation.

B Experiments

B.1 Model details

We evaluate a total of 10 popular LLMs: GPT-4o, GPT-4-Turbo, Llama-3.1-8B, Llama-3.1-70B, Mistral-7B, Mixtral-8x7B, Qwen2-7B, Qwen2-72B, DeepSeek-V2, and GLM-4. For all the LLMs, we strictly abide by their terms and get access through official APIs or model weights. Details about model versions, parameter sizes, and context window sizes are shown in Table 6.

B.2 Prompting methods

We employ two prompting methods: vanilla prompting, which directly asks LLMs to answer the questions, and CoT prompting, which elicits step-by-step reasoning before answering. The prompts used for the two methods are shown in Figure 17.

B.3 Case study for CoT prompting

Both the ToM-reasoning-item and question-type results in Table 3 indicate that CoT prompting doesn't always improve LLMs' ToM reasoning ability. We present a failure case of GPT-4o with CoT prompting in Figure 18.
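The difference between the two prompting methods in B.2 can be sketched as below. The prompt strings here are illustrative stand-ins; the actual wording used in the paper appears in Figure 17:

```python
# Hedged sketch of the two prompting methods described in B.2.
# The instruction strings are assumptions, not the paper's exact prompts.
def build_prompt(context: str, question: str, method: str = "vanilla") -> str:
    """Assemble an evaluation prompt for one question.

    'vanilla' asks directly for the answer; 'cot' elicits step-by-step
    reasoning before answering.
    """
    base = f"{context}\n\nQuestion: {question}\n"
    if method == "vanilla":
        return base + "Answer with the option letter only."
    if method == "cot":
        return base + "Let's think step by step, then give the option letter."
    raise ValueError(f"unknown prompting method: {method}")

vanilla = build_prompt("Scenario 1: ...", "What does Maria believe?", "vanilla")
cot = build_prompt("Scenario 1: ...", "What does Maria believe?", "cot")
```

Keeping the context and question identical across methods isolates the effect of the reasoning instruction, which is what the CoT comparison in Table 3 measures.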
The Candidate Pool of Social Locations

Transportation and Travel: airplane cabin, airplane, airport terminal, airport, bus stop, bus, boat, car, helicopter, harbor, dock, hot air balloon, ship, taxi, train compartment, train station, subway platform, subway

Accommodation and Residential: apartment, backyard, balcony, beach house, bedroom, cabin, campsite, child's room, dorm room, hotel lobby, hotel pool, hotel room, hotel, house entrance, house safe, living room, nursery, nursing home, private deck, residential neighborhood, yard

Food and Beverage: bakery, bar, cafe, cafeteria, deli restaurant, dining hall, fast food restaurant, outdoor restaurant, pub, restaurant, steak house

Shopping and Retail: arcade, bookstore, boutique, camera shop, car dealership, clothing store, department store, drugstore, gift shop, grocery store, hardware store, jewelry shop, liquor store, market, music store, pawn shop, shop, shopping mall, showroom, store

Entertainment and Leisure: amusement park, arcade, balloon festival, ballroom, cinema, circus, fair, game room, golf course, museum, nightclub, outdoor cinema, paintball park, park, performance hall, ski resort, theater, yoga studio, zoo

Education: classroom, college, kindergarten, biology laboratory, lab, library, school bus, school theater, school, university

Health and Wellness: alcoholics ward, clinic, dentist's office, doctor's office, hospital, mental institution, nurse room, veterinarian's office

Work and Office: bank, campaign room, conference room, construction site, copy center, government office, home office, office

Military and Law Enforcement: border, police car, police station, border checkpoint

Places of Worship and Ceremony: altar, chapel, church, funeral parlour, temple

Nature and Outdoors: canyon, cliff, coast, cornfield, desert, forest, lake, lawn, meadow, mountain, ocean, plain, river, waterfall, woods

Sport and Fitness: athletic field, basketball court, gym, gymnasium, hockey rink, locker room, mini golf course, polo field, pool, riding arena, running track, soccer field, stadium, tennis court, volleyball field

Miscellaneous: animal shelter, arena, art gallery, attic, auditorium, backyard, basement, bathroom, bazaar, beach, beauty salon, boathouse, bowling alley, boxing hall, bridal shop, bridge, broadcast studio, castle, cave, cemetery, city hall, city plaza, cloakroom, closet, clubhouse, community center, country road, courthouse, courtyard, dance studio, darkroom, drawing room, elevator, excavation, exhibition hall, farm, film studio, football field, garage, garden, gas station, gun shop, hall, hangar, hawaii, hotel, junkyard, kitchen table, kitchen, lake, legislative chamber, liquor store, living room, market, massage parlor, monument, museum, newsroom, outdoor pool, patio, performance hall, plaza, political rally, porch, post office, repair shop, restroom, road, rodeo, rooftop, rural road, saloon, smoking room, sofa, stable, stage, study room, tattoo studio, telephone booth, TV studio, waiting room

Figure 5: The candidate pool of social locations.

An example of the social context:
social setting: "temple"
social setting type: "Places of Worship and Ceremony"
main character: "Maria Thai"
scenario numbers: 5
characters information: "**Main Character**: Maria Thai, a woman, is a Social Worker whose race is Native Hawaiian and Other Pacific Islander. Maria Thai obtained a doctorate degree and has an ESFP personality. **Supporting Characters**: **1**: Kelly Sexton, the woman, is a Fitness Trainer whose race is White. Kelly Sexton obtained a primary education and has an ISTP personality."

Figure 6: An example of the social context.
The Candidate Pool of Surnames

American Indian and Alaska Native: Begay, Yazzie, Benally, Tsosie, Nez, Begaye, Etsitty, Becenti, Yellowhair, Manygoats, Wauneka, Manuelito, Apachito, Bedonie, Calabaza, Peshlakai, Claw, Roanhorse, Goldtooth, Etcitty, Tsinnijinnie, Notah, Clah, Atcitty, Twobulls, Werito, Hosteen, Yellowman, Attakai, Bitsui, Delgarito, Henio, Goseyun, Keams, Secatero, Declay, Tapaha, Beyale, Haskie, Cayaditto, Blackhorse, Ethelbah, Tsinnie, Walkingeagle, Altaha, Bitsilly, Wassillie, Benallie, Smallcanyon, Littledog, Cosay, Clitso, Tessay, Secody, Bigcrow, Tabaha, Chasinghawk, Blueeyes, Olanna, Blackgoat, Cowboy, Kanuho, Shije, Gishie, Littlelight, Laughing, Whitehat, Eriacho, Runningcrane, Chinana, Kameroff, Spottedhorse, Arcoren, Whiteplume, Dayzie, Spottedeagle, Heavyrunner, Standingrock, Poorbear, Ganadonegro, Ayze, Whiteface, Yepa, Talayumptewa, Madplume, Bitsuie, Tsethlikai, Ahasteen, Dosela, Birdinground, Todacheenie, Bitsie, Todacheene, Bullbear, Lasiloo, Keyonnie, Notafraid, Colelay, Kallestewa, Littlewhiteman

Asian and Native Hawaiian and Other Pacific Islander: Nguyen, Kim, Patel, Tran, Chen, Li, Le, Wang, Yang, Pham, Lin, Liu, Huang, Wu, Zhang, Shah, Huynh, Yu, Choi, Ho, Kaur, Vang, Chung, Truong, Phan, Xiong, Lim, Vo, Vu, Lu, Tang, Cho, Ngo, Cheng, Kang, Tan, Ng, Dang, Do, Ly, Han, Hoang, Bui, Sharma, Chu, Ma, Xu, Zheng, Song, Duong, Liang, Sun, Zhou, Thao, Zhao, Shin, Zhu, Leung, Hu, Jiang, Lai, Gupta, Cheung, Desai, Oh, Ha, Cao, Yi, Hwang, Lo, Dinh, Hsu, Chau, Yoon, Luu, Trinh, He, Her, Luong, Mehta, Moua, Tam, Ko, Kwon, Yoo, Chiu, Su, Shen, Pan, Dong, Begum, Gao, Guo, Chowdhury, Vue, Thai, Jain, Lor, Yan, Dao

Black or African American: Smalls, Jeanbaptiste, Diallo, Kamara, Pierrelouis, Gadson, Jeanlouis, Bah, Desir, Mensah, Boykins, Chery, Jeanpierre, Boateng, Owusu, Jama, Jalloh, Sesay, Ndiaye, Abdullahi, Wigfall, Bienaime, Diop, Edouard, Toure, Grandberry, Fluellen, Manigault, Abebe, Sow, Traore, Mondesir, Okafor, Bangura, Louissaint, Cisse, Osei, Calixte, Cephas, Belizaire, Fofana, Koroma, Conteh, Straughter, Jeancharles, Mwangi, Kebede, Mohamud, Prioleau, Yeboah, Appiah, Ajayi, Asante, Filsaime, Hardnett, Hyppolite, Saintlouis, Jeanfrancois, Ravenell, Keita, Bekele, Tadesse, Mayweather, Okeke, Asare, Ulysse, Saintil, Tesfaye, Jeanjacques, Ojo, Nwosu, Okoro, Fobbs, Kidane, Petitfrere, Yohannes, Warsame, Lawal, Desta, Veasley, Addo, Leaks, Gueye, Mekonnen, Stfleur, Balogun, Adjei, Opoku, Coaxum, Vassell, Prophete, Lesane, Metellus, Exantus, Hailu, Dorvil, Frimpong, Berhane, Njoroge, Beyene

Hispanic or Latino: Garcia, Rodriguez, Martinez, Hernandez, Lopez, Gonzalez, Perez, Sanchez, Ramirez, Torres, Flores, Rivera, Gomez, Diaz, Morales, Gutierrez, Ortiz, Chavez, Ruiz, Alvarez, Castillo, Jimenez, Vasquez, Moreno, Herrera, Medina, Aguilar, Vargas, Guzman, Mendez, Munoz, Salazar, Garza, Soto, Vazquez, Alvarado, Delgado, Pena, Contreras, Sandoval, Guerrero, Rios, Estrada, Ortega, Nunez, Maldonado, Dominguez, Vega, Espinoza, Rojas, Marquez, Padilla, Mejia, Juarez, Figueroa, Avila, Molina, Campos, Ayala, Carrillo, Cabrera, Lara, Robles, Cervantes, Solis, Salinas, Fuentes, Velasquez, Aguirre, Ochoa, Cardenas, Calderon, Rivas, Serrano, Rosales, Castaneda, Gallegos, Ibarra, Suarez, Orozco, Salas, Escobar, Velazquez, Macias, Zamora, Villarreal, Barrera, Pineda, Santana, Trevino, Lozano, Rangel, Arias, Mora, Valenzuela, Zuniga, Melendez, Galvan, Velez, Meza

White: Olson, Snyder, Wagner, Meyer, Schmidt, Ryan, Hansen, Hoffman, Johnston, Larson, Carlson, Obrien, Jensen, Hanson, Weber, Walsh, Schultz, Schneider, Keller, Beck, Schwartz, Becker, Wolfe, Zimmerman, Mccarthy, Erickson, Klein, Oconnor, Swanson, Christensen, Fischer, Wolf, Gallagher, Schroeder, Parsons, Bauer, Mueller, Hartman, Kramer, Flynn, Owen, Shaffer, Hess, Olsen, Petersen, Roth, Hoover, Weiss, Decker, Yoder, Larsen, Sweeney, Foley, Hensley, Huffman, Cline, Oneill, Koch, Brennan, Berg, Russo, Macdonald, Kline, Jacobson, Berger, Blankenship, Bartlett, Odonnell, Stein, Stout, Sexton, Nielsen, Howe, Morse, Knapp, Herman, Stark, Hebert, Schaefer, Reilly, Conrad, Donovan, Mahoney, Hahn, Peck, Boyle, Hurley, Mayer, Mcmahon, Case, Duffy, Friedman, Fry, Dougherty, Crane, Huber, Moyer, Krueger, Rasmussen, Brandt

Figure 7: The races and their corresponding 100 most popular surnames.
The Candidate Pool of Names

Man: James, Michael, Robert, John, David, William, Richard, Joseph, Thomas, Christopher, Charles, Daniel, Matthew, Anthony, Mark, Donald, Steven, Andrew, Paul, Joshua, Kenneth, Kevin, Brian, Timothy, Ronald, George, Jason, Edward, Jeffrey, Ryan, Jacob, Nicholas, Gary, Eric, Jonathan, Stephen, Larry, Justin, Scott, Brandon, Benjamin, Samuel, Gregory, Alexander, Patrick, Frank, Raymond, Jack, Dennis, Jerry, Tyler, Aaron, Jose, Adam, Nathan, Henry, Zachary, Douglas, Peter, Kyle, Noah, Ethan, Jeremy, Christian, Walter, Keith, Austin, Roger, Terry, Sean, Gerald, Carl, Dylan, Harold, Jordan, Jesse, Bryan, Lawrence, Arthur, Gabriel, Bruce, Logan, Billy, Joe, Alan, Juan, Elijah, Willie, Albert, Wayne, Randy, Mason, Vincent, Liam, Roy, Bobby, Caleb, Bradley, Russell, Lucas

Woman: Mary, Patricia, Jennifer, Linda, Elizabeth, Barbara, Susan, Jessica, Karen, Sarah, Lisa, Nancy, Sandra, Betty, Ashley, Emily, Kimberly, Margaret, Donna, Michelle, Carol, Amanda, Melissa, Deborah, Stephanie, Rebecca, Sharon, Laura, Cynthia, Dorothy, Amy, Kathleen, Angela, Shirley, Emma, Brenda, Pamela, Nicole, Anna, Samantha, Katherine, Christine, Debra, Rachel, Carolyn, Janet, Maria, Olivia, Heather, Helen, Catherine, Diane, Julie, Victoria, Joyce, Lauren, Kelly, Christina, Ruth, Joan, Virginia, Judith, Evelyn, Hannah, Andrea, Megan, Cheryl, Jacqueline, Madison, Teresa, Abigail, Sophia, Martha, Sara, Gloria, Janice, Kathryn, Ann, Isabella, Judy, Charlotte, Julia, Grace, Amber, Alice, Jean, Denise, Frances, Danielle, Marilyn, Natalie, Beverly, Diana, Brittany, Theresa, Kayla, Alexis, Doris, Lori, Tiffany

Figure 8: The genders and their corresponding 100 most popular names.
The Candidate Pool of Occupations

Man: Construction Worker, Electrician, Plumber, Carpenter, Mechanic, Software Developer, Engineer, Pilot, Truck Driver, Police Officer, Firefighter, Doctor, Lawyer, Architect, Welder, Heavy Equipment Operator, Factory Worker, Farmer, Security Guard, Salesperson, Financial Analyst, Accountant, Real Estate Agent, Chef, IT Support Specialist, Project Manager, Business Analyst, Marketing Manager, Entrepreneur, Military Personnel, Maintenance Worker, Landscaper, Sports Coach, Athlete, Bartender, Warehouse Worker, Janitor, Civil Engineer, Graphic Designer, HVAC Technician, Delivery Driver, Scientist, Researcher, College Professor, Banker, Insurance Agent, Surveyor, Veterinarian, Barber, Personal Trainer

Woman: Teacher, Nurse, Receptionist, Administrative Assistant, Customer Service Representative, Cashier, Retail Salesperson, Waitress, Medical Assistant, Hairdresser, Childcare Worker, Social Worker, Librarian, Human Resources Manager, Marketing Manager, Accountant, Dental Hygienist, Pharmacist, Flight Attendant, Secretary, Occupational Therapist, Physical Therapist, Registered Nurse, Home Health Aide, Event Planner, Public Relations Specialist, Dietitian, Psychologist, Interior Designer, Speech-Language Pathologist, Elementary School Teacher, Fashion Designer, Real Estate Agent, Paralegal, Personal Care Aide, Health Services Manager, Legal Secretary, Bookkeeper, Preschool Teacher, Technical Writer, Graphic Designer, Midwife, Bartender, Bank Teller, Office Manager, Translator, Fitness Trainer, Editor, Market Research Analyst, Sales Representative

Figure 9: The genders and their corresponding most popular occupations.

The Candidate Pool of Personality Traits and Educations

Personality Traits: ISTJ "The Inspector", ISFJ "The Protector", INFJ "The Counselor", INTJ "The Mastermind", ISTP "The Craftsman", ISFP "The Composer", INFP "The Healer", INTP "The Architect", ESTP "The Dynamo", ESFP "The Performer", ENFP "The Champion", ENTP "The Visionary", ESTJ "The Supervisor", ESFJ "The Provider", ENFJ "The Teacher", ENTJ "The Commander"

Educations: primary education, middle school, high school, bachelor's degree, master's degree, doctorate degree

Figure 10: The personality traits and educations.

Prompt for Mental State Trajectory Design

"[character information]: {characters_information} I want to generate a complete and complex story that happened in {social_setting}. The story includes {scenario_number} scenarios. Every scenario includes a conversation and the background of the conversation between {main_character} and a supporting character. These scenarios are to test the readers' theory of mind reasoning abilities in four mental states: belief, emotion, intention, and action. The four mental states influence each other: a, belief influences emotion; b, belief and emotion influence intention; c, belief, emotion, and intention influence action. You do not need to generate the complete story and scenarios.
But you could help me to:
1. Describe the relationship between {main_character} and the other provided character.
2. Design what the belief, emotion, intention, and action of {main_character} are in every scenario when {main_character} interacts with supporting characters.
3. The previous scenario's mental states may influence the following scenarios' mental states. The changes in mental states are caused both by previous mental states and the plot.
4. I want the mental states' change to be complex enough: at least one mental state of [belief, emotion, intention] changes in all scenarios. You should also describe how each mental state influences the others. The JSON format should be:
{{[scenario number]: {{[belief]: [content], [emotion]: [content], [intention]: [content], [action]: [content], [influence]: {{[belief->emotion]: [content], [belief&emotion->intention]: [content], [belief&emotion&intention->action]: [content]}}}}}}
5. After steps 1-4, you should analyze all mental states of {main_character}: if they changed between scenarios, you should also provide the main plot and previous mental states that cause the change. The format should be:
[belief]: [if changed: True or False], [if changed] [scenario number]: [mental state's value] -> [scenario number]: [mental state's value] -> ... Reasons: [scenario number]->[scenario number]: [reason]; [scenario number]->[scenario number]: [reason] ...
[emotion]: the format is the same as [belief].
[intention]: the format is the same as [belief].
[action]: the format is the same as [belief].
6. The interaction in every scenario should be between {main_character} and supporting characters.
7. Respond with JSON format: {JSON_format}.

Figure 11: The prompt for the generation of the relationship between characters and the mental state trajectory.
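Filling the curly-brace placeholders of the Figure 11 prompt is a plain template-substitution step; a minimal sketch (the template text below is an abbreviated stand-in for the full prompt, and the field values are example inputs):

```python
# Sketch of filling the {curly-brace} placeholders of the Figure 11 prompt.
# The template is abbreviated; field names follow the figure.
PROMPT_TEMPLATE = (
    "[character information]: {characters_information}\n"
    "I want to generate a complete and complex story that happened in "
    "{social_setting}. The story includes {scenario_number} scenarios "
    "between {main_character} and supporting characters."
)

prompt = PROMPT_TEMPLATE.format(
    characters_information="Maria Thai, a Social Worker ...",
    social_setting="temple",
    scenario_number=5,
    main_character="Maria Thai",
)
```

Square-bracket markers such as [belief] in the full prompt are left verbatim for the model; only the curly-brace fields are substituted at generation time.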
Example of Mental Mental State Trajectory Designrelationships among characters"John Atcitty and Beverly Yan are colleagues at a community center where John teaches equipment operation and Beverly teaches adult education classes. They share a mutual respect for each other's skills and often collaborate on projects that require
https://arxiv.org/abs/2505.17663v1
both practical and theoretical knowledge."

Mental state analysis in every scenario:

scenario 1: "belief": "John believes Beverly might think less of him due to his lower level of formal education." "emotion": "John feels insecure and slightly anxious." "intention": "John intends to prove his competence." "action": "John goes out of his way to demonstrate his expertise in operating complex machinery during a joint class project." "influence": "belief->emotion": "John's belief about Beverly's perception increases his anxiety." "belief&emotion->intention": "Feeling insecure and believing Beverly might underestimate him, John decides to showcase his skills." "belief&emotion&intention->action": "His intention to prove himself leads him to take proactive steps during the class."

scenario 2: "belief": "John now believes Beverly respects his skills, as she praised his demonstration." "emotion": "John feels relieved and proud." "intention": "John intends to engage more openly with Beverly in planning future classes." "action": "John suggests a new collaborative project to Beverly." "influence": "belief->emotion": "John's new belief that Beverly respects his skills makes him feel proud." "belief&emotion->intention": "Feeling proud and respected, John decides to deepen their collaboration." "belief&emotion&intention->action": "He proposes a new project idea to Beverly."

scenario 3: "belief": "John believes that Beverly might now expect even more from him, which could lead to stress." "emotion": "John feels pressured and a bit overwhelmed." "intention": "John intends to manage expectations by discussing his capabilities and limits." "action": "John has a frank conversation with Beverly about the realistic outcomes of their projects." "influence": "belief->emotion": "The belief that Beverly expects more from him causes John stress." "belief&emotion->intention": "Feeling overwhelmed, John decides to clarify his and Beverly's expectations." "belief&emotion&intention->action": "He discusses his concerns with Beverly to ensure they are on the same page."

scenario 4: "belief": "John believes that Beverly appreciates his honesty and sees him as a reliable partner." "emotion": "John feels content and valued." "intention": "John intends to continue being open and honest in all professional dealings." "action": "John commits to transparency in all future projects with Beverly." "influence": "belief->emotion": "John's belief in Beverly's appreciation makes him feel valued." "belief&emotion->intention": "Feeling valued, John decides to maintain honesty as his policy." "belief&emotion&intention->action": "His commitment to transparency is reinforced in his actions."

scenario 5: "belief": "John believes he and Beverly can take their collaborative efforts to a larger audience." "emotion": "John feels ambitious and excited." "intention": "John intends to propose a community-wide event showcasing their collaborative projects." "action": "John and Beverly begin planning a large-scale event." "influence": "belief->emotion": "The belief in their successful collaboration excites John." "belief&emotion->intention": "Feeling ambitious, John decides to expand their project's reach." "belief&emotion&intention->action": "He moves forward with plans for a larger event."

Analysis of mental states across scenarios:

Belief: "Changed": "True" "1": "Beverly might think less of him" "2": "Beverly respects his skills" "3": "Beverly might now expect more from him" "4": "Beverly appreciates his honesty" "5": "They can take their efforts to a larger audience" "Reasons": "1->2: Beverly's positive feedback; 2->3: John's self-imposed pressure; 3->4: Positive outcome of their honest discussion; 4->5: Success in previous projects"

Emotion: "Changed": "True" "1": "Insecure, anxious" "2": "Relieved, proud" "3": "Pressured, overwhelmed" "4": "Content, valued" "5": "Ambitious, excited" "Reasons": "1->2: Positive feedback from Beverly; 2->3: Increased expectations; 3->4: Positive communication; 4->5: Success and recognition"

Intention: "Changed": "True" "1": "Prove his competence" "2": "Engage more openly" "3": "Manage expectations" "4": "Continue being honest" "5": "Propose a community-wide event" "Reasons": "1->2: Gained confidence; 2->3: Need to manage stress; 3->4: Positive feedback on honesty; 4->5: Increased ambition from success"

Action: "Changed": "True" "1": "Demonstrates expertise" "2": "Suggests new project" "3": "Discusses capabilities" "4":
"Commits to transparency" "5": "Plans large-scale event" "Reasons": "1->2: Confidence from success; 2->3: Need to clarify expectations; 3->4: Commitment to honesty; 4->5: Desire to expand impact"

Figure 12: An example of the mental state trajectory.

Prompt for social scenarios: "I want to generate a complete story, including some scenarios. Every scenario includes a conversation and the background of the conversation between {main_character} and supporting characters. There exist 2 characters in one conversation. These scenarios are to test the theory of mind reasoning abilities in belief, emotion, intention, action. The mental states will influence each other: a, belief will influence emotion; b, belief and emotion will influence intention; c, belief, emotion and intention will influence action. I will give you the information of the characters. And I also will give you the sketch of the complete story, including the relationship between the main character and the supporting character, the mental states of the main character in every scenario, how the mental states influence each other in every scenario, and how each mental state changes across all the scenarios. As follows: [characters information]: {characters_information}. [story sketch]: {story_sketch}. I want you to: [1] complete the story based on the character information and the story sketch. [2] the format should be in JSON format: {json_format} [3] just give the complete story; you do not need to start with your understanding of the task. [4] these conversations in every scenario have 3-5 rounds. The conversation should clearly show the mental states of the characters."

Figure 13: The prompt for the generation of the scenarios.

Example of scenarios:

scenario 1 background: "During a joint class project at the community center, John feels insecure about his educational background compared to Beverly's doctorate degree." dialogue: "John": "I've set up the equipment for today's demonstration. I hope it meets your expectations." "Beverly": "John, I've always admired how you handle these machines. Education isn't just about degrees. It's about what you can do with what you know."

scenario 2 background: "After receiving praise from Beverly for his expertise in operating machinery, John feels more confident and suggests a new project idea." dialogue: "John": "Thanks for your kind words the other day. I was thinking, maybe we could collaborate on something new? Perhaps a workshop that combines theory and practice?" "Beverly": "I love that idea, John! Your practical skills are invaluable, and I think we can create something really special together."

scenario 3 background: "Following their successful collaboration, John worries that Beverly might now have higher expectations, which makes him feel pressured." dialogue: "John": "Beverly, I need to be honest. I'm worried I might not always meet the high standards we've set recently." "Beverly": "John, I appreciate your honesty. Let's make sure we set realistic goals. It's our combined efforts that make these projects successful, not perfection."

scenario 4 background: "After a frank discussion about expectations, John believes that Beverly values his honesty and sees him as a reliable partner, enhancing his sense of worth." dialogue: "John": "I'm glad we had that talk. I feel like I can be open with you and that really matters to me." "Beverly": "Absolutely, John. I respect your openness and I trust your judgment completely. Let's keep this transparency going."

scenario 5 background: "Encouraged by their successful collaborations and mutual respect, John feels ambitious and proposes a larger community-wide event to showcase their
projects." dialogue: "John": "Beverly, what do you think about taking our collaboration to the next level? Maybe a community event that showcases what we've achieved together?" "Beverly": "John, that's a fantastic idea! I think it's the perfect way to demonstrate the impact of our work. Let's start planning!"

Figure 14: An example of the social scenarios.

Question templates and examples:

Understanding-1 template: "What is the {mental_key} of {main_character} in scenario {scenario_number}?" question example: "What is the belief of Angela Hwang in scenario 1?"

Transformation-1 template: "Does the {mental_key} of {main_character} change from scenario {scenario_number} to scenario {scenario_number+1}? If yes, from what to what?" question example: "Does the belief of Angela Hwang change from scenario 1 to scenario 2? If yes, from what to what?"

Transformation-2 template: "Why does the {mental_key} of {main_character} change from scenario {scenario_number} to scenario {scenario_number+1}?" question example: "Why does the belief of Angela Hwang change from scenario 1 to scenario 2?"

Transformation-3 template: "How does the {mental_state} of {main_character} change across the {scenario_numbers} scenarios?" question example: "How does the belief of Angela Hwang change across the 5 scenarios?"

Question example with options: content: "Why does the emotion of Melissa Decker change from scenario 1 to scenario 2?" options: "a. Jerry's reassurance changes her belief." "b. Realization of benefits influences her action." "c. Positive interaction with Jerry affects her emotions." "d. Confidence alters her intention." "e. Experience of working in a group influences her belief." "f. Conflict influences her intention." "g. Feeling appreciated changes her intention." "h. Jerry's conversation influences her belief." "i. Satisfaction and new belief affect her intention." "j. Reassurance affects her action." "k. Realizing benefits of collaboration increases satisfaction." "l. Conflict of ideas causes frustration." "m. Jerry's reassurance boosts confidence." "n. New intention affects her action." "o. Her focus on personal goals creates a new belief." "p. Frustration influences her action." ground truth: c

Figure 15: Examples of the question types and the templates used to generate them.

Figure 16: The platform to annotate the quality of the story.

Two Prompts for Vanilla and CoT Prompting:

Vanilla Prompt: "Answer the questions based on the story. {characters_information} \n{story} \n{questions_new}. Answer the question, and respond in JSON format: {[question_id]:[a, b, c or d]}. For example: {"type_d_how_1":"a"}"

CoT Prompt: "Answer the questions based on the story; first, think step by step, analyze the answers to the questions, and finally, output the most likely answers. {characters_information} \n{story} \n{questions_new}. Answer the question, and respond in JSON format: {[question_id]:[a, b, c or d]}. For example: {"type_d_how_1":"a"}"

Figure 17: The prompts used for vanilla and CoT prompting.

Case Study for CoT Prompting:

scenario 1 background: "Melissa is in high school, feeling the pressure of her peers' expectations. She believes that no one sees her as someone who will succeed. This belief makes her anxious and insecure about her future. Determined to prove everyone wrong, she isolates herself to focus solely on her studies." dialogue: "Melissa": "Hey Jerry, do you think people really believe I won't succeed?" "Jerry": "I don't know why you're so hard on yourself, Melissa. You have so much potential."

scenario 2 background: "After a conversation with Jerry, Melissa begins to believe that there are people who genuinely care about her. This makes her feel slightly optimistic and appreciated. She then decides to open up a little more to those who support her, leading her to accept Jerry's invitation to participate in a group project." dialogue: "Jerry": "Melissa, I'm working on this group project. Would
you like to join us?" "Melissa": "You know, I might just take you up on that. Thanks for thinking of me."

scenario 3 background: "While working on the group project, Melissa starts to believe that collaborating with Jerry might hinder her personal goals. This causes her to feel frustrated and conflicted. She considers withdrawing from the project to focus on her own objectives and tells Jerry that she needs more time to decide." dialogue: "Melissa": "Jerry, I'm not sure if I can continue with the project. I need to focus on my own goals." "Jerry": "I understand, Melissa. Take your time to decide. We're here if you change your mind."

scenario 4 background: "Jerry reassures Melissa that working as a team can enhance her skills rather than hinder her. This reassurance makes Melissa feel more confident and less anxious. She decides to commit to the project wholeheartedly, leading to active participation and contributing ideas." dialogue: "Jerry": "Melissa, teaming up could help you more than you think. It's a chance to learn and grow." "Melissa": "You know what, Jerry? You're right. I'm in. Let's make this project a success."

scenario 5 background: "Melissa realizes that collaboration brings unexpected benefits. She feels satisfied and more cooperative. With her newfound appreciation for teamwork, she thanks Jerry and proposes to continue collaborating outside of school projects." dialogue: "Melissa": "Jerry, I'm glad we teamed up. It's been more rewarding than I imagined." "Jerry": "I'm happy to hear that, Melissa. Maybe we can work on more projects together in the future?" "Melissa": "Absolutely! Let's keep this collaboration going."

question content: "Why does the emotion of Melissa Decker change from scenario 1 to scenario 2?" options: "a. Jerry's reassurance changes her belief." "b. Realization of benefits influences her action." "c. Positive interaction with Jerry affects her emotions." "d. Confidence alters her intention." "e. Experience of working in a group influences her belief." "f. Conflict influences her intention." "g. Feeling appreciated changes her intention." "h. Jerry's conversation influences her belief." "i. Satisfaction and new belief affect her intention." "j. Reassurance affects her action." "k. Realizing benefits of collaboration increases satisfaction." "l. Conflict of ideas causes frustration." "m. Jerry's reassurance boosts confidence." "n. New intention affects her action." "o. Her focus on personal goals creates a new belief." "p. Frustration influences her action." ground truth: c

GPT-4o with vanilla prompting: g

GPT-4o with CoT prompting: Let's analyze the provided scenarios and Melissa's emotions step by step:

1. **Scenario 1:** **Melissa:** "Hey Jerry, do you think people really believe I won't succeed?" **Jerry:** "I don't know why you're so hard on yourself, Melissa. You have so much potential." Analysis: Melissa shows self-doubt and seeks reassurance. Jerry provides positive reinforcement, indicating his belief in her potential.

2. **Scenario 2:** **Jerry:** "Melissa, I'm working on this group project. Would you like to join us?" **Melissa:** "You know, I might just take you up on that. Thanks for thinking of me." Analysis: Here, Melissa's tone changes to showing openness and willingness to participate in a group project, indicating a shift in her emotions towards a more positive outlook.

To determine why Melissa's emotions change from Scenario 1 to Scenario 2, let's look at some potential reasons: **a. Jerry's reassurance changes her belief:** Jerry's reassurance in Scenario 1 might indeed build Melissa's confidence, making her more open in Scenario 2. **b. Realization of benefits
influences her action:** Melissa might realize the potential benefits of working on a group project, influencing her decision to join. **c. Positive interaction with Jerry affects her emotions:** The encouraging interaction with Jerry in Scenario 1 positively impacts her mood. **d. Confidence alters her intention:** The increase in confidence after Jerry's reassurance may lead her to be more willing to participate. **e. Experience of working in a group influences her belief:** This doesn't seem applicable as she hasn't yet worked in a group. By process of elimination and considering the context: *h. Jerry's conversation influences her belief* seems to incorporate *a*, *c*, and *d*: Jerry's conversation (reassurance) leads to a positive emotional shift (confidence and positive interaction), which influences her belief in herself and her subsequent decision to join the group project. Thus, after thorough analysis, the most likely answer is: **h. Jerry's conversation influences her belief**

Figure 18: A case of CoT prompting on GPT-4o.
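The prompts in Figure 17 ask the model to reply in JSON such as `{"type_d_how_1":"a"}`. A minimal scoring sketch — a hypothetical helper, not the paper's actual evaluation code — could parse such a reply and compare each question's chosen option against the ground truth:

```python
import json

def score_answers(response_text: str, ground_truth: dict) -> float:
    """Return the fraction of questions whose chosen option matches the gold option."""
    answers = json.loads(response_text)  # e.g. {"type_d_how_1": "a"}
    correct = sum(
        1 for qid, gold in ground_truth.items()
        if str(answers.get(qid, "")).strip().lower() == gold.lower()
    )
    return correct / len(ground_truth)

# The case study above: ground truth is "c"; vanilla prompting answered "g".
print(score_answers('{"type_d_how_1": "g"}', {"type_d_how_1": "c"}))  # 0.0
print(score_answers('{"type_d_how_1": "c"}', {"type_d_how_1": "c"}))  # 1.0
```

In practice the judged answer would come from the model's (possibly noisy) output, so real evaluation code would also need to handle malformed JSON and missing question IDs; the sketch treats a missing ID as incorrect.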
QWENLONG-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning

Fanqi Wan, Weizhou Shen, Shengyi Liao, Yingcheng Shi, Chenliang Li, Ziyi Yang, Ji Zhang, Fei Huang, Jingren Zhou, Ming Yan∗
Tongyi Lab, Alibaba Group
https://github.com/Tongyi-Zhiwen/QwenLong-L1
https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B
https://modelscope.cn/models/iic/QwenLong-L1-32B

Abstract

Recent large reasoning models (LRMs) have demonstrated strong reasoning capabilities through reinforcement learning (RL). These improvements have primarily been observed within short-context reasoning tasks. In contrast, extending LRMs to effectively process and reason on long-context inputs via RL remains a critical unsolved challenge. To bridge this gap, we first formalize the paradigm of long-context reasoning RL, and identify key challenges in suboptimal training efficiency and unstable optimization process. To address these issues, we propose QWENLONG-L1, a framework that adapts short-context LRMs to long-context scenarios via progressive context scaling. Specifically, we utilize a warm-up supervised fine-tuning (SFT) stage to establish a robust initial policy, followed by a curriculum-guided phased RL technique to stabilize the policy evolution, enhanced with a difficulty-aware retrospective sampling strategy to incentivize policy exploration. Experiments on seven long-context document question-answering benchmarks demonstrate that QWENLONG-L1-32B outperforms flagship LRMs like OpenAI-o3-mini and Qwen3-235B-A22B, and achieves performance on par with Claude-3.7-Sonnet-Thinking, demonstrating leading performance among state-of-the-art LRMs. This work advances the development of practical long-context LRMs capable of robust reasoning across information-intensive environments.
[Figure 1: bar chart reporting accuracy (%) on DocMath, Frames, 2WikiMQA, HotpotQA, Musique, NarrativeQA, Qasper, and their average for QwenLong-L1-32B, R1-Distill-Qwen-32B, o3-mini, Qwen3-235B-A22B, Claude-3.7-Sonnet-Thinking, DeepSeek-R1, and o1-preview.]

Figure 1: Overall results of QWENLONG-L1 across seven long-context reasoning benchmarks. Starting from R1-Distill-Qwen-32B, QWENLONG-L1-32B achieves an average gain of 5.1 points, surpassing OpenAI-o3-mini and Qwen3-235B-A22B, and comparable to Claude-3.7-Sonnet-Thinking.

∗Corresponding author. Preprint. Work in progress. arXiv:2505.17667v2 [cs.CL] 27 May 2025

[Figure 2: four panels — slower reward improvements, more entropy reduction, more KL fluctuations, greater length value/variance.]

Figure 2: Comparison of training dynamics between short-context and long-context reasoning RL.
Long-context reasoning RL demonstrates two key challenges: suboptimal training efficiency, with slower improvements in reward score caused by greater reduction in entropy, and an unstable optimization process, with more fluctuations in KL divergence introduced by greater variance in longer outputs.

1 Introduction

Recent breakthroughs in large reasoning models (LRMs) have showcased significant improvements in reasoning capabilities, achieving performance comparable to human experts in complex problem-solving scenarios [49]. These advancements, exemplified by OpenAI-o1 [26, 15], DeepSeek-R1 [6, 11], and Qwen-QwQ [41, 43], have sparked extensive research efforts to explore and enhance a broad spectrum of reasoning tasks through reinforcement learning (RL), ranging from foundational logical reasoning [29, 47] to advanced challenges in programming [8, 7] and mathematics [24, 39], with innovations in RL algorithms driving progress in reasoning quality enhancements [3, 55, 23, 54]. Following RL fine-tuning, LRMs exhibit a phenomenon analogous to human "slow thinking" [4], characterized by the emergence of sophisticated problem-solving strategies such as divide-and-conquer and backtracking mechanisms in their extended chain-of-thought (CoT) reasoning outputs [46]. While this process enhances reasoning performance on short
context tasks (e.g., 4K tokens) [45, 22], its scalability to long-context scenarios (e.g., 120K tokens), which require robust contextual grounding and multi-step reasoning, remains unexplored. This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research [38, 27, 40], where LRMs must collect and process information from knowledge-intensive environments.

To shed light on this topic, we first introduce the concept of long-context reasoning RL. Different from short-context reasoning RL, which primarily relies on internal knowledge stored within model parameters, long-context reasoning RL necessitates that LRMs perform retrieval and grounding of relevant information from long-context inputs, followed by generation of reasoning chains based on the incorporated information [12, 30, 52]. To illustrate the differences between short-context and long-context reasoning RL, we conduct a preliminary experiment to compare the training dynamics in Figure 2. Our results reveal that long-context reasoning RL exhibits suboptimal training efficiency compared to its short-context counterpart, with (a) delayed reward convergence. This discrepancy stems from (b) a marked reduction in output entropy when processing long-context inputs, which restricts exploratory behavior during policy optimization. Furthermore, we identify an unstable optimization process, evidenced by (c) intermittent spikes in KL divergence. These instabilities are introduced by the inherent variance amplification due to (d) longer output lengths with heterogeneous input length distributions, leading to greater variability during policy updating.

To address these challenges, we propose QWENLONG-L1, a novel RL framework designed to facilitate the transition of LRMs from short-context proficiency to robust long-context generalization, as shown in Figure 3.
Inspired by recent studies on context extension during pretraining [9, 48, 10], our framework enhances short-context LRMs through progressive context scaling during RL training. The framework comprises three core components: a warm-up supervised fine-tuning (SFT) phase to initialize a robust policy, a curriculum-guided RL phase that facilitates stable adaptation from short to long contexts, and a difficulty-aware retrospective sampling mechanism that adjusts training complexity across stages to incentivize policy exploration. Leveraging recent RL algorithms, including GRPO [34] and DAPO [54], our framework integrates hybrid reward functions combining rule-based and model-based binary outcome rewards to balance precision and recall. Through strategic utilization of group relative advantages during policy optimization, it guides LRMs to learn effective reasoning patterns essential for long-context reasoning scenarios, resulting in robust long-context grounding and superior reasoning capabilities.

[Figure 3 shows the QWENLONG-L1 pipeline: the policy model samples outputs y_1 ... y_G for an input (x, c), rewards r_1 ... r_G are assigned via rule-based verification and LLM-as-a-judge, group computation yields advantages A_1 ... A_G for policy updating, with context length scaled progressively from Phase I to Phase K.]

Figure 3: Overview of QWENLONG-L1, which is a novel long-context reasoning RL training framework. The proposed framework integrates group-relative RL algorithms, hybrid reward mechanisms, and progressive context scaling strategies to enable stable adaptation from short-context to long-context LRMs with robust contextual grounding and multi-step reasoning capabilities.

In our experiments, we focus on document question answering (DocQA) [51, 17, 13] as a representative real-world long-context reasoning task. Specifically, we introduce DOCQA-RL-1.6K, a specialized RL training dataset comprising 1.6K DocQA problems spanning mathematical, logical, and multi-hop reasoning domains.
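The hybrid reward described above combines a rule-based binary check with a model-based verdict. A hedged sketch of one way to combine them — the `max` combination rule and the `llm_judge` stub are assumptions for illustration, not the paper's implementation:

```python
import re

def rule_based_reward(prediction: str, gold: str) -> float:
    """1.0 iff the answer extracted from a 'the answer is (...)' phrase matches gold."""
    m = re.search(r"the answer is\s*\(?([^)\n]+)\)?", prediction, re.IGNORECASE)
    extracted = (m.group(1) if m else prediction).strip().rstrip(".")
    return 1.0 if extracted.lower() == gold.strip().lower() else 0.0

def llm_judge(question: str, prediction: str, gold: str) -> float:
    # Hypothetical stub: in practice an LLM would return a 0/1 equivalence verdict
    # for answers the exact-match rule misses (paraphrases, reformatting).
    return 0.0

def hybrid_reward(question: str, prediction: str, gold: str) -> float:
    # Take the max so either verifier accepting yields reward 1: the rule
    # contributes precision on exact matches, the judge recovers recall.
    return max(rule_based_reward(prediction, gold),
               llm_judge(question, prediction, gold))

print(hybrid_reward("q", "Therefore, the answer is (42).", "42"))  # 1.0
```

The binary outcome keeps the reward compatible with the group-normalized advantage estimation used by GRPO and DAPO, where only the relative ranking of rewards within a group matters.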
Experimental results across seven long-context DocQA benchmarks demonstrate the superiority of QWENLONG-L1 compared to various proprietary and open-source LRMs.
Notably, QWENLONG-L1-14B achieves superior performance over Gemini-2.0-Flash-Thinking and Qwen3-32B, while QWENLONG-L1-32B outperforms OpenAI-o3-mini and Qwen3-235B-A22B, and even matches Claude-3.7-Sonnet-Thinking. Our analysis further identifies several critical insights in long-context reasoning RL optimization: (1) progressive context scaling promotes higher entropy and stabilizes KL divergence, enhancing training efficiency; (2) SFT proves to be an economical way to enhance performance, whereas RL unlocks the potential to achieve optimal performance; (3) RL naturally fosters specialized long-context reasoning behaviors that boost final performance, but imitating these behaviors does not translate into gains when applied to SFT. Our key contributions are summarized as follows:

• We conceptualize the paradigm of long-context reasoning RL and identify its unique challenges, making a further step towards developing practical long-context LRMs capable of grounding and integrating information for complex, real-world reasoning scenarios.

• We present QWENLONG-L1, the first RL framework designed for long-context LRMs. Through progressive context scaling, QWENLONG-L1 enables stable short-to-long context adaptation via group-relative RL optimization and hybrid reward mechanisms.

• We showcase the effectiveness of QWENLONG-L1 through comprehensive experiments across seven long-context document question answering benchmarks. Our results reveal that QWENLONG-L1 achieves substantial performance gains compared to cutting-edge LRMs, offering a fundamental recipe and practice for long-context reasoning optimization.

2 Long-Context Reasoning Reinforcement Learning

While existing works have explored reinforcement learning (RL) for enhancing short-context reasoning tasks [14, 56, 16], the extension to long-context reasoning RL remains an unresolved challenge. To this end, we introduce QWENLONG-L1, the first RL framework specifically designed for long-context reasoning.
In this section, we first describe the preliminaries, and then detail the implementation, including the RL algorithms for long-context reasoning, the progressive context scaling strategies for stable optimization, and the hybrid reward mechanisms to balance evaluation precision and recall.

2.1 Preliminaries

The standard RL objective for language models optimizes a KL-regularized expected reward [32]:

$$\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot|x)}\left[ r_\phi(x, y) \right] - \beta\, D_{\mathrm{KL}}\left[ \pi_\theta(y|x) \,\|\, \pi_{\mathrm{ref}}(y|x) \right] \quad (1)$$

where $r_\phi(x, y)$ denotes the reward for output $y$ given input $x$ from the policy model $\pi_\theta$, and $\pi_{\mathrm{ref}}$ represents the reference model for $D_{\mathrm{KL}}$ regularization. Unlike prior works wherein the input $x$ is typically a short question, requiring the policy model $\pi_\theta$ to generate output $y$ based on its parametric knowledge, we extend this formulation to incorporate an additional long context $c$. Therefore, the policy model $\pi_\theta$ needs to first ground relevant information from $c$, and then provide reasoning chains to solve the question $x$:

$$\max_{\pi_\theta} \; \mathbb{E}_{x, c \sim \mathcal{D},\, y \sim \pi_\theta(\cdot|x, c)}\left[ r_\phi(x, c, y) \right] - \beta\, D_{\mathrm{KL}}\left[ \pi_\theta(y|x, c) \,\|\, \pi_{\mathrm{ref}}(y|x, c) \right] \quad (2)$$

The context $c$ and the question $x$ are concatenated by the following prompt template:

Table 1: Prompt template to concatenate the input context and question.
Please read the following text and answer the question below.
<text> context </text>
question
Format your response as follows: "Therefore, the answer is (insert answer here)".

2.2 RL Algorithms

Proximal Policy Optimization (PPO). To optimize the policy model $\pi_\theta$ using the above objective, the PPO [33] algorithm proposes to maximize the clipped surrogate objective:

$$J_{\mathrm{PPO}}(\theta) = \mathbb{E}_{x, c \sim \mathcal{D},\, y \sim \pi_{\theta_{\mathrm{old}}}(\cdot|x, c)}\left[ \frac{1}{|y|} \sum_{t=1}^{|y|} \min\!\left( \frac{\pi_\theta(y_t|x, c, y_{<t})}{\pi_{\theta_{\mathrm{old}}}(y_t|x, c, y_{<t})} A_t,\; \mathrm{clip}\!\left( \frac{\pi_\theta(y_t|x, c, y_{<t})}{\pi_{\theta_{\mathrm{old}}}(y_t|x, c, y_{<t})},\, 1-\varepsilon,\, 1+\varepsilon \right) A_t \right) \right] \quad (3)$$

where $\pi_\theta$ is the current policy model, $\pi_{\theta_{\mathrm{old}}}$ is the old policy model before updating,
https://arxiv.org/abs/2505.17667v2
ε is the clipping hyperparameter, and A_t is the advantage estimate for the t-th token. For a given input context c and question x, π_{θ_old} first generates a sequential output y, and then A_t is calculated to update π_θ using generalized advantage estimation (GAE) [31] with a learned value function V_φ.

For long-context inputs, the quadratic complexity of attention mechanisms renders value network training computationally prohibitive. Therefore, we opt for group-relative RL algorithms, including GRPO [34] and DAPO [54], which estimate the advantage through group-normalized rewards instead of an additional value network.

Group Relative Policy Optimization (GRPO). In GRPO, given the context c and the question x, the old policy model π_{θ_old} generates a group of G outputs {y_i}_{i=1}^{G}, with rewards {r_i}_{i=1}^{G} calculated by the reward function. The optimization objective for the policy model π_θ becomes:

J_{GRPO}(\theta) = \mathbb{E}_{x,c\sim\mathcal{D},\,\{y_i\}_{i=1}^{G}\sim\pi_{\theta_{old}}(\cdot|x,c)}\Bigg[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}\min\Bigg(\frac{\pi_\theta(y_{i,t}|x,c,y_{i,<t})}{\pi_{\theta_{old}}(y_{i,t}|x,c,y_{i,<t})}A_{i,t},\ \mathrm{clip}\Big(\frac{\pi_\theta(y_{i,t}|x,c,y_{i,<t})}{\pi_{\theta_{old}}(y_{i,t}|x,c,y_{i,<t})},\,1-\varepsilon,\,1+\varepsilon\Big)A_{i,t}\Bigg) - \beta\,\mathbb{D}_{KL}(\pi_\theta\,\|\,\pi_{ref})\Bigg]   (4)

where the advantage for token y_{i,t} is estimated by normalizing the group-level rewards:

A_{i,t} = \frac{r_i - \mathrm{mean}(\{r_i\}_{i=1}^{G})}{\mathrm{std}(\{r_i\}_{i=1}^{G})}   (5)

In our implementation, we remove the KL term from the GRPO objective to encourage the exploration capacity of the policy model, following common practice in recent works [14, 23, 54].

Decoupled Clip and Dynamic Sampling Policy Optimization (DAPO). DAPO integrates several techniques to ensure a more stable and efficient RL process: (1) a higher clip threshold to avoid entropy collapse, (2) a dynamic sampling strategy to remove examples with zero reward variance, (3) a token-level loss to mitigate length bias, and (4) overlong reward shaping to avoid excessively long outputs.
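The two group-relative ingredients above can be sketched in a few lines: the group-normalized advantage of Eq. (5) and the overlong reward shaping DAPO applies. This is an illustrative sketch, not the training code; the buffer sizes `l_max` and `l_cache` are hypothetical values (the paper caps sampled outputs at 10K tokens but does not state `L_cache`).

```python
import statistics

def group_advantages(rewards):
    """Group-normalized advantages (Eq. 5): A_i = (r_i - mean) / std.

    Every token of output i shares the same advantage, so no value
    network is required for long-context inputs."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    if sigma == 0:  # zero-variance group: all advantages are zero
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

def shaped_reward(r, out_len, l_max=10_000, l_cache=1_000):
    """DAPO-style overlong reward shaping: the reward is untouched below
    the buffer zone, linearly penalized inside it, and reduced by 1 past
    l_max. l_cache = 1_000 is an assumed buffer size."""
    if out_len <= l_max - l_cache:
        return r
    if out_len <= l_max:
        return r + ((l_max - l_cache) - out_len) / l_cache
    return r - 1.0
```

Note that DAPO's dynamic sampling would discard a group whose rewards all coincide (zero reward variance), which is exactly the degenerate case the `sigma == 0` guard covers here.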
The optimization objective for the policy model π_θ is:

J_{DAPO}(\theta) = \mathbb{E}_{(x,c)\sim\mathcal{D},\,\{y_i\}_{i=1}^{G}\sim\pi_{\theta_{old}}(\cdot|x,c)}\Bigg[\frac{1}{\sum_{i=1}^{G}|y_i|}\sum_{i=1}^{G}\sum_{t=1}^{|y_i|}\min\Bigg(\frac{\pi_\theta(y_{i,t}|x,c,y_{i,<t})}{\pi_{\theta_{old}}(y_{i,t}|x,c,y_{i,<t})}A_{i,t},\ \mathrm{clip}\Big(\frac{\pi_\theta(y_{i,t}|x,c,y_{i,<t})}{\pi_{\theta_{old}}(y_{i,t}|x,c,y_{i,<t})},\,1-\varepsilon_{low},\,1+\varepsilon_{high}\Big)A_{i,t}\Bigg)\Bigg]   (6)

where ε_low and ε_high are the low and high clip thresholds, respectively. The dynamic sampling strategy ensures that the examples have a non-zero reward variance, and the token-level loss assigns equal weight to each token in the outputs when calculating the loss. The overlong reward shaping applies a soft length-aware punishment as follows:

r_{final,i} = \begin{cases} r_i, & |y_i| \le L_{max} - L_{cache} \\ r_i + \frac{(L_{max} - L_{cache}) - |y_i|}{L_{cache}}, & L_{max} - L_{cache} < |y_i| \le L_{max} \\ r_i - 1, & L_{max} < |y_i| \end{cases}   (7)

where L_max is the maximum sequence length and L_cache is a buffer zone for gradual length penalization. Different from GRPO, the final advantage for y_{i,t} in DAPO becomes:

A_{i,t} = \frac{r_{final,i} - \mathrm{mean}(\{r_{final,i}\}_{i=1}^{G})}{\mathrm{std}(\{r_{final,i}\}_{i=1}^{G})}   (8)

2.3 Progressive Context Scaling

Training LRMs for long-context reasoning presents unstable optimization dynamics. To address this, we propose a progressive context scaling framework comprising a curriculum-guided RL strategy to stabilize short-to-long context optimization, a difficulty-aware retrospective sampling mechanism to prioritize exploration of complex instances, and a warm-up supervised fine-tuning (SFT) phase to provide robust initialization before RL training.

Curriculum-Guided Phased Reinforcement Learning. The RL process is divided into K discrete phases with a target context length L_k for each phase. Starting from an initial input length L_1, each subsequent phase increases the input length until reaching the maximum target length L_K. During phase k, the policy model π_θ is trained exclusively on examples that satisfy:

L_{k-1} < |x| + |c| \le L_k, \quad L_0 = 0   (9)

where |x| and |c| denote the question length and supporting context length, respectively.

Difficulty-Aware Retrospective Sampling. Building on the demonstrated efficacy of instance difficulty in previous data selection studies [59, 19], we adopt a difficulty-aware retrospective sampling method to strategically incorporate instances from preceding phases. Specifically, we implement importance sampling weighted by difficulty scores to curate retrospective instances:

\mathrm{diff}(x, c) = \frac{1}{\mathrm{mean}(\{r_i\}_{i=1}^{G})}   (10)

where diff(·) denotes the difficulty function, quantified as the inverse of the mean reward {r_i}_{i=1}^{G} over a group of outputs generated by the base model. Lower mean rewards correspond to higher difficulty scores, prioritizing challenging instances during retrospective sampling.

Warm-Up Supervised Fine-Tuning. Prior to initiating RL training, we employ a warm-up supervised fine-tuning (SFT) stage to establish a robust initial policy model capable of grounding information from long-context inputs. This preparatory stage ensures the policy model develops fundamental capabilities in context comprehension, reasoning chain generation, and answer extraction before exposure to the instability of RL optimization.

The SFT process utilizes high-quality demonstrations D_SFT distilled from a teacher LRM, where each example contains a question x, a supporting context c, and a gold-standard reasoning path y* with verified correctness. To align with the progressive scaling curriculum, we construct D_SFT within the initial input length L_1 of the curriculum-guided RL. The model is trained to minimize the standard negative log-likelihood objective:

\mathcal{L}_{SFT}(\theta) = -\mathbb{E}_{(x,c,y^*)\sim\mathcal{D}_{SFT}}\,\frac{1}{|y^*|}\sum_{t=1}^{|y^*|}\log\pi_\theta(y^*_t|x, c, y^*_{<t})   (11)

The resulting SFT model serves as the initial policy π_θ for RL training, providing stable starting parameters. In Section 4.2, we demonstrate the effectiveness of the proposed three strategies for stable short-to-long context scaling in reasoning RL.
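To make the curriculum and sampling concrete, the phase filter of Eq. (9) and the difficulty score of Eq. (10) can be sketched as follows. This is a minimal illustration under stated assumptions: the example dictionaries, the `eps` smoothing term (guarding against groups whose mean reward is exactly zero), and the phase boundaries are ours, not the paper's implementation.

```python
def phase_subset(dataset, k, bounds):
    """Eq. (9): phase k trains only on examples with
    L_{k-1} < |x| + |c| <= L_k, where bounds[0] = L_0 = 0."""
    lo, hi = bounds[k - 1], bounds[k]
    return [ex for ex in dataset if lo < ex["len_x"] + ex["len_c"] <= hi]

def difficulty(group_rewards, eps=1e-6):
    """Eq. (10): difficulty = 1 / mean reward over a group of base-model
    rollouts; a lower mean reward marks a harder instance."""
    mean_r = sum(group_rewards) / len(group_rewards)
    return 1.0 / (mean_r + eps)

# Toy two-phase curriculum (L_1 = 20K, L_2 = 60K as in Section 3.2):
data = [
    {"len_x": 100, "len_c": 15_000},  # falls into phase I
    {"len_x": 100, "len_c": 45_000},  # falls into phase II
]
bounds = (0, 20_000, 60_000)
```

Retrospective sampling then draws prior-phase instances with probability proportional to `difficulty(...)`, so zero-accuracy groups (difficulty approaching 1/eps) dominate the carried-over pool.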
2.4 Hybrid Reward Mechanisms

Prior works on short-context reasoning tasks in mathematics, coding, and logical reasoning [24, 7, 47] typically utilize rule-based reward functions that prioritize precision through strict answer matching and format verification to mitigate reward hacking risks [35]. However, long-context reasoning tasks such as open-domain question answering present unique challenges due to their inherent answer diversity. Overly restrictive rule-based rewards in such contexts risk penalizing valid answer variations, potentially compromising overall performance. To address these limitations, we propose a hybrid reward mechanism that combines rule-based verification [11] with LLM-as-a-judge [58], thereby balancing precision and recall through complementary evaluation.

Rule-Based Verification. The rule-based component r_rule ensures precision by verifying strict adherence to task-specific correctness criteria. For question answering tasks, we first extract the final answer y_ans from the model generation y using regular expressions aligned with the structured prompt template in Table 1, and then perform exact string matching against the gold answer y_gold:

r_{rule}(y) = \mathbb{I}(y_{ans} = y_{gold})   (12)

where 𝕀 represents the indicator function. Notably, we intentionally omit a format reward for answer extraction, as the base model demonstrates sufficient inherent format compliance; excessive format rewards could oversimplify the learning objective, potentially hindering the model's ability to explore reasoning chains [56, 16].

LLM-as-a-Judge. To complement the precision-oriented rule-based component and address potential false negatives in string matching, we introduce an LLM-based evaluator r_LLM that assesses semantic equivalence between generated and gold answers:

r_{LLM}(x, y) = \mathrm{LLM}(x, y_{ans}, y_{gold})   (13)

where the LLM judge produces a binary correctness score based on the evaluation template as follows:

Table 2: Prompt template for LLM-as-a-judge to compare the predicted answer and the gold answer given the question.

You are an expert in verifying if two answers are the same.
Your input is a problem and two answers, Answer 1 and Answer 2. You need to check if they are equivalent.
Your task is to determine if two answers are equivalent, without attempting to solve the original problem.
Compare the answers to verify they represent identical values or meaning, even when written in different forms or notations.
Your output must follow the following format:
1) Provide an explanation for why the answers are equivalent or not.
2) Then provide your final answer in the form of: [[YES]] or [[NO]]
Problem: question
Answer 1: predicted answer
Answer 2: gold answer

Combined Reward Formulation. The integrated reward function combines rule-based verification and LLM-as-a-judge through maximum selection:

r_\phi(x, y) = \max(r_{rule}(y),\ r_{LLM}(x, y))   (14)

Given the relative simplicity of the answer comparison task, we employ a small model, e.g., Qwen2.5-1.5B-Instruct [50], with a temperature of zero for deterministic scoring. This configuration enables efficient reward computation during online RL training while maintaining evaluation reliability.

3 Experimental Setup

In our experiments, we employ document question answering (DocQA) as the primary evaluation task for long-context reasoning capabilities, as it inherently requires both contextual grounding and multi-step reasoning. This section details our experimental setup for training and evaluation.

3.1 Training Datasets

RL Dataset. To construct a challenging RL dataset for verifiable long-context reasoning, we develop DOCQA-RL-1.6K, which comprises 1.6K DocQA problems across three reasoning domains:

Table 3: Detailed statistics of our train and test datasets. Length is calculated by the Qwen tokenizer.
Statistics | Train: SFT | Train: RL | Test: DocMath | Frames | 2Wiki | HQA | Musi | NarQA | Qasp
# Examples | 5,305 | 1,591 | 200 | 824 | 200 | 200 | 200 | 200 | 200
Avg. Length | 13,064 | 11,437 | 17,645 | 15,756 | 7,530 | 13,431 | 16,327 | 29,887 | 5,074
Max. Length | 20,003 | 59,559 | 176,285 | 117,131 | 17,035 | 17,640 | 17,883 | 65,357 | 21,927

(1) Mathematical Reasoning: We use 600 problems from the DocMath [57] dataset, requiring numerical reasoning across long and specialized documents such as financial reports²; (2) Logical Reasoning: We employ DeepSeek-R1 [11] to synthesize 600 multi-choice questions requiring logical analysis of real-world documents spanning the legal, financial, insurance, and production domains from our curated collection; (3) Multi-Hop Reasoning: We sample 200 examples from MultiHopRAG [36] and 200 examples from Musique [44], emphasizing cross-document reasoning.

SFT Dataset. To establish a robust starting point for RL optimization, we distill 5.3K high-quality question-document-answer triplets through DeepSeek-R1 [11]. Aligned with recent data curation methods for LRMs [25, 53], we clean and filter questions based on quality, complexity, and diversity. Additionally, we control the quality and length of the documents to ensure precise contextual information. Table 3 provides the statistics of our RL and SFT datasets.

3.2 Training Details

Base Model. In our experiments, we initialize our base model with R1-Distill-Qwen-14B and R1-Distill-Qwen-32B [11], subsequently implementing the SFT and RL optimization phases³.

RL Training. As described in Section 2.3, we propose a progressive context scaling mechanism for long-context reasoning RL optimization. Specifically, the training process follows a two-phase curriculum of context scaling, with a 20K input length L_1 in phase I and a 60K input length L_2 in phase II. We incorporate difficulty-aware retrospective sampling to retain the most difficult samples, i.e., those with an average accuracy of zero, from phase I to phase II. Training is conducted on 32xA100-80G GPUs, with a train batch size of 128, a mini-batch size of 32, a rollout number of 8, and a learning rate of 2e-6. We set the temperature to 0.7 and top-p to 0.95, with a maximum output length of 10K for sampling.

SFT Training. The input length in the SFT stage is set to 20K. Training is conducted on 32xA100-80G GPUs for 3 epochs, with a train batch size of 128 and a learning rate of 5e-6.

3.3 Evaluation Details

Benchmarks. We conduct evaluation on seven long-context DocQA benchmarks, including multi-hop reasoning benchmarks⁴ such as 2WikiMultihopQA [13], HotpotQA [51], Musique [44], NarrativeQA [17], Qasper [5], and Frames [18], as well as mathematical reasoning benchmarks like DocMath [57]. We report the maximum of exact match and LLM-judged accuracy as the final score, aligned with the reward function in Section 2.4. We use DeepSeek-V3 [21] as the judge model with a temperature of 0.0 to provide a reliable evaluation. The benchmark statistics are shown in Table 3.

Configurations. We evaluate our long-context LRMs with a maximum input length of 120K and a maximum output length of 10K. For proprietary LRMs with a limited context length, we set the maximum input length to 50K. We conduct zero-shot evaluation with a temperature of 0.7 and a top-p of 0.95.

3.4 Baselines

We compare QWENLONG-L1 against the following state-of-the-art LRMs.

Proprietary LRMs: OpenAI-o1-preview [15], Claude-3.7-Sonnet-Thinking [1], OpenAI-o3-mini [28], Qwen3-Plus [42], QwQ-Plus [43], and Gemini-2.0-Flash-Thinking [37].
Open-Source LRMs: DeepSeek-R1 [11], Qwen3-235B-A22B [42], Qwen3-32B [42], QwQ-32B [43], R1-Distill-Qwen-32B [11], and R1-Distill-Qwen-14B [11].

²For DocMath, we sample 75% of the items from each subset of its valid split for training and 25% for evaluation.
³We exclude the 7B/1.5B variants due to their mathematical reasoning features inherited from Qwen2.5-Math [50].
⁴We use the data from LongBench [2] for 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, and Qasper.

Table 4: Main results across seven long-context DocQA benchmarks. We highlight the top-1 and top-3 performance. ∆ indicates the performance gains and declines compared to the base models.

Models | DocMath | Frames | 2Wiki | HQA | Musi | NarQA | Qasp | Avg.
Proprietary LRMs
OpenAI-o1-preview | 64.5 | 80.8 | 87.5 | 83.5 | 69.0 | 68.0 | 57.0 | 72.9
Claude-3.7-Sonnet-Thinking | 67.5 | 70.9 | 86.5 | 84.4 | 68.3 | 61.5 | 56.0 | 70.7
OpenAI-o3-mini | 66.5 | 75.5 | 86.5 | 83.5 | 66.5 | 59.0 | 55.0 | 70.4
Qwen3-Plus | 66.0 | 73.6 | 90.5 | 82.4 | 69.8 | 57.5 | 52.5 | 70.3
QwQ-Plus | 64.5 | 73.5 | 89.0 | 81.0 | 66.5 | 62.0 | 53.5 | 70.0
Gemini-2.0-Flash-Thinking | 63.0 | 69.8 | 82.9 | 79.5 | 62.5 | 57.0 | 45.5 | 65.7
Open-Source LRMs
DeepSeek-R1 | 66.0 | 79.6 | 89.9 | 82.5 | 74.5 | 59.5 | 53.0 | 72.1
Qwen3-235B-A22B | 67.5 | 74.6 | 91.5 | 84.4 | 63.3 | 60.0 | 53.0 | 70.6
QwQ-32B | 59.5 | 72.9 | 90.5 | 78.5 | 66.0 | 58.0 | 57.5 | 69.0
Qwen3-32B | 58.0 | 70.0 | 87.0 | 83.4 | 62.8 | 57.5 | 56.0 | 67.8
R1-Distill-Qwen-32B | 62.5 | 67.0 | 84.0 | 80.5 | 61.0 | 54.0 | 50.0 | 65.6
R1-Distill-Qwen-14B | 61.0 | 64.2 | 87.0 | 77.5 | 58.0 | 51.0 | 51.0 | 64.2
Our Methods
R1-Distill-Qwen-14B-SFT | 60.0 | 65.7 | 88.5 | 80.5 | 60.0 | 52.0 | 48.5 | 65.0
∆ to R1-Distill-Qwen-14B | (-1.0) | (+1.5) | (+1.5) | (+3.0) | (+2.0) | (+1.0) | (-2.5) | (+0.8)
QWENLONG-L1-14B-GRPO | 65.0 | 68.7 | 88.5 | 86.5 | 63.5 | 53.5 | 51.5 | 68.2
∆ to R1-Distill-Qwen-14B | (+4.0) | (+4.5) | (+1.5) | (+9.0) | (+5.5) | (+2.5) | (+0.5) | (+4.0)
QWENLONG-L1-14B-DAPO | 65.5 | 67.4 | 89.0 | 84.0 | 63.0 | 57.0 | 52.5 | 68.3
∆ to R1-Distill-Qwen-14B | (+4.5) | (+3.2) | (+2.0) | (+6.5) | (+5.0) | (+6.0) | (+1.5) | (+4.1)
R1-Distill-Qwen-32B-SFT | 65.0 | 71.6 | 87.0 | 80.5 | 65.5 | 57.5 | 54.0 | 68.7
∆ to R1-Distill-Qwen-32B | (+2.5) | (+4.6) | (+3.0) | (+0.0) | (+4.5) | (+3.5) | (+4.0) | (+3.2)
QWENLONG-L1-32B-GRPO | 68.0 | 72.2 | 87.0 | 82.0 | 66.0 | 61.0 | 56.0 | 70.3
∆ to R1-Distill-Qwen-32B | (+5.5) | (+5.2) | (+3.0) | (+1.5) | (+5.0) | (+7.0) | (+6.0) | (+4.7)
QWENLONG-L1-32B-DAPO | 67.5 | 70.1 | 90.5 | 83.0 | 69.0 | 56.0 | 58.5 | 70.7
∆ to R1-Distill-Qwen-32B | (+5.0) | (+3.1) | (+6.5) | (+2.5) | (+8.0) | (+2.0) | (+8.5) | (+5.1)
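Both the training reward of Section 2.4 and the benchmark scoring (the maximum of exact match and LLM-judged accuracy) reduce to the same max-combination. A minimal sketch follows, with a stub callable standing in for the judge model and an assumed extraction regex matching the "Therefore, the answer is ..." format of Table 1; neither is the paper's actual implementation.

```python
import re

def extract_answer(generation):
    """Pull y_ans from the templated output; the regex for the
    'Therefore, the answer is ...' format is an assumption."""
    m = re.search(r"[Tt]herefore, the answer is\s*(.+?)\s*\.?\s*$", generation)
    return m.group(1) if m else ""

def hybrid_reward(question, generation, gold, judge):
    """Eq. (14): r = max(r_rule, r_LLM). r_rule is exact string match
    (Eq. 12); `judge` is any callable returning a binary equivalence
    score (Eq. 13)."""
    ans = extract_answer(generation)
    r_rule = 1.0 if ans == gold else 0.0
    r_llm = float(judge(question, ans, gold))
    return max(r_rule, r_llm)

# A toy judge that only normalizes case -- a real judge would prompt a
# small LLM (e.g., Qwen2.5-1.5B-Instruct) with the Table 2 template.
toy_judge = lambda q, a, g: a.lower() == g.lower()
```

The max-combination means the judge can only rescue false negatives of the exact-match rule, never overturn a true positive, which is how the mechanism trades a little precision risk for recall.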
Figure 4: Pass@K rates of QWENLONG-L1-14B with different sample numbers across all benchmarks. We show that QWENLONG-L1-14B surpasses DeepSeek-R1 with a small sampling number.

4 Experimental Results

4.1 Main Results

Table 4 presents the overall performance of QWENLONG-L1 across seven long-context document question answering (DocQA) benchmarks. The key findings are as follows:

Figure 5: Ablation studies of the progressive context scaling strategy, where “Baseline” refers to the base or SFT model before RL training, “RL” refers to naive single-stage RL, and “Phased RL” refers to the curriculum-guided phased RL.
“RS” refers to the difficulty-aware retrospective sampling.

Limited Efficacy of SFT for Long-Context Reasoning. Since the base model, R1-Distill-Qwen, is primarily optimized for short-context reasoning tasks in mathematics, coding, and scientific domains, we conduct SFT to adapt it to long-context reasoning before RL training, as outlined in Section 2.3. Despite this intervention, the SFT model shows only an average gain of 0.8 points on 14B and 3.2 points on 32B. Furthermore, the improvements exhibit significant variability across benchmarks, suggesting limited generalizability of the SFT approach to long-context reasoning scenarios.

Significant Improvements via RL Integration. Through the integration of RL, QWENLONG-L1 exhibits remarkable advancements in long-context reasoning performance. Notably, QWENLONG-L1-14B achieves average improvements of 4.1 and 4.0 points over the base model with DAPO and GRPO, surpassing the 0.8-point improvement of the SFT baseline by a significant margin. Furthermore, when scaling to the 32B base model, QWENLONG-L1-32B even demonstrates performance increases of 5.1 and 4.7 points with DAPO and GRPO. These results highlight the capacity of RL approaches to refine the output distribution for intricate, context-dependent reasoning problems through group-relative advantage estimation and incentives for on-policy sampled outputs.

Leading Performance among Flagship LRMs
. Our evaluation demonstrates that QWENLONG-L1 achieves superior performance compared to leading proprietary and open-source LRMs. Specifically, QWENLONG-L1-14B achieves an average score of 68.3, surpassing Gemini-2.0-Flash-Thinking, R1-Distill-Qwen-32B, and Qwen3-32B, while matching the performance of QwQ-32B. Moreover, QWENLONG-L1-32B achieves an average score of 70.7, exceeding the performance of QwQ-Plus, Qwen3-Plus, Qwen3-235B-A22B, and OpenAI-o3-mini, and even rivaling Claude-3.7-Sonnet-Thinking, demonstrating leading performance among state-of-the-art flagship LRMs.

Additional Enhancements by Test-Time Scaling. We further conduct experiments to analyze the test-time scaling performance of QWENLONG-L1. Following established works [11, 45], we generate 16 candidate outputs per input question and evaluate Pass@K to quantify exploratory capability across all benchmarks. As illustrated in Figure 4, QWENLONG-L1-14B exhibits consistent performance enhancements with increased sampling scales. Notably, QWENLONG-L1-14B demonstrates remarkable gains, even surpassing DeepSeek-R1 and OpenAI-o1-preview with a small sample size. Specifically, it achieves an average Pass@2 rate of 73.7 across all benchmarks, outperforming both DeepSeek-R1's 72.1 and OpenAI-o1-preview's 72.9, highlighting the efficacy of test-time scaling. Moreover, the significant gap between the Pass@K and Pass@1 metrics indicates further potential for RL training to better bridge the transition from diverse exploration to optimal exploitation.

4.2 Ablation Studies

In this section, we conduct ablation studies to investigate the key components of QWENLONG-L1 that enable successful progressive context scaling for long-context reasoning RL, including warm-up supervised fine-tuning, curriculum-guided phased reinforcement learning, and difficulty-aware retrospective sampling, with the experimental results shown in Figure 5.

Warm-up Supervised Fine-tuning.
To illustrate the influence of warm-up SFT, we first evaluate the overall performance of models trained with and without this preparatory phase across the seven benchmarks, using various RL algorithms and training strategies. As illustrated in Figure 5 (a), integrating warm-up SFT yields significant performance improvements in all experimental setups. To further explore the mechanism of warm-up SFT in RL dynamics, Figure 5 (b) tracks the reward scores and gradient norms during training. The results reveal that warm-up SFT not only accelerates reward improvements but also sustains lower gradient norms across the RL phases, validating its capacity to prioritize performance gains over format alignment when transitioning models from short-context to long-context reasoning tasks. These findings highlight the necessity of SFT as a precursor that provides a robust and efficient initialization for RL training.

Curriculum-Guided Phased Reinforcement Learning. As shown in Figure 5 (a), we conduct a comparative analysis between naive single-stage RL and the proposed curriculum-guided phased RL under different training configurations: GRPO, DAPO, SFT + GRPO, and SFT + DAPO. The results demonstrate that our phased RL methodology achieves substantial performance improvements. We also note that this improvement is less pronounced when models are initialized with SFT, suggesting that warm-up training partially compensates for curriculum design. Further analysis in Figure 5 (c) reveals that single-stage RL exhibits heightened instability, as demonstrated by fluctuating KL divergence and entropy collapse. These results confirm the pivotal role of curriculum-guided phased training in stable policy evolution from short-context to long-context reasoning RL.

Difficulty-Aware Retrospective Sampling. To maintain a wide exploration
of hard examples, we introduce a difficulty-aware retrospective sampling strategy that integrates a subset of hard samples from prior training phases into the current training data. As illustrated in Figure 5 (a), this strategy yields further performance enhancements on top of phased RL. Notably, as shown in Figure 5 (d), despite having undergone phase I RL training, these retained hard examples still lead to significantly lower rewards and higher policy entropy, which incentivizes the policy model to augment the exploration process.

4.3 Additional Analysis

Figure 6: Comparison between different models before and after RL, where “Baseline” denotes the base model, “Short SFT” denotes the short-context SFT model, and “Long SFT” denotes the long-context SFT model.

In this section, we investigate questions pertaining to the development of long-context LRMs, focusing on the trade-off between SFT and RL in optimizing long-context reasoning capabilities,
and the emergence and dynamics of long-context reasoning behaviors during training.

Trade-off between SFT and RL in Optimization. As discussed in Section 4.2, SFT offers a robust initialization for RL training. However, given that the initial SFT phase in our experiments relied on short-context training data, a critical question arises regarding the role of long-context SFT and its impact on RL. To this end, we train a long-context SFT model using 10K context-question-answer triplets distilled from DeepSeek-R1, maintaining the same data distribution as the short-context SFT phase. This long-context SFT model serves as the starting point for single-stage RL training, without progressive context scaling, owing to its inherent long-context capability. As shown in Figure 6, the long-context SFT model surpasses the base model by 2.6 points and the short-context SFT model by 2.1 points. Despite requiring more training data, SFT offers distinct practical advantages, including reduced computational complexity, minimal infrastructure demands, and diminished reliance on specialized technical expertise, positioning it as an economical strategy for performance enhancement [11]. However, further RL applied to the long-context SFT model yields marginal improvements, with only a 0.3-point gain and a final score of 67.4, significantly underperforming the 3.2-point improvement and 68.2 final score achieved when RL is applied to the short-context SFT model.
These results highlight two insights for long-context LRM development: (1) SFT and RL serve distinct yet complementary purposes: SFT achieves acceptable performance with less effort, whereas RL is indispensable for attaining optimal results;

Figure 7: The change in reasoning behavior over training steps. We focus on four core reasoning behaviors, including long-context specific grounding and three general reasoning strategies: subgoal setting, backtracking, and verification. During RL training, all the behaviors increase progressively, accompanied by corresponding performance gains. However, despite SFT leading to significantly increased reasoning behaviors, these efforts fail to improve the final performance.

(2) Maximizing performance necessitates prioritizing RL over SFT, as
https://arxiv.org/abs/2505.17667v2
excessive focus on SFT risks trapping models in local optima, thereby constraining opportunities for RL improvements.

Emergence and Dynamics of Long-Context Reasoning Behaviors. Reasoning behaviors critically shape LRMs' reasoning trajectories and rewards [11, 14]. To investigate these dynamics, we follow recent studies [56, 23] to analyze the evolution of reasoning behaviors during SFT and RL training. Specifically, we use DeepSeek-V3 [21] to extract and track shifts in the average count of four core reasoning behaviors over training steps, including long-context specific grounding and three general reasoning behaviors: subgoal setting, backtracking, and verification:

• Grounding: The model recalls related information in the long context to support subsequent reasoning, e.g., "Let me look through the provided text to find...".
• Subgoal Setting: The model decomposes complex questions into multiple manageable subgoals to solve them step by step, e.g., "To solve this, we first need to...".
• Backtracking: The model identifies errors in its generations and goes back to revise its approach iteratively, e.g., "This approach won't work because...".
• Verification: The model systematically validates the predicted answers to ensure solution correctness through self-reflection, e.g., "Let's verify this result by...".

The results in Figure 7 reveal three insights: (1) All LRMs exhibit marked reasoning behaviors, with long-context grounding occurring most frequently, underscoring its effectiveness in managing contextual dependencies during reasoning. (2) RL training amplifies these behaviors progressively, correlating with significant performance gains, suggesting RL's efficacy in refining the output space to prioritize reasoning patterns conducive to accurate solutions.
(3) In contrast, while SFT models demonstrate increased reasoning behaviors, these adjustments fail to translate into performance improvements, likely due to SFT's inherent reliance on imitation learning, which prioritizes superficial pattern alignment over substantive reasoning skill development [59, 20].

5 Conclusion and Future Work

In this study, we explore the development of long-context LRMs with robust contextual grounding and reasoning capabilities through reinforcement learning (RL). We first propose the paradigm of long-context reasoning RL and identify suboptimal training efficiency and unstable optimization process as its core challenges. To address these challenges, we present QWENLONG-L1, a progressive context scaling RL framework designed to bridge the gap between short-context proficiency and long-context generalization. Specifically, the training process begins with a warm-up SFT, followed by a curriculum-guided phased RL with a difficulty-aware retrospective sampling strategy. Experiments across seven long-context document question-answering benchmarks demonstrate that QWENLONG-L1 achieves leading performance among state-of-the-art proprietary LRMs. Specifically, QWENLONG-L1-14B outperforms Gemini-2.0-Flash-Thinking and Qwen3-32B, while QWENLONG-L1-32B further surpasses OpenAI-o3-mini and Qwen3-235B-A22B, and even matches Claude-3.7-Sonnet-Thinking. Our analysis yields three key insights for long-context reasoning RL: the pivotal role of progressive context scaling in enabling stable adaptation, the necessity of prioritizing RL for optimal performance, and the growth of long-context reasoning behaviors during RL training that accompanies performance improvements.

Future work should prioritize three key avenues to advance long-context LRMs. First, scaling real-world tasks, like automated scientific research and long video analysis, will provide appropriate environments to enhance long-context comprehension and decision-making capabilities.
Second, developing advanced architectures is essential, including
optimized attention mechanisms, e.g., linear and sparse attention, and efficient infrastructures, e.g., asynchronous actor rollout and parameter updating. Third, rethinking long-context RL paradigms, such as transitioning from a token-level to a turn-level Markov decision process (MDP), might enable breaking long contexts into sequential interactions and optimizing them iteratively, paving the way for infinite-context RL systems.

6 Case Study

To demonstrate QWENLONG-L1's capabilities, we show two illustrative examples from our analysis.

Case complong-testmini-183: When evaluating QWENLONG-L1-14B against R1-Distill-Qwen-14B and R1-Distill-Qwen-14B-SFT, distinct behavioral differences emerge. In this instance, R1-Distill-Qwen-14B is misled by the detail "Interest is payable semi-annually beginning October 15, 2011". It grounds irrelevant temporal and financial information, and consequently miscalculates the first-year interest payment. Although R1-Distill-Qwen-14B-SFT is fine-tuned on high-quality SFT triplets distilled from DeepSeek-R1, it still fails to resolve this issue. Instead, it demonstrates persistent limitations by entering a loop of over-analysis on unrelated documents. The uncertainty and self-doubting behavior eventually cause it to exhaust the maximum generation limit (10,000 tokens) without providing the final answer. In contrast, while QWENLONG-L1-14B initially exhibits similar distractions, it quickly engages in effective self-reflection. Through timely verification and backtracking, it successfully filters out the irrelevant details and arrives at the correct answer.

Case complong-testmini-265: This example illustrates how long-context reasoning behaviors (e.g., grounding, backtracking, verification, subgoal setting) emerge and interplay in practice, complementing our behavior evolution analysis in Figure 7. This task requires the model to infer the total interest accrued over two years based on dense financial filings.
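The arithmetic at the core of this case reduces to simple (non-compounded) annual interest over the extended two-year term. It can be checked directly; the helper function below is our own illustrative naming, not part of the benchmark:

```python
def simple_interest(principal, annual_rate, years):
    """Total simple (non-compounded) interest accrued over the period."""
    return principal * annual_rate * years

# Two years on the $4,900,000 TopPop Notes at 10% per annum:
total = simple_interest(4_900_000, 0.10, 2)
print(f"${total:,.0f}")  # prints $980,000
```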
Here, the model correctly computes that the total interest is $980,000 based on a principal of $4,900,000 at 10% annual interest over two years. This success is attributed to the emergence of multiple reasoning behaviors, showcasing how long-context reasoning RL not only improves answer accuracy but also fosters the development of internal cognitive behaviors during reasoning.

DocMath complong-testmini-183

Ground Truth: 32.4

Prompt: Please read the following text and answer the question below. <text> ... 127 Docs ... 128: On March 22, 2011, we issued $300.0 million in aggregate principal amount of 8.0% senior unsecured notes due 2019 (the "Notes") at an offering price equal to 100% of par. The Notes were sold in a private placement to eligible purchasers in a transaction exempt from the registration requirements of the Securities Act of 1933, as amended. 129: We received net proceeds of $291.6 million, after deducting the discount of $7.5 million and offering expenses of $0.9 million. We used the net proceeds to repay indebtedness under our existing credit facility. The discount and the offering expenses are being amortized over the life of the Notes. The amortization is included in "Interest expense" on our unaudited condensed consolidated statements of operations. 130: The Notes were issued under an indenture dated March 22, 2011, (the "Indenture"), mature April 15, 2019, and bear interest at 8.0%. Interest is payable semi-annually beginning October 15, 2011. The Notes are general unsecured obligations and are effectively junior in right of payment to any of our secured indebtedness
to the extent of the value of the collateral securing such indebtedness. ... 271 Docs ... </text> What is the total capital cost, combining both the offering cost of the senior notes and interest payment in the first year, in million dollars? Format your response as follows: "Therefore, the answer is (insert answer here)".

R1-Distill-Qwen-14B: Response (2166 tokens): <think> Alright, so I need to figure out the total capital cost for EV Energy Partners, L.P., combining both the offering cost of the senior notes and the interest payment in the first year. Let me go through the provided financial statements step by step to find all the relevant information. First, I'll look for the senior notes offering costs. [...] The offering expenses were $0.9 million, and the discount was $7.5 million. [...] So the total offering costs (discount and expenses) are $7.5 + $0.9 = $8.4 million. Next, I need to find the interest payment for the first year. The senior notes have an 8% annual interest rate and are due in 2019. They were issued in March 2011, so the first interest period would be from March to September 2011, but the interest is actually payable semi-annually starting in October 2011. [...] Since the notes were outstanding for six months, the semi-annual interest would be 8% / 2 = 4% of the principal. Therefore, the interest for the first year would be 4% of $300 million, which is $12 million. [...] The interest is semi-annual, so the first payment would be in October 2011, and the second in April 2012. So for the first year (2011), they would have paid $12 million in interest on the senior notes. [...] So combining the offering cost ($8.4 million) and the interest payment for the first year ($12 million), the total capital cost is $8.4 + $12 = $20.4 million. [...] </think> The total capital cost includes the offering costs and the interest payment for the first year.
The offering costs for the senior notes were $8.4 million, and the interest expense for the first year (2011) was $12 million. Therefore, the total capital cost is $8.4 million + $12 million = $20.4 million. Therefore, the answer is $20.4 million.

Grounding Behavior: Instances of Grounding Behavior:

1. Explicit Grounding: "Let me go through the provided financial statements step by step to find all the relevant information."
2. Explicit Grounding: "In the Notes section, under NOTE 5: FAIR VALUE MEASUREMENTS, I see that on March 22, 2011, they issued $300 million in senior notes."
3. Explicit Grounding: "Wait, but the offering cost is mentioned in NOTE 9: OWNER'S EQUITY."
4. Explicit Grounding: "Looking at the balance sheet, under June 30, 2011, the long-term debt is $480,183 thousand, which is $480.183 million."
5. Explicit Grounding: "Wait, in NOTE 7: LONG-TERM DEBT, under the credit facility, it says that borrowings bear interest at a floating rate based on LIBOR."
6. Grounding Verification: "Wait, the financial statements show interest expense for the three months ended June 30, 2011 as $8,124 thousand, which is $8.124 million."
7. Grounding Verification
: "Wait, but in the condensed statements, the interest expense for the three months is $8.124 million, and for six months, it's $13,283 thousand."
8. Grounding Verification: "Wait, the notes were issued in March 2011, so from March 2011 to March 2012 is the first year."
9. Grounding Verification: "Wait, in the financial statements, the interest expense for the six months ended June 30, 2011 is $13.283 million, which includes the senior notes' interest for three months and the credit facility's interest for the entire six months."
10. Grounding Verification: "Wait, the senior notes were issued in March 2011, so the first interest period is from March 22, 2011 to September 22, 2011, which is about six months, and the interest would be 4% of $300 million, which is $12 million."
11. Grounding Verification: "Wait, but the offering cost is a one-time expense, and the interest is an ongoing expense."
12. Grounding Verification: "Wait, but the question is about the total capital cost, which usually includes both the initial costs and the interest expense over the period."

Each of these instances involves either explicit references to the document or verification steps that lead back to the document, thus counting as grounding behavior.

R1-Distill-Qwen-14B-SFT: Response (10000 tokens): <think> Okay, let's tackle this question. [...] First, I need to find where the offering costs for the senior notes are mentioned. [...] So the total offering costs here are $7.5 million (discount) plus $0.9 million (expenses), which adds up to $8.4 million. Next, the interest payment for the first year. The senior notes were issued in March 2011 with an 8.0% annual interest rate. The total principal amount is $300 million. [...] However, the notes mention that interest is payable semi-annually starting October 15, 2011. [...] Wait, the notes were issued on March 22, 2011, and the first interest payment is on October 15, 2011. [...]
The interest expense for the three months ended June 30, 2011, under "Interest expense" is $8,124 thousand. [...] In the Condensed Consolidated Statements of Operations for the six months ended June 30, 2011, the interest expense is $13,283 thousand. [...] In Note 7, under the senior notes, it says: "The discount and the offering expenses are being amortized over the life of the Notes. The amortization is included in ’Interest expense’ on our unaudited condensed consolidated statements of operations." So the total initial costs for the senior notes are $7.5 million + $0.9 million = $8.4 million. These are amortized over the life of the notes, so each year, $8.4 million / 8 = $1.05 million per year. For the first year (2011), that would be $1.05 million. The question is about the interest payment in the first year. [...] The senior notes [...] bear 8% interest, payable semi-annually. The first payment is on October 15, 2011 [...]. So in the first year (2011), there’s only one payment in October. [...] The coupon rate is 8%, so each payment is 4% of the principal. The principal is $300 million, so each payment is $12 million. The first payment
is in October 2011, so in 2011, only one payment of $12 million would be made. [...] So total capital cost would be $8.4 million + $12 million = $20.4 million. [...] Alternatively, [...] the amortization of the offering costs is $8.4 million over 8 years, so for the first year, that's $1.05 million. Therefore, [...] the interest paid would be $12 million (coupon) + $1.05 million (amortization) = $13.05 million. The offering costs are $8.4 million. So total capital cost would be $8.4 million + $13.05 million = $21.45 million. Alternatively, [...] the offering costs are $8.4 million. The interest expense for the first year (2011) would be the interest paid on the senior notes [...] (Reached maximum token limit)

Grounding Behavior: Explanation of Grounding Behavior:

1. "Looking through the financial statements, in Note 7 under Long-Term Debt, there's a section about the 8.0% Senior Notes due 2019."
• Explicit Grounding: The model directly references the financial statements and Note 7 to locate information about the senior notes.
2. "Under 'Proceeds from debt offering,' it says they received net proceeds of $291.6 million after deducting a discount of $7.5 million and offering expenses of $0.9 million."
• Explicit Grounding: The model extracts specific details from the document about the net proceeds, discount, and offering expenses.
3. "The interest expense for the six months ended June 30, 2011, is listed as $8,124 thousand, but that's semi-annual."
• Explicit Grounding: The model references the financial statements to find the interest expense for the six months ended June 30, 2011.
4. "Wait, the notes mention that interest is payable semi-annually starting October 15, 2011."
• Explicit Grounding: The model refers back to the notes to verify the timing of interest payments.
5. "The interest expense for the three months ended June 30, 2011, under 'Interest expense' is $8,124 thousand."
• Explicit Grounding: The model again references the financial statements to confirm the interest expense for the three months ended June 30, 2011.
6. "Wait, the notes were issued on March 22, 2011, and the first interest payment is on October 15, 2011."
• Explicit Grounding: The model returns to the document to verify the issuance date and the first interest payment date.
7. "Let's check the Notes again. In Note 7, under the senior notes due 2019, it says the net proceeds were $291.6 million, after deducting a discount of $7.5 million and offering expenses of $0.9 million."
• Explicit Grounding: The model revisits Note 7 to confirm the details about the net proceeds, discount, and offering expenses.
8. "Let's look at the interest expense line in the Statements of Operations."
• Explicit Grounding: The model refers to the Statements of Operations to verify the interest expense.
9. "In the Condensed Consolidated Statements of Operations for the six months ended June 30, 2011, the interest expense is $13,283 thousand."
• Explicit Grounding: The model extracts the interest expense figure from the financial statements.
10. "In Note 7, under 'Interest expense,' it mentions that the amortization of the discount and deferred loan costs is included in interest expense."
• Explicit Grounding: The model refers back to Note 7 to confirm how the amortization
of discount and expenses is treated.
11. "Let's check the Notes again. In Note 7, the senior notes' discount and expenses are amortized over the life of the notes."
• Explicit Grounding: The model revisits Note 7 to verify the amortization period for the discount and expenses.
12. "Let's check the interest expense in the Statements of Operations. For the six months ended June 30, 2011, the interest expense is $13,283 thousand."
• Explicit Grounding: The model refers to the Statements of Operations to confirm the interest expense figure.

Each of these steps involves explicit grounding behavior, where the model directly references or returns to the original document to verify or extract information.

QWENLONG-L1-14B: Response (5304 tokens): <think> Alright, so I need to figure out the total capital cost for EV Energy Partners, L.P. in the first year, combining both the offering cost of the senior notes and the interest payment. Let me go through the provided financial statements step by step to find the necessary information. First, I'll look at the Notes to the Condensed Consolidated Financial Statements. In Note 7, under "8.0% Senior Notes due 2019," [...] The discount was $7.5 million, and the offering expenses were $0.9 million. So, the total offering costs here would be $7.5 million (discount) + $0.9 million (expenses) = $8.4 million. Next, I need to find the interest payment for the first year. [...] The notes were issued in March 2011, so the full year's interest would be 8% of $300 million, which is $24 million. But since the interest is semi-annual and the first payment was in October 2011, the interest for the first year (2011) might only be half of that, so $12 million. Wait, no, that's not correct because the interest is calculated on the outstanding principal for each period. Let me think again. [...] The senior notes have a fixed rate of 8.0%. The interest expense for the senior notes would be 8% of $300 million, which is $24 million annually [...].
I think we should consider the full year's interest, which would be $24 million. [...] I'm going to go with the full year's interest, which is $24 million, plus the offering costs of $8.4 million, totaling $32.4 million. </think> The total capital cost includes both the offering costs and the interest payment for the first year. The offering costs for the senior notes were $8.4 million (discount of $7.5 million and expenses of $0.9 million). The interest payment for the first year (2011) is the full annual interest of 8% on the $300 million senior notes, which is $24 million. Adding these together gives a total capital cost of $32.4 million. Therefore, the answer is $32.4 million.

Grounding Behavior: Explanation of Grounding Behavior:

1. Explicit Grounding: The model repeatedly references specific sections of the document, such as "Note 7," "Condensed Consolidated Financial Statements," and "Condensed Consolidated Statements of Operations," to extract information about the senior notes, offering costs, and interest expenses. For example:
• "First, I'll look at the Notes to the Condensed Consolidated Financial Statements. In Note 7,
under '8.0% Senior Notes due 2019,' it mentions that on March 22, 2011, they issued $300 million in aggregate principal amount of senior notes."
• "Looking at the financial statements, in the Condensed Consolidated Statements of Operations, under 'Interest expense,' for the six months ended June 30, 2011, it's $13,283 thousand."
2. Grounding Verification: The model frequently double-checks its understanding by returning to the document to verify details, such as the timing of interest payments, the breakdown of interest expenses, and the calculation of offering costs. For example:
• "Wait, in the financial statements, for the six months ended June 30, 2011, the interest expense is $13,283 thousand. Let me check if that includes the senior notes."
• "Wait, in Note 7, it says that the net proceeds from the senior notes were $291.6 million, after deducting the discount of $7.5 million and offering expenses of $0.9 million."
3. Repeated Verifications: The model revisits the same sections multiple times to confirm calculations or clarify ambiguities, such as the interest payment for the first year and the breakdown of offering costs. For example:
• "Wait, let me go back to the question: 'What is the total capital cost, combining both the offering cost of the senior notes and interest payment in the first year, in million dollars?'"
• "Wait, in the financial statements, the interest expense for the six months ended June 30, 2011, is $13,283 thousand. Let me check if that includes the senior notes."

DocMath complong-testmini-265

Ground Truth: 980000.0

Prompt: Please read the following text and answer the question below. <text> ... 329 Docs ... 330: In connection with our July 2021 acquisition of 100% of the equity of TopPop, on July 26, 2021, we issued to the sellers promissory notes in the aggregate principal amount of $4,900,000 (the "TopPop Notes").
The TopPop Notes bear interest at the rate of 10% per annum, matured on July 26, 2022 and are secured by all of the outstanding membership interest in TopPop. Under the terms of the TopPop Notes, we have a five-day grace period to July 31, 2022 before an event of default under the TopPop Notes occurs. Upon an event of default under the TopPop Notes, the holders of such TopPop Notes may exercise all rights and remedies available under the terms of the TopPop Notes or applicable laws, including to foreclose on certain collateral consisting of the membership interests of TopPop. On July 26, 2022, the total principal amount outstanding under the TopPop Notes was $4,900,000, exclusive of accrued and unpaid interest. 331: We are currently in discussions with holders of the TopPop Notes regarding possible solutions for the payment of the TopPop Notes, including the possible extension of the maturity date of the TopPop Notes for an additional year. There can be no assurance that our discussions will be successful and if we are not successful in finding an acceptable resolution to the existing default or the impending event of default, the noteholders will be able to seek judgement for the full amount due and may seek to foreclose on our assets. If this occurs,
any such remedy will have a material adverse effect on our business, results of operations and financial condition and is likely to negatively impact the price of our common stock. Holders of approximately $3.55 million of these notes have agreed to extend the term for 30 days and have indicated that they will not seek cash settlement prior to August 2023. ... 30 Docs ... </text> How much total interest will the company have to pay under the new agreement until August 2023 in dollars if the company managed to extend the "TopPop Notes" maturity date for one more year? Format your response as follows: "Therefore, the answer is (insert answer here)".

QWENLONG-L1-14B: Response (2192 tokens): <think> Okay, so I need to figure out how much total interest the company will have to pay under the new agreement until August 2023 if they extend the "TopPop Notes" maturity date for one more year. Let me start by understanding the problem. From the text, I see that the TopPop Notes were issued on July 26, 2021, with a principal amount of $4,900,000. They bear an interest rate of 10% per annum. The original maturity was on July 26, 2022, but there's a five-day grace period until July 31, 2022, before a default occurs. However, the company is in discussions to extend the maturity date by an additional year, so the new maturity would be around August 2023. I need to calculate the interest for this extended period. The original term was from July 26, 2021, to July 26, 2022, which is one year. If they extend it by one more year, the new maturity would be July 26, 2023, but the user mentioned until August 2023, so I'll consider the period up to August 2023. First, I should determine the exact period for which the interest is to be calculated. The original notes were from July 26, 2021, to July 26, 2022. If extended for another year, the new maturity is July 26, 2023, but the user specifies until August 2023, which is a bit beyond that.
However, I think the interest is calculated up to the maturity date, so maybe just until July 26, 2023, unless the extension is until August, in which case I need to adjust. But the user says the company managed to extend the maturity date for one more year, so the total period would be two years from the original issue date. Let me check the details again. Wait, the original notes matured on July 26, 2022, and the company is discussing an extension for an additional year. So the new maturity would be July 26, 2023, making the total term two years. But the user mentions until August 2023, so perhaps the extension is until August 31, 2023, or just an extra year beyond the original 2022 date. I think the key is to calculate the interest for the two-year period from July 26, 2021, to July 26, 2023, and then maybe add a bit more if the extension is beyond that. But let’s focus on
the period until August 2023. The interest rate is 10% per annum, and the notes are simple interest, I believe, unless stated otherwise. The interest is calculated on the principal amount, which is $4,900,000. So, the interest for one year would be $4,900,000 * 10% = $490,000. If they extend it for another year, that's another $490,000, making the total interest $980,000. But wait, the original notes had a maturity in 2022, and the extension is for one more year, so from 2022 to 2023. Let me check if the interest is only for the extended year or for the entire two years. Looking at the text, it says the TopPop Notes bear interest at 10% per annum and mature on July 26, 2022. The extension is for an additional year, so the interest would be for the period from July 26, 2022, to July 26, 2023, which is one year. Therefore, the interest for the extended period would be $490,000. But the user is asking for the total interest until August 2023. So if the extension is until August 2023, that's a bit more than a year. However, typically, interest is calculated on an annual basis, and if the extension is for a full year, the interest would be $490,000. If the extension is only until August 2023, that's about 13 months, so the interest would be $4,900,000 * 10% * (13/12) ≈ $508,333.33, but I'm not sure if the extension is for a full year or just until August. Wait, in the text under Note 7, it says: "The Company has agreed to keep this agreement in place and for the three months ended March 31, 2022, the Company has recorded consulting expense of $45,000." That's about the leasehold improvements, not the TopPop Notes. Looking again at Note 14, which is under the leasehold improvements section, I don't see any specific info on the TopPop Notes beyond their principal and interest rate. So, I'll assume that the extension is for an additional year beyond the original 2022 maturity, making the total term two years.
Therefore, the total interest would be $490,000 for the first year and another $490,000 for the second year, totaling $980,000. Wait, but in Note 7, under the leasehold improvements, it says: "The Company has agreed to keep this agreement in place and for the three months ended March 31, 2022, the Company has recorded consulting expense of $45,000." No, that’s a different note. I’m getting confused. Wait, let me go back to the relevant sections. In Note 7 (which is actually under Item 7, line 15), it says: "The Promissory Notes bear interest at the rate of 10% per annum and mature on July 26, 2022." Then, in the Recent Developments section (Item 2, line 25), it mentions that the TopPop Notes have a maturity on July 26, 2022, and that the company is in discussions to possibly extend the maturity date for an additional year. So, if the maturity is extended to August 2023, that’s an additional year beyond 2022. Therefore, the interest for the
original year (2021-2022) was $490,000. Now, with the extension, they would have to pay interest for another year, so another $490,000, totaling $980,000. But wait, in the text under Item 2, line 25, it says: "the Company is currently in discussions with holders regarding possible solutions for the payment of the Promissory Notes, including the possible extension for an additional year." So, the extension is for an additional year beyond the original 2022 maturity, making the new maturity July 26, 2023. So, the interest would be for two years: from July 26, 2021, to July 26, 2023. Therefore, total interest would be $4,900,000 * 10% * 2 = $980,000. But let me double-check. The original maturity was July 26, 2022, and the extension is for an additional year, so the notes would now mature on July 26, 2023. Therefore, the interest would be for the period from July 26, 2021, to July 26, 2023, which is exactly two years. So, the total interest would be $4,900,000 * 10% * 2 = $980,000. Wait, but in the text under Item 15, line 15, it says: "the Company was late in filing its Annual Report on Form 10-K for the year ended December 31, 2021, which was filed on June 15, 2022. During the first quarter ended March 31, 2022, there was no business activity or transactions." That doesn’t affect the interest calculation. Another point: the notes are secured by the membership interests in TopPop, but the interest calculation is straightforward. It’s simple interest unless compounded, which isn’t mentioned here. The text doesn’t specify compounding, so I’ll assume it’s simple interest. So, for each year, the interest is 10% of the principal. Since the principal remains $4,900,000, and the period is two years, the total interest is $490,000 per year * 2 = $980,000. Wait, but in the recent developments (Item 2, line 25), it mentions that the TopPop Notes have a maturity on July 26, 2022, and the company is discussing an extension for an additional year. 
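The two-year simple-interest total that the reasoning above converges on can be reproduced in a few lines (a minimal sketch; non-compounding interest is assumed, as in the transcript):

```python
# Simple (non-compounding) interest on the TopPop Notes, as assumed in the
# transcript: principal * annual rate * number of years.
principal = 4_900_000  # USD
annual_rate = 0.10     # 10% per annum
years = 2              # July 26, 2021 through July 26, 2023, with the one-year extension

total_interest = principal * annual_rate * years
print(f"${total_interest:,.0f}")  # $980,000
```

Setting years = 1 in the same formula gives the $490,000 single-year figure the transcript also discusses.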
So, the extension would be until July 26, 2023, making it a two-year period. Therefore, the total interest the company would have to pay until August 2023 under the new agreement is $980,000. </think> The TopPop Notes have a principal amount of $4,900,000 and an annual interest rate of 10%. If the maturity date is extended for one additional year until August 2023, the total interest for the two-year period would be $490,000 per year, totaling $980,000. Therefore, the answer is $980,000.

Grounding Behavior:

Explanation of Grounding Behavior:
1. "From the text, I see that the TopPop Notes were issued on July 26, 2021, with a principal amount of $4,900,000."
2. "Looking at the text, it says the TopPop Notes bear interest at 10% per annum and mature on July 26, 2022."
3. "Wait, in the text under Note 7, it says: ’The Company has agreed to keep this agreement in place and for the three months ended March 31, 2022, the Company has recorded consulting expense of $45,000.’"
4. "Looking again at Note 14, which is
under the leasehold improvements section, I don’t see any specific info on the TopPop Notes beyond their principal and interest rate."
5. "Wait, let me go back to the relevant sections. In Note 7 (which is actually under Item 7, line 15), it says: ’The Promissory Notes bear interest at the rate of 10% per annum and mature on July 26, 2022.’"

Backtracking Behavior:
1. First Backtracking: The model initially considers the period up to August 2023 but then questions whether the interest is calculated up to the maturity date (July 26, 2023) or beyond. It revises its approach to focus on the two-year period from July 26, 2021, to July 26, 2023.
2. Second Backtracking: The model initially calculates the interest for two years but then questions whether the interest is only for the extended year (2022-2023) or for the entire two years. It revises its approach to consider the interest for the extended year only.
3. Third Backtracking: The model considers the possibility of the extension being until August 2023 (13 months) and calculates the interest accordingly. However, it then questions whether the extension is for a full year or just until August and revises its approach to assume a full year.
4. Fourth Backtracking: The model mistakenly references Note 7 (leasehold improvements) instead of the relevant section for the TopPop Notes. It corrects itself by going back to the relevant sections (Note 14 and Item 2).
5. Fifth Backtracking: The model initially calculates the total interest as $980,000 but then double-checks the period and confirms that the interest is for two years (July 26, 2021, to July 26, 2023).
6. Sixth Backtracking: The model considers the possibility of compounded interest but revises its approach to assume simple interest since compounding is not mentioned in the text.

Verification Behavior:
The chain-of-reasoning provided contains several instances where the model checks and verifies intermediate results or reasoning steps.
Here are the key instances of verification behavior:
1. Verification of the period for interest calculation:
• The model checks the exact period for interest calculation, considering both original and extended maturity dates.
• Example statements: "Let me check the details again"; "Wait, the original notes matured on July 26, 2022, and the company is discussing an extension for an additional year."
2. Verification of the interest calculation:
• The model repeatedly verifies the interest calculation for the extended period.
• Example statements: "So, the interest for one year would be $4,900,000 * 10% = $490,000. If they extend it for another year, that’s another $490,000, making the total interest $980,000."; "Wait, but in the text under Item 2, line 25, it says: ’the Company is currently in discussions with holders regarding possible solutions for the payment of the Promissory Notes, including the possible extension for an additional year.’"
3. Verification of the extension period:
• The model examines whether the extension is for a full year or a partial period.
• Example statement: "But the user is asking for the total interest until August 2023. So if the extension is until August 2023,
that’s a bit more than a year."
4. Verification of the interest type:
• The model confirms whether simple or compound interest applies.
• Example statement: "The text doesn’t specify compounding, so I’ll assume it’s simple interest."
5. Final verification of the total interest:
• The model double-checks the complete interest calculation.
• Example statements: "So, the total interest would be $4,900,000 * 10% * 2 = $980,000."; "Wait, but in the recent developments (Item 2, line 25), it mentions that the TopPop Notes have a maturity on July 26, 2022, and the company is discussing an extension for an additional year."

Subgoal Setting Behavior:
1. Understand the problem: The model starts by identifying the need to calculate the total interest under the new agreement.
2. Determine the period for interest calculation: The model breaks down the task by figuring out the exact period for which the interest needs to be calculated, considering the original and extended maturity dates.
3. Clarify the extension details: The model attempts to clarify whether the extension is for a full year or until August 2023, which affects the interest calculation.
4. Calculate interest for the extended period: The model calculates the interest for the extended period, considering whether it is a full year or a partial year.
5. Verify assumptions: The model checks the text to confirm assumptions about the interest rate, compounding, and the exact period of the extension.
6. Finalize the total interest: The model concludes by calculating the total interest for the two-year period based on the verified assumptions.
arXiv:2505.17671v1 [cs.CL] 23 May 2025

MIDB: Multilingual Instruction Data Booster for Enhancing Multilingual Instruction Synthesis

Yilun Liu*, Chunguang Zhao*, Xinhua Yang*, Hongyong Zeng, Shimin Tao, Weibin Meng, Minggui He, Chang Su, Yan Yu, Hongxia Ma, Li Zhang, Daimeng Wei, Hao Yang
Huawei, China
{liuyilun3,zhaochunguang4,yangxinhua2}@huawei.com

Abstract

Despite doubts about data quality, instruction synthesis has been widely applied to instruction tuning (IT) of LLMs as an economical and rapid alternative. Recent endeavors focus on improving data quality for synthesized instruction pairs in English and have facilitated IT of English-centric LLMs. However, data quality issues in multilingual synthesized instruction pairs are even more severe, since the common synthesizing practice is to translate English synthesized data into other languages using machine translation (MT). Besides the known content errors in these English synthesized data, multilingual synthesized instruction data are further exposed to defects introduced by MT and face insufficient localization for the target languages. In this paper, we propose MIDB, a Multilingual Instruction Data Booster that automatically addresses the quality issues in multilingual synthesized data. MIDB is trained on around 36.8k revision examples, produced by human linguistic experts across 16 languages, and can thus boost low-quality data by addressing content errors and MT defects and by improving localization in these synthesized data. Both automatic and human evaluation indicate that MIDB not only steadily improved instruction data quality in 16 languages, but also significantly enhanced the instruction-following and cultural-understanding abilities of multilingual LLMs fine-tuned on MIDB-boosted data.
1 Introduction

Large language models (LLMs) have made significant strides in their performance in English, achieving impressive capabilities across a range of natural language processing (NLP) tasks (Achiam et al., 2023; DeepSeek-AI, 2025). However, the multilingual abilities of most LLMs remain relatively underdeveloped (Lai et al., 2024), particularly due to the predominance of English in the pretraining data used by many popular open-source LLMs, such as the LLaMA series (Touvron et al., 2023a). For example, in the case of LLaMA-2 (Touvron et al., 2023b), the ratio of non-English languages in its pretraining corpus is merely around 2%, which could significantly limit the multilingual capacity of models fine-tuned on it.

*Equal contribution.

Figure 1: Performance improvement (%) of LLMs trained with MIDB-boosted data compared with those with original data, on abilities of multilingual instruction-following (AlpacaEval test set), multilingual dialogues (two turns in MT-Bench test set) and cultural understanding (BLEnD test set). See code-name mapping of the 16 supported languages in Appendix C.1.
While continual pretraining with large amounts of multilingual corpora is an effective strategy to enhance multilingual performance, it is resource-intensive (Fujii et al., 2024). A more widely applicable approach is multilingual instruction tuning (IT), which introduces multilingual instruction pairs (i.e., instruction-response
pairs) during the human alignment phase to enhance the model’s ability to understand and process multiple languages (Chen et al., 2024b; Shaham et al., 2024). Despite advancements in training methods for multilingual IT (Zhu et al., 2023; Zhang et al., 2024; Huo et al., 2025), obtaining high-quality multilingual instruction data remains a significant challenge, especially for low-resource languages with limited linguistic resources. Compared to monolingual contexts (e.g., English), manually creating high-quality instruction datasets for multiple languages naturally incurs higher social utility costs and resource demands (Üstün et al., 2024). In contrast, using instruction synthesis techniques to automatically generate large-scale multilingual instruction data presents a more cost-effective and scalable alternative.

While research on instruction synthesis for English has flourished (Wang et al., 2023b; Taori et al., 2023; Xu et al., 2024), multilingual instruction synthesis has largely relied on machine translation (MT) to adapt synthesized English datasets to target languages (Lai et al., 2024; Chen et al., 2024b; Huo et al., 2025). A common approach involves translating the English Alpaca dataset (Taori et al., 2023), which contains 52k instruction pairs generated by GPT-3.5, into other languages. Although MT provides a practical solution for rapid adaptation, it introduces several data quality issues. As is also noted by Chen et al. (2024c), datasets translated from synthesized English sources often suffer from significant quality problems:

(1) Content errors in the source (e.g., English) datasets. Synthesized instruction datasets like Alpaca often contain content errors and defects due to LLM hallucinations during data generation (Chen et al., 2024a; Liu et al., 2024; Ge et al., 2024).
These errors range from surface-level issues like formatting and grammatical mistakes (Ruebsamen, 2023) to deeper defects such as logical inconsistencies, factual inaccuracies, and one-sided explanations (Liu et al., 2024). While some of these can be addressed with rule-based filtering, deeper issues still require human intervention.

(2) Defects introduced by MT. Despite advances, current MT systems (e.g., LLM-based MT) still have significant shortcomings. According to the 2024 WMT shared task report (Kocmi et al., 2024), even the best MT systems produce about 10% translation defects in human evaluation. These defects are even more prevalent in widely used commercial MT systems and off-the-shelf LLMs. For instance, using back-translation, Lai et al. (2024) found that multilingual Alpaca datasets translated via the Google Translate API had an average error rate of around 30% in five low-resource languages. Such high translation error rates can lead to cascading issues, severely degrading the quality of multilingual instruction datasets.

(3) Insufficient localization. Direct translation of instruction data often leads to inadequate localization, as source instruction pairs tend to reflect the cultural and knowledge contexts of the source language (e.g., English), which can lead to mismatches in the target-language context. For instance, an English instruction like "write a poem starting with the letter A" would be nonsensical in Thai, because the Thai script has no letter "A". To avoid such issues, instructions should be localized to fit the linguistic and cultural
context of the target language, ensuring that models fine-tuned on these instructions align with the local language and culture.

As noted in prior studies (Zhou et al., 2024a; Li et al., 2024; Chen et al., 2024a), quality issues in multilingual instruction data can significantly reduce the effectiveness of multilingual IT. Fully resolving these issues would require extensive human involvement, which is particularly challenging for low-resource languages.

To address this, we introduce MIDB, an automatic Multilingual Instruction Data Booster aimed at improving the quality of synthesized instruction data for multilingual IT. Inspired by data engineering techniques in English (Liu et al., 2024; Ge et al., 2024), where LLMs learn from human ratings or revision patterns, we collaborated with linguistic experts to build a dataset of 36.8k manual revision examples across 16 languages, including low-resource ones. These revisions target low-quality instruction pairs, correcting content issues (e.g., accuracy, richness, relevance) and MT-induced defects (e.g., fluency, correctness). These examples enable LLMs to learn human boosting strategies and enhance multilingual instruction data automatically. We also ensured that the dataset preserves language-specific features and emphasizes cultural and linguistic localization. This dataset is denoted as the Multilingual Expert Boosted (MEB) dataset. We then used the MEB dataset to train MIDB to automate the enhancement of synthesized data quality for multilingual IT, leading to significant improvements in both multilingual data quality and model performance on multilingual abilities.

Our contributions are as follows:

• We analyzed issues in multilingual synthesized instruction datasets and proposed systematic criteria for data quality enhancement, advancing the field of multilingual IT.
• We proposed a unified model capable of automatically boosting instruction data quality (by 41.5% on average) across 16 languages, improving the performance of open-source multilingual models by 23.6% on average.

• We introduced a new approach for preserving local features of languages in synthesized instruction data, leading to improved cultural-understanding abilities (by 19.5% on average) of subsequently tuned LLMs.

In addition, since existing multilingual evaluation benchmarks for the instruction-following abilities of LLMs also suffer from defects introduced by MT, as observed by Chen et al. (2024c), we conducted manual localization of the two most popular benchmarks, AlpacaEval (Dubois et al., 2023) and MT-Bench (Zheng et al., 2023), into 16 languages with the help of professional translators. We release various assets, including code and datasets1.

2 Related Work

2.1 Multilingual Instruction Tuning

To improve the multilingual abilities of existing foundation LLMs, various training methods for enhancing multilingual IT have been proposed. Details are discussed in Appendix A.1. Despite improved training recipes, most existing methods directly adopt the synthesized Alpaca dataset and its MT versions as the training set, potentially hampering the effectiveness of their approaches due to the known data quality issues. Our work fills this gap by introducing an instruction data booster for multilingual IT.

2.2 Instruction Data Synthesis

The rapid development of open-source LLMs largely relies on instruction data synthesis, where instruction pairs can be efficiently distilled from powerful LLMs (e.g., GPT-4), saving the labor of large-scale
https://arxiv.org/abs/2505.17671v1
human annotations, as detailed in Appendix A.2. However, most existing technologies for instruction synthesis are designed, implemented, and evaluated for English, leaving the synthesis of multilingual instruction data underexplored. Compared with existing methods, our work builds upon the concept of refinement-based methods but focuses on supporting multiple languages, thereby pioneering multilingual instruction synthesis.

1 https://anonymous.4open.science/r/MIDB-3F02

Figure 2: Illustrations on (a) training stage and (b) inference stage of MIDB.

3 Methodology

The overview of the training and inference of MIDB is shown in Fig. 2. To construct the training dataset for MIDB, a thorough manual correction and revision of subsets of the multilingual Alpaca datasets (MT versions of the English original) was first conducted. Section 3.1 introduces the profiles of the human experts involved in our study, and Section 3.2 describes the details of MEB Dataset construction, including the criteria and results of manual enhancement of the multilingual synthesized datasets. In Section 3.3, we discuss the training methods of MIDB. Section 3.4 discusses the construction of our multilingual test sets.

3.1 Profile of Involved Multilingual Human Experts and Task Allocation Strategy

We recruited and actively worked with a group of language experts to accomplish the construction of the MEB dataset, the two benchmarks, and human evaluation. See details on their expertise, allocation strategies, and quality control in Appendix B.

Figure 3: Typical issues addressed in the MEB Dataset. (The figure shows three boosted examples: a Japanese response to "Name 3 common punctuation marks" localized from English punctuation to kuten (。), touten (、), and kagi brackets (「」); a German code-identification response in which MT had mistakenly translated the "if" keyword to "wenn", corrected back to "if(20>18) { printf(...); }"; and a French number-sequence response (1, 3, 6, 10, 15) enriched with the intermediate difference steps leading to the answer 21.)

3.2 Building MEB Dataset

Preliminary Data Quality Study by Experts. As the initial step, our language experts inspected several popular multilingual IT datasets, which were machine-translated from the synthesized English Alpaca Dataset (Taori et al., 2023). We identified several notable issues within these samples and grouped them into the three most common categories. Fig. 3 illustrates typical cases from each category, along with corresponding expert suggestions:

(1) Content Errors and Defects: The first example illustrates a common content defect in synthesized instruction pairs: a lack of comprehensiveness due to the absence of the intermediate reasoning process.

(2) MT Defects: The second category concerns translation quality, which can be difficult for MT models to maintain in certain contexts. The example in Fig. 3 addresses a common MT error, where the conditional statement "if" was mistakenly translated into the target language, potentially leading to catastrophic errors in the code compiler.

(3) Localization Issues: The third category highlights the challenge of localization within multilingual contexts. In this case, the response to a Japanese instruction, "List 3 common punctuation marks", mistakenly included English punctuation marks.
With the help of language experts, the answer was accurately localized to incorporate Japanese-specific punctuation marks.

Category (Ratio) | Criteria
Content Boosting (22.9%) | Contextualization, Relevance, Feasibility, Timeliness, Humanization, Comprehensiveness, Richness, Correctness, Readability, Safety
Translation Boosting (24.4%) | Fluency, Grammar, Translation Elegancy, Omitted translation, Spelling, Incorrect translation
Localization (52.7%) | Culture localization, Geocultural term repair, Ideology localization, Expression localization

Table 1: Manual evaluation and enhancement criteria for quality issues in multilingual synthesized datasets.

Criteria for Building MEB Dataset. The categories derived from the collection of typical issues above have been further summarized and listed as criteria in Table 1 for manual enhancement.

The "Content Boosting" category is primarily inspired by the criteria proposed by Liu et al. (2024), who introduced content revision criteria for English IT data and demonstrated their effectiveness through various experiments. Their English-based standards for content errors in synthesized data remain applicable in multilingual contexts. For example, criteria such as Relevance and Comprehensiveness are universally relevant and independent of the specific language. As such, we have inherited many of these criteria from their work.

The "Translation Boosting" category is derived from the professional translation standards of the cooperating language service center, reflecting aspects that are challenging for MT models, such as "Omitted Translation" and "Translation Elegancy".

Localization-related criteria are the most challenging aspect of our work, mainly due to the limited availability of low-resource language experts and the lack of public instruction data. To address this, we propose four novel data-boosting criteria related to localization:

(1) Cultural Relevance: Adapting instruction pairs to reflect local culture, including references to local music, movies, and food where appropriate.

(2) Geo-cultural Terms: Recognizing that some entities are known by different names across regions.
For instance, the Himalayas are referred to as Mount Everest in some languages. Instruction pairs should use terminology that aligns with these regional variations.

(3) Ideological Localization: This criterion addresses differences in religion, history, and local media, where the same input may yield different responses based on these factors. Some instruction pairs may need to be entirely restructured to align with these differences.

(4) Local Expression: Emphasizing the use of local, culturally relevant expressions instead of direct translations. These expressions help retain the unique features of each language, much like an ethnic costume showcases cultural identity.

Manual Enhancement Results. A total of 16 languages, as listed in Table 5, were selected for inclusion in our study based on an evaluation of factors such as geographic coverage, popularity, impact, and available human resources. Notably, the selection includes six low-resource languages, ensuring coverage of all geographical regions. Further details are provided in Appendix C.2. After dedicating over 485 person-days to the construction of the MEB dataset, we curated 36.8k manually boosted instruction pairs across the 16 languages, averaging approximately 2.3k pairs per language. As shown in Table 1, over 52.7% of the instruction pairs have been localized according to the proposed criteria, including both expression localization and cultural adaptation. Approximately 22.9% and 24.4% of the manual enhancements address content issues and MT defects, respectively.

3.3 Training Design of MIDB

Building Training Samples for MIDB. The manual enhancement examples are subsequently transformed into training samples for MIDB. As
shown in Fig. 4, a training sample consists of Prompt, Input, and Output. The Prompt is adapted from Liu et al. (2024), serving as a straightforward instruction for content refinement during the training of MIDB. The Input consists of an original instruction pair in the MEB Dataset, with its <|instruction|>, <|input|>, and <|response|> fields concatenated into a string. The Output is the corresponding manually boosted instruction pair, serving as a learning target for MIDB to activate expert-aligned multilingual instruction boosting capabilities in the foundation LLM.

Figure 4: Template of MIDB's training samples.
Prompt: "Improve the following content to be more specific, detailed with more logical steps and grammarly corrected; avoid generating incorrect and misleading information in output; minimize hallucination in output."
Input: <|instruction|>XX1\n<|input|>XX2\n<|response|>XX3
Output: <|instruction|>YY1\n<|input|>YY2\n<|response|>YY3

Joint Training Goal for 16 Languages. To reduce deployment cost and facilitate connections between languages, we trained MIDB as a unified model capable of boosting instruction pairs from all 16 languages. Starting from the backbone θ, the joint training goal shown in Eq. (1) optimizes it into θ_m, i.e., MIDB. C_i represents the subset of the training samples from the i-th language among these 16 languages. For the j-th training sample in C_i, x_j is constructed by concatenating Prompt and Input, and y_j is directly the corresponding Output. Following Liu et al. (2024), only high-quality subsets are utilized in training MIDB. See detailed analysis and the choice of backbone θ in Appendix E.

θ_m = arg max_θ Σ_{i∈[1,16]} Σ_{x_j∈C_i} log P(y_j | x_j; θ)    (1)

3.4 Construction of Multilingual Test Sets

A team of 20 professional experts spent 175 person-days extending AlpacaEval and MT-Bench to 16 languages. We denote them as AlpacaEval-16L and MT-Bench-16L. Details are in Appendix D.
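The training-sample construction above (the Fig. 4 template, where x_j concatenates Prompt and Input and y_j is the boosted Output of Eq. (1)) can be sketched as follows. This is a minimal illustration; the helper names (`serialize`, `build_sample`) and the toy example pair are ours, not from the released code.

```python
# Sketch of turning a MEB revision example into a MIDB training sample
# following the Fig. 4 template. Names and example data are illustrative.

PROMPT = (
    "Improve the following content to be more specific, detailed with more "
    "logical steps and grammarly corrected; avoid generating incorrect and "
    "misleading information in output; minimize hallucination in output."
)

def serialize(pair: dict) -> str:
    """Concatenate an instruction pair into the <|...|> string format."""
    return (f"<|instruction|>{pair['instruction']}\n"
            f"<|input|>{pair['input']}\n"
            f"<|response|>{pair['response']}")

def build_sample(original: dict, boosted: dict) -> dict:
    """x_j = Prompt + serialized original pair; y_j = serialized boosted pair."""
    return {"x": PROMPT + "\n" + serialize(original),
            "y": serialize(boosted)}

# Toy example mirroring the Japanese localization case in Fig. 3.
sample = build_sample(
    {"instruction": "Name 3 common punctuation marks.", "input": "",
     "response": "Period (.), comma (,), quotation marks."},
    {"instruction": "Name 3 common punctuation marks.", "input": "",
     "response": "Kuten (。), touten (、), kagi brackets (「」)."},
)
```

Training then maximizes log P(y | x; θ) over such pairs, pooled across all 16 language subsets.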
4 Experiment

In Section 4.1, we introduce our evaluation setups, encompassing both manual and automatic evaluations. Section 4.2 investigates the effectiveness of MIDB in enhancing the data quality of synthesized multilingual instruction datasets. Section 4.3 assesses the performance of multilingual LLMs tuned with the MIDB-boosted dataset. Furthermore, Section 4.4 verifies MIDB on an out-of-distribution dataset. See an ablation study on MIDB in Appendix E and implementation details in Appendix G.

4.1 Evaluation Approach

Human. Seven experts independently rate each instruction pair or model response based on the same criteria as in Table 1, without knowledge of the sources of the evaluated samples. They assess the satisfaction level across predefined dimensions and give a comparison-based judgment on two candidate samples. However, human evaluation is inherently limited in terms of efficiency and scalability, due to its high cost and the need for domain expertise.

Figure 5: Win-lose-tie ratios of MIDB-boosted data compared with original data, evaluated by GPT-4o.

Furthermore, due to the limited resources of senior language experts, we are unable to conduct human evaluation for a larger set of languages. Therefore, our language selection for human evaluation covers both high-usage and
low-resource languages for coverage and representativeness.

GPT-4o. Following many studies that use GPT evaluation as a powerful and quick approach (Ge et al., 2024; Liu et al., 2024; Chen et al., 2024a), we use GPT-4o to compare and score the responses of two candidates, based on the criteria from Zheng et al. (2023) (detailed prompt in Appendix F.1). This prompt asks GPT-4o to judge the helpfulness, relevance, accuracy, and detail of each response (i.e., which is better) and provide a rationale. However, this automatic method suffers from reported evaluation biases when the order of candidates is changed (Wang et al., 2023a), despite GPT-4o's generally strong correlation with human judgment (Achiam et al., 2023). To alleviate the bias caused by positional deviation, we perform both a forward evaluation and a backward one for each pair of candidates, and define the final judgment:

• #win: win twice, or win once and tie once
• #lose: lose twice, or lose once and tie once
• #tie: tie twice, or win once and lose once

Metrics. Several metrics are utilized for presenting the judgments: (1) By default we use the win-lose-tie ratio, given by #win/#all, #lose/#all, and #tie/#all, where #all is the number of test set samples; (2) For easy comparison between multiple baselines, we use the win rate, formulated as (#win + #tie)/#all; and (3) the winning score, formulated as (#win − #lose)/#all + 1, to conveniently spot the winning side (score > 1).

4.2 Data Quality of MIDB-boosted Dataset

4.2.1 GPT Evaluation

We randomly sampled 520 data samples (strictly excluding those in the MEB dataset) for each language from the machine-translated Alpaca 52k dataset for quality assessment. As mentioned in Section 4.1, GPT-4o is employed to judge the winner for each original and MIDB-boosted data pair, which we use as an automatic quality indicator for the subset. Fig. 5 presents the win-lose-tie ratios of the subset in different languages after MIDB boosting.
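The judgment-combination rule and the metrics defined in Section 4.1 can be sketched in a few lines. This is an illustrative implementation under our own function names, not the paper's released evaluation code.

```python
# Sketch of the order-debiased judgment rule and the reporting metrics.
# 'win'/'lose'/'tie' are from the boosted candidate's perspective.
from collections import Counter

def final_judgment(forward: str, backward: str) -> str:
    """Combine forward and backward GPT-4o verdicts per the paper's rule:
    win  = win twice, or win once and tie once
    lose = lose twice, or lose once and tie once
    tie  = tie twice, or win once and lose once
    """
    pair = {forward, backward}
    if pair <= {"win", "tie"} and "win" in pair:
        return "win"
    if pair <= {"lose", "tie"} and "lose" in pair:
        return "lose"
    return "tie"

def metrics(judgments: list[str]) -> dict:
    """Win-lose-tie ratio, win rate, and winning score over a test set."""
    c, n = Counter(judgments), len(judgments)
    return {
        "win_lose_tie": (c["win"] / n, c["lose"] / n, c["tie"] / n),
        "win_rate": (c["win"] + c["tie"]) / n,
        # > 1 means the boosted side wins overall
        "winning_score": (c["win"] - c["lose"]) / n + 1,
    }
```

For example, judgments of [win, win, lose, tie] give a win rate of 0.75 and a winning score of 1.25.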
The results show that after MIDB boosting, most languages exhibit significantly higher win ratios than lose ratios in terms of data quality. For example, Portuguese has a much higher winning ratio (46%) than its losing ratio (9%), and the other languages all show similar characteristics. This improvement suggests that the MIDB-boosted dataset primarily contains high-quality instruction pairs, which can enhance LLM instruction tuning while maintaining the original dataset's integrity.

4.2.2 Human Evaluation on Data Quality

Figure 6: Win-lose-tie ratios of MIDB-boosted data compared with original data, evaluated by human reviewers for six languages (Russian, Portuguese, Korean, Japanese, Greek, French).

As described in Section 4.1, human evaluation was also conducted to assess the quality of the data.
To achieve this, we randomly selected 50 instruction pairs from the original subsets along with their MIDB-boosted counterparts, and obtained independent ratings from multilingual experts who were unaware of the source of the samples. The results in Fig. 6 indicate that, after applying MIDB boosting, the data achieved significantly higher average win ratios across all reviewers. It is worth noting that the enhancement in data quality is
consistently more pronounced across all 16 languages when rated by humans, as compared to ratings by GPT-4o. This is possibly due to nuanced but important improvements that are more perceivable to humans, such as humanized tones and culturally appropriate expressions.

Figure 7: Performances of Alpaca-MIDB compared with existing LLMs on multilingual test sets (per-language win rates on AlpacaEval-16L, MT-Bench-16L Round 1, and MT-Bench-16L Round 2).

4.3 Evaluation of LLMs Tuned on MIDB-boosted Dataset

Starting from the Alpaca-52k dataset, we generate its MT versions in 16 languages. The translated datasets are then further boosted using MIDB, resulting in the final MIDB-boosted versions. In this section, we evaluate the Alpaca-MIDB models, which are trained on LLaMA-3-8B (Grattafiori et al., 2024) following the same setting as the Alpaca (Taori et al., 2023) model, but with the MIDB-boosted datasets replacing the translated Alpaca-52K datasets. Following Section 3.4, we assess them on three datasets: AlpacaEval-16L and MT-Bench-16L, which were extended by our experts, and BLEnD (Myung et al., 2024). These test sets ensure evaluation of multilingual abilities in instruction following, multi-turn dialogue, and culture-specific understanding.

4.3.1 Comparison with Existing LLMs

Baselines. We compared our models against several popular open-source models that focus on multilingual tasks.
BayLing (Zhang et al., 2023) and PolyLM (Wei et al., 2023) are two multilingual LLMs with advanced language alignment capabilities and strong multilingual performance. The third baseline (LightBlue, 2025) is a recent model distilled from DeepSeek (DeepSeek-AI, 2025) with extra training on multilingual chain-of-thought data. Additionally, we trained Alpaca-M models using the pre-boosted Alpaca datasets. All models are compared against reference answers generated by LLaMA3.1-8B-Instruct, a strong multilingual baseline trained with extensive SFT and RLHF.

Result. As shown in Fig. 7, Alpaca-MIDB performs exceptionally well on the AlpacaEval-16L benchmark, achieving notably high scores in low-resource languages such as Czech, Greek, and Vietnamese. On MT-Bench-16L rounds 1 and 2, Alpaca-MIDB also maintains its leading position. However, due to the varying proficiency of the foundation model (i.e., LLaMA3-8B) across different languages, the performance of MIDB may fluctuate across certain language settings. Overall, Alpaca-MIDB excels across all benchmarks, particularly in handling low-resource multilingual and multi-turn interactive tasks, indicating its strong multilingual capabilities, which originate from the high quality of its MIDB-boosted training data.

4.3.2 Human Evaluation on Alpaca-MIDB

As discussed in Appendix B, in addition to the automatic evaluation, seven human experts independently assessed the responses generated by Alpaca-MIDB and the original Alpaca-M model on the AlpacaEval-16L and MT-Bench-16L test sets. The reviewers were unaware of the sources of the responses. As shown in Table 2, all reviewers consistently gave Alpaca-MIDB higher ratings
(winning scores ranging from 1.28 to 1.72) compared with the original Alpaca-M model. Human reviewers also left comments to justify their evaluations. By analyzing them, we observed that responses generated by Alpaca-MIDB provided more detailed, human-like, well-structured, and readable content. Furthermore, Alpaca-MIDB was noted to have a richer reasoning process, particularly in addressing programming-related problems, resulting in superior content quality and outcomes. This improved performance of Alpaca-MIDB further confirms the effectiveness of the boosts made by MIDB, which successfully enhance the instruction-following and multi-turn conversation abilities of subsequently tuned LLMs by optimizing the quality of the underlying instruction dataset.

Language    AlpacaEval  MT-Bench Round 1  MT-Bench Round 2
French      1.68        1.56              1.52
Greek       1.56        1.46              1.28
Japanese    1.72        1.68              1.40
Korean      1.36        1.38              1.38
Portuguese  1.62        1.84              1.88
Russian     1.68        1.72              1.52

Table 2: Winning scores from human evaluation of Alpaca-MIDB vs. Alpaca-M (score > 1 is victory).

4.3.3 Improvement of Alpaca-MIDB in Cultural Understanding Ability

English-centric LLMs often suffer from limited localization capabilities for other cultures, as most of the instruction pairs used during training reflect English contexts, leading to linguistic and cultural bias when applied in multilingual environments. To verify the effectiveness of MIDB in enhancing cultural understanding, we employed BLEnD (Myung et al., 2024), a manually constructed question-answering benchmark specifically designed to evaluate LLMs' understanding of daily knowledge across diverse cultures and languages. The accuracy is calculated as the average performance under two prompts: 1) directly asking the LLM to provide answers; 2) adding a role setting asking the LLM to answer as a native of the target culture.
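The two-prompt accuracy described above can be sketched as follows: score each prompting style by the share of answers judged correct (the correctness judgments come from GPT-4o matching against the labeled entity), then average the two. The helper names and the toy judgment lists are ours, purely illustrative.

```python
# Illustrative sketch (not the paper's code) of the BLEnD scoring protocol:
# accuracy is the percentage of verified-correct answers, averaged over the
# direct prompt and the native-role prompt.

def accuracy(verified: list[bool]) -> float:
    """Percentage of questions whose answer was judged correct."""
    return 100 * sum(verified) / len(verified)

def blend_score(direct: list[bool], role: list[bool]) -> float:
    """Average accuracy under the direct and role-setting prompts."""
    return (accuracy(direct) + accuracy(role)) / 2

# Toy example: 3/4 correct with the direct prompt, 1/2 with the role prompt.
score = blend_score([True, True, True, False], [True, False])  # 62.5
```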
We use GPT-4o to verify whether the LLM's response matches the manually labeled entity for each question (Appendix F.2), and calculate the percentage of correct answers as the score. As shown in Table 3, models trained on datasets boosted with MIDB achieve performance improvements ranging from 12.1% to 30.8%.

Language    Original  MIDB-Boosted  Up ↑
Arabic      15.03     16.85         12.1%
Greek       18.72     22.03         17.7%
Spanish     25.00     28.49         14.0%
Indonesian  20.62     25.30         22.7%
Korean      18.50     24.19         30.8%

Table 3: Accuracy scores (0-100) of Alpaca-M and Alpaca-MIDB on culture-specific knowledge.

These results highlight the effectiveness of MIDB in enriching training data with culturally relevant and localized knowledge, thereby significantly enhancing the models' ability to understand and generate content that is contextually appropriate for non-English-speaking populations.

Figure 8: Win-lose-tie ratios of MIDB-boosted vs. original data on Dolly-15k, an out-of-distribution dataset.

4.4 Testing on an Out-of-distribution Dataset

In addition to testing MIDB on Alpaca-52k (in-distribution), we also select Dolly-15k (Conover et al., 2023), a widely used instruction dataset, to evaluate the generalization capability of our method on out-of-distribution (OOD) data. As depicted in Fig. 8, the improvement in OOD data quality after MIDB boosting is as significant as that observed with in-distribution data. This observa-
tion suggests the strong generalization capabilities and promising application potential of MIDB.

5 Conclusion

In this paper, we proposed MIDB, an automatic tool for data cleaning and quality enhancement for multilingual IT. Experiments on 16 languages indicate universal improvements in data quality and subsequent model performance, suggesting a promising potential for MIDB to be integrated into the data synthesis pipelines of multilingual LLMs. Furthermore, the improvement in cultural understanding ability on the BLEnD test set renders MIDB a unique tool for enhancing the localized experience of multilingual LLMs. Moreover, the effectiveness of MIDB on out-of-distribution datasets supports its adaptability to more general scenarios. Future work includes expanding the supported languages and testing on real-world scenarios and larger models.

6 Limitations

Despite the strong performance of MIDB across 16 languages, including several low-resource ones, its current evaluation and design scope remain limited in several important respects:

Limited coverage of heterogeneous instruction tuning datasets. The enhancement experiments for MIDB have so far been conducted only on a small number of publicly available instruction tuning datasets. The generalizability and effectiveness of MIDB on more diverse and heterogeneous instruction sources remain unverified, particularly in real-world applications where dataset structure, quality, and style may vary significantly.

Focus restricted to simple natural language instructions. MIDB primarily targets flat and concise natural language instructions. It does not yet address structurally complex instruction types such as those involving function calls, code generation, or structured outputs, which often require strict adherence to syntax, parameter alignment, and the ability to handle nested data representations.

Insufficient support for advanced reasoning and computational tasks.
Current evaluations have been limited to basic instruction-following benchmarks. Consequently, MIDB’s capacity to handle more cognitively demanding tasks—such as multi-step logical reasoning, mathematical computations, or compositional decision-making—has yet to be systematically explored or validated. Our future work aims to extend MIDB’s capabilities to these more complex instruction types, with particular emphasis on robust generalization to unseen formats and thorough evaluation from both syntactic and semantic perspectives.

7 Ethical Considerations
This work involves the curation and revision of multilingual instruction data across 16 languages. The following key aspects were considered:
Ethical and Professional Annotation Practices. All human-annotated revisions used in training MIDB were carried out by qualified linguistic experts under fair working conditions with appropriate compensation. This ensured high-quality data while upholding ethical standards.
Bias Awareness and Cultural Sensitivity. During the data correction and localization process, care was taken to avoid introducing cultural biases or normative judgments. However, residual biases may still persist. These could originate from the initial synthetic data or be introduced during the revision process.
Challenges and Future Directions. While MIDB aims to improve linguistic and cultural alignment, there is a risk that automated systems may reinforce dominant language norms or overlook underrepresented dialects. Future efforts should involve broader community participation, especially from speakers of low-resource and marginalized languages. This is essential for promoting more inclusive and equitable multilingual AI development.

References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Hongyi Cai, Yuqian Fu, Hongming Fu, and Bo Zhao. 2025. MergeIT: From selection to merging for efficient instruction tuning. arXiv preprint arXiv:2503.00034.
Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. 2024a. AlpaGasus: Training a better Alpaca model with fewer data. In International Conference on Learning Representations.
Pinzhen Chen, Shaoxiong Ji, Nikolay Bogoychev, Andrey Kutuzov, Barry Haddow, and Kenneth Heafield. 2024b. Monolingual or multilingual instruction tuning: Which makes a better Alpaca. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1347–1356.
Pinzhen Chen, Simon Yu, Zhicheng Guo, and Barry Haddow. 2024c. Is it good data for multilingual instruction tuning or just bad multilingual evaluation for large language models? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 9706–9726.
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free Dolly: Introducing the world’s first truly open instruction-tuned LLM.
DeepSeek-AI. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.
Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. 2023. AlpacaFarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36:30039–30069.
Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hiroki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, and Naoaki Okazaki. 2024.
Continual pre-training for cross-lingual LLM adaptation: Enhancing Japanese language capabilities. In First Conference on Language Modeling.
Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Mahong Xia, Zhang Li, Boxing Chen, Hao Yang, et al. 2024. Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 464–478.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. Preprint, arXiv:2106.09685.
Wenshuai Huo, Xiaocheng Feng, Yichong Huang, Chengpeng Fu, Baohang Li, Yangfan Ye, Zhirui Zhang, Dandan Tu, Duyu Tang, Yunfei Lu, et al. 2025. Enhancing non-English capabilities of English-centric large language models through deep supervision fine-tuning. arXiv preprint arXiv:2503.01275.
Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, et al. 2024. Findings of the WMT24 general machine translation shared task: The LLM era is here but MT is not solved yet. In Proceedings of the Ninth Conference on Machine Translation, pages 1–46.
Wen Lai, Mohsen Mesgar,
and Alexander Fraser. 2024. LLMs beyond English: Scaling the multilingual capability of LLMs with cross-lingual feedback. arXiv preprint arXiv:2406.01771.
Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. 2024. From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7595–7628.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.
LightBlue. 2025. DeepSeek-R1-Distill-Qwen-7B-Multilingual.
Yilun Liu, Shimin Tao, Xiaofeng Zhao, Ming Zhu, Wenbing Ma, Junhao Zhu, Chang Su, Yutai Hou, Miao Zhang, Min Zhang, et al. 2024. CoachLM: Automatic instruction revisions improve the data quality in LLM instruction tuning. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), pages 5184–5197. IEEE.
Junho Myung, Nayeon Lee, Yi Zhou, Jiho Jin, Rifki Putri, Dimosthenis Antypas, Hsuvas Borkakoty, Eunsu Kim, Carla Perez-Almendros, Abinew Ali Ayele, et al. 2024. BLEnD: A benchmark for LLMs on everyday knowledge in diverse cultures and languages. Advances in Neural Information Processing Systems, 37:78104–78146.
Gene Ruebsamen. 2023. Cleaned Alpaca dataset. GitHub repository.
Uri Shaham, Jonathan Herzig, Roee Aharoni, Idan Szpektor, Reut Tsarfaty, and Matan Eyal. 2024. Multilingual instruction tuning with just a pinch of multilinguality. In Findings of the Association for Computational Linguistics ACL 2024, pages 2304–2317.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.
Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Ahmet Üstün, Viraat Aryabumi, Zheng Yong, Wei-Yin Ko, Daniel D’souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, et al. 2024. Aya model: An instruction finetuned open-access multilingual language model. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15894–15939.
Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. 2023a. PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization. arXiv preprint arXiv:2306.05087.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-Instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics.
Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang
Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, Tianxiang Hu, Shangjie Li, Binyuan Hui, Bowen Yu, Dayiheng Liu, Baosong Yang, Fei Huang, and Jun Xie. 2023. PolyLM: An open source polyglot large language model. Preprint, arXiv:2307.06018.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. 2024. WizardLM: Empowering large pre-trained language models to follow complex instructions. In International Conference on Learning Representations.
Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, and Yang Feng. 2023. BayLing: Bridging cross-lingual alignment and instruction following through interactive translation for large language models. Preprint, arXiv:2306.10968.
Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, and Francesco Barbieri. 2024. PLUG: Leveraging pivot language in cross-lingual instruction tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7025–7046.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024a. LIMA: Less is more for alignment. Advances in Neural Information Processing Systems, 36.
Hang Zhou, Yehui Tang, Haochen Qin, Yujie Yang, Renren Jin, Deyi Xiong, Kai Han, and Yunhe Wang. 2024b. Star-Agents: Automatic data optimization with LLM agents for instruction tuning. Advances in Neural Information Processing Systems, 37:4575–4597.
Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. 2023.
Extrapolating large language models to non-English by aligning languages. arXiv preprint arXiv:2308.04948.

A Related Work Details
A.1 Multilingual Instruction Tuning
Chen et al. (2024b) combined multiple machine-translated versions of the Alpaca dataset to train one multilingual model and achieved improved performance compared with monolingual baselines. Building on this combination, Zhu et al. (2023) further added translation instructions to the training set, requesting the model to translate source sentences in English into target languages, to help transfer the knowledge learned in English. To make this transfer process more explicit, Zhang et al. (2024) composed specialized instruction pairs that ask the model to first process instructions in a pivot language (e.g., English) and then produce responses in the target language. Huo et al. (2025) further incorporated this English-thinking supervision into the internal layers of the model by supervising the hidden representations.
A.2 Instruction Data Synthesis
The pioneering attempt at instruction synthesis is Self-Instruct (Wang et al., 2023b), which leverages LLMs to produce instruction pairs given a small manually written seed dataset. Subsequently, the Alpaca project (Taori et al., 2023) utilized the Self-Instruct strategy to generate 52k instruction pairs using the GPT-3.5 model. However, due to hallucinations of LLMs, data quality issues exist in these synthesized instruction pairs. To address
this, various methods have been proposed, which can be categorized into filtering-based and refinement-based methods. Filtering-based methods utilize a rating method (or a specialized model) to rate every sample in the dataset and retain only a subset with high scores (Chen et al., 2024a; Ge et al., 2024; Li et al., 2024). In contrast, refinement-based methods refine the dataset by rewriting, correcting, and diversifying its samples, thereby preserving the size of the dataset without damaging its instruction coverage (Liu et al., 2024; Zhou et al., 2024b; Cai et al., 2025).

B Detailed Analysis on Profile of Involved Multilingual Human Experts and Task Allocation Strategy
The expertise of our multilingual team was pivotal in ensuring the quality of the datasets and evaluations.

Table 4: Profiles and allocations of language experts
Assigned Tasks               Experts  Years  Type
Construction of MEB Dataset  23       8.4    Outsourced
Localization of Benchmarks   20       5.2    Outsourced
Human Evaluation             7        3.9    Regular

As shown in Table 4, we recruited a group of highly skilled language experts with diverse linguistic backgrounds from the language service center of a prominent international corporation. All experts are educated professionals in linguistics, offering services such as translation, localization, editing, and technical writing for multiple languages. The allocation of experts to the three tasks in Table 4 was carefully structured, considering both task complexity and experts’ language proficiency. Among the tasks, human evaluation of dataset quality and model performance is expected to be the most challenging, requiring mastery of the target language and consistent internal discussion, which renders only regular employees eligible for this task.
To ensure broad language coverage of MIDB, outsourced experts were also involved in the tasks of training set construction and benchmark localization, with the former allocated more resources due to its relative complexity. The experts assigned to each language were distributed evenly (i.e., possibly multiple experts assigned to the same language). During these tasks, quality assurance was maintained under the rigorous procedure of the cooperating language service center. Specifically, a two-round review-rebuttal feedback loop was applied for each language, with third-party reviewers continuously inspecting samples during annotation and discussing with annotators to reach agreement.

C Supported Languages
C.1 Language Name-Code Mappings
See Table 5 for the mappings between full names of languages and their short codes. Additional attributes of each language are also included.

Table 5: Language names mapping to codes, geographical grouping, and resource level, where MENA represents Middle East and North Africa; EENA represents Eastern Europe and Northern Asia.
Language    Code  Geographical Grouping  Popularity Level
Italian     IT    Southern Europe        High Resource
Spanish     ES    Southern Europe        High Resource
French      FR    Western Europe         High Resource
German      DE    Central Europe         High Resource
Russian     RU    EENA                   High Resource
Arabic      AR    MENA                   Medium Resource
Turkish     TR    EENA                   Medium Resource
Japanese    JA    East Asia              Medium Resource
Korean      KO    East Asia              Medium Resource
Portuguese  PT    Southern Europe        Medium Resource
Czech       CS    Central Europe         Low Resource
Greek       EL    Southern Europe        Low Resource
Indonesian  ID    Southeast Asia         Low Resource
Thai        TH    Southeast Asia         Low Resource
Vietnamese  VI    Southeast Asia         Low Resource
Malay       MS    Southeast Asia         Low Resource

C.2 Reasons for Language Selection
As shown in Fig. 9, all regions of Europe are represented, given the global influence of its languages. Additionally, as shown in Table 5, we include languages with lower popularity and fewer resources, such as Greek, Czech, Vietnamese, Thai, and Malay. As a result, our dataset provides broader generalization across a diverse set of languages, thus enhancing both the language coverage of the model and the credibility of the evaluation process.

Figure 9: Languages distribution, where MENA represents Middle East and North Africa; EUNA represents Eastern Europe and Northern Asia.

D The Details about Test Set Construction
We evaluate our models on three public benchmarks assessing instruction-following, multi-turn dialogue, and culture-specific understanding: AlpacaEval (Li et al., 2023), MT-Bench (Zheng et al., 2023), and BLEnD (Myung et al., 2024). AlpacaEval is an LLM-based automatic evaluation framework, validated against 20k+ human judgments, that assesses the instruction-following ability of LLMs. MT-Bench includes 80 multi-turn questions across eight intent categories (e.g., coding, reasoning, knowledge), designed to challenge strong models. BLEnD contains 52.6k question-answer pairs to assess multilingual understanding and cross-cultural generalization. The first two benchmarks were originally designed without multilingual support. As shown in Table 4, a team of 20 professional translators was recruited to extend them to 16 languages; the team dedicated a total of 175 person-days to the development of the test set. Hence, we rename them AlpacaEval-16L and MT-Bench-16L, respectively. Notably, the BLEnD benchmark already covers five languages within the scope of our evaluation.
E Ablation Study
Ablation on backbone model. To investigate the impact of different backbone models on the performance of MIDB, we compare LLaMA3.1-8B-Instruct and LLaMA3.2-3B-Instruct, as shown in Fig. 10. Although LLaMA3.2 is reported to exhibit strong multilingual capabilities, our experimental results demonstrate that the MIDB model trained on LLaMA3.1 significantly outperforms the one trained on LLaMA3.2 in terms of multilingual data boosting ability. This observation is consistent with the scaling law, which suggests that larger models with more parameters tend to achieve superior performance. Based on these findings, we select LLaMA3.1-8B-Instruct as the base model of MIDB for our main experiments.

Figure 10: Backbone model ablation and quality control length selection of MIDB. Displayed win rates are based on data quality ratings from GPT-4o on MIDB-boosted data vs. original data.

Ablation on quality control length. To ensure optimized performance of MIDB, we further incorporated the distance-based filtering mechanism of Liu et al. (2024), which retains high-quality samples with the largest edit distances between the MIDB-boosted instruction pairs and the original ones. As illustrated in Fig. 10, selecting the top 720 data samples based on this metric results in higher data quality compared to alternative threshold settings, such as 1500 or 2301. This aligns with the hypothesis that manually revised samples with a higher edit
distance contain more enriched learning patterns, while samples with a lower edit distance typically involve revisions limited to superficial aspects, such as grammar and layout adjustments (Liu et al., 2024). Based on these results and analysis, we set the final size of our quality-controlled dataset to 720 × 16 = 11,520 samples for the training of MIDB in our main evaluation.

F Prompts
F.1 Prompt for GPT-4o as a Judge
[System] Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user’s instructions and answers the user’s question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: "[[1]]" if assistant A is better, "[[2]]" if assistant B is better, and "[[0]]" for a tie.
[User Question] {question}
[The Start of Assistant A’s Answer] {answer_a} [The End of Assistant A’s Answer]
[The Start of Assistant B’s Answer] {answer_b} [The End of Assistant B’s Answer]

F.2 Prompt for BLEnD Entity Detection
[System] Check whether an entity in the specified set is matched in a given statement. Return "Yes" or "No" only in the first line of the answer.
[Given statement] {model_answer}
[Given collection] {answer_set}

G Implementation Details
We explored different backbone models θ and different quality control lengths for MIDB.
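The edit-distance-based quality control described in Appendix E can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses `difflib`'s similarity ratio as a stand-in for true edit distance, and the sample pairs are hypothetical.

```python
from difflib import SequenceMatcher

def revision_distance(original: str, boosted: str) -> float:
    # Stand-in for edit distance: 0.0 = unchanged, approaching 1.0 = fully rewritten.
    return 1.0 - SequenceMatcher(None, original, boosted).ratio()

def quality_control(pairs, length):
    # Keep the `length` most heavily revised (original, boosted) pairs,
    # mirroring the top-720 selection described in Appendix E.
    ranked = sorted(pairs, key=lambda p: revision_distance(*p), reverse=True)
    return ranked[:length]

# Hypothetical pairs: the heavily revised sample should rank first,
# while superficial edits (spelling) and unchanged pairs rank last.
pairs = [
    ("List three fruits.", "List three fruits."),
    ("Explain gravity", "Explain, in two sentences, how gravity bends spacetime."),
    ("Name a color", "Name a colour"),
]
kept = quality_control(pairs, length=1)
# kept[0][0] == "Explain gravity"
```

A normalized Levenshtein distance would serve the same ranking purpose; the key design point is that heavier revisions are assumed to carry richer learning signal.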
For the training of MIDB, we used LLaMA3.1-8B-Instruct (Grattafiori et al., 2024), which has 8B parameters, as the backbone model, and set the quality control length to 720. To efficiently adapt the backbone LLM, we employed LoRA (Hu et al., 2021), a parameter-efficient fine-tuning technique, with a rank of 64. MIDB was trained for three epochs with a learning rate of 4×10−4 and a batch size of 32. For training the instruction-following models, we used the same settings as the official Alpaca repository (Taori et al., 2023), except for using different instruction datasets and LLaMA3-8B as the backbone. The multilingual Alpaca and Dolly datasets used during evaluation were translated by the GPT-4o-mini model; GPT-4o was otherwise used as the automatic evaluator. A beam size of one was used for decoding across all models.
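The hyperparameters above can be expressed as a configuration sketch, assuming the Hugging Face `peft` and `transformers` APIs; the LoRA alpha value and target modules are assumptions, as the paper does not report them.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA setup matching Appendix G (rank 64); lora_alpha and
# target_modules are assumptions, not reported in the paper.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Training schedule from Appendix G: 3 epochs, lr 4e-4, batch size 32.
training_args = TrainingArguments(
    output_dir="midb-lora",
    learning_rate=4e-4,
    per_device_train_batch_size=32,
    num_train_epochs=3,
)
```

These objects would be passed to a `Trainer` together with the backbone model and the boosted instruction dataset.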
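The entity-detection check in Appendix F.2 can be approximated without an LLM by a simple whole-word match, which is useful as a sanity baseline. This rule-based sketch is an illustration, not the paper's method, and the example inputs are hypothetical.

```python
import re

def entity_matched(statement, answer_set):
    # Return "Yes" if any entity in answer_set appears in the statement
    # as a whole word (case-insensitive), else "No" -- mirroring the
    # Yes/No contract of the Appendix F.2 prompt.
    text = statement.lower()
    for entity in answer_set:
        if re.search(r"\b" + re.escape(entity.lower()) + r"\b", text):
            return "Yes"
    return "No"

# Hypothetical BLEnD-style checks:
result = entity_matched("A popular breakfast in Spain is churros.", {"churros", "paella"})
# result == "Yes"
```

Unlike the LLM check, this baseline misses paraphrases and inflected forms, which is precisely why the paper delegates the judgment to GPT-4o.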
Tuning Language Models for Robust Prediction of Diverse User Behaviors

Fanjin Meng1∗, Jingtao Ding1, Jiahui Gong1, Chen Yang2, Hong Chen2, Zuojian Wang2, Haisheng Lu2, Yong Li1†
1Department of Electronic Engineering, Tsinghua University, Beijing, China
2Honor Device Co., Ltd., Shenzhen, China
liyong07@tsinghua.edu.cn

Abstract
Predicting user behavior is essential for intelligent assistant services, yet deep learning models often struggle to capture long-tailed behaviors. Large language models (LLMs), with their pretraining on vast corpora containing rich behavioral knowledge, offer promise. However, existing fine-tuning approaches tend to overfit to frequent “anchor” behaviors, reducing their ability to predict less common “tail” behaviors. In this paper, we introduce BehaviorLM, a progressive fine-tuning approach that addresses this issue. In the first stage, LLMs are fine-tuned on anchor behaviors while preserving general behavioral knowledge. In the second stage, fine-tuning uses a balanced subset of all behaviors based on sample difficulty to improve tail behavior predictions without sacrificing anchor performance. Experimental results on two real-world datasets demonstrate that BehaviorLM robustly predicts both anchor and tail behaviors and effectively leverages LLM behavioral knowledge to master tail behavior prediction with few-shot examples.

1 Introduction
Most people lead routine lives shaped by habits, but also display short-term bursts of context-driven activities. Accurately predicting users’ next behavior is essential for intelligent assistant services across web platforms and smart devices [6, 31, 26, 25]. With the accumulation of user behavior data, deep learning has replaced rule-based methods [33, 34, 16], with transformer-based models excelling at capturing sequential patterns [12, 23, 30, 26]. However, their reliance on extensive training data leads to high costs.
In this context, large language models (LLMs) [5, 35] offer a promising alternative by leveraging pretrained knowledge of human behavior, showing potential in interpreting user intent and generating accurate predictions [32, 36]. Recent works adapt LLMs for behavior prediction by converting behavioral sequences into text and fine-tuning them to predict user behaviors [15, 3, 17, 13]. Early methods [3] combined general instruction tuning with lightweight behavior fine-tuning, and subsequent research [17] enhanced LLMs with embeddings from traditional models. While expressing user behaviors as text appears intuitive [15], the long-tailed distribution of behaviors presents a significant challenge [19]: a user’s daily life is predominantly characterized by a small subset of frequent behaviors serving as “anchors”, while other behaviors occur far less commonly. Recent work [19] explores hybrid methods, combining LLMs’ profiling capabilities with traditional models to address this issue. However, capturing long-tailed distributions remains a persistent challenge for LLMs. Our empirical observations reveal that while an 8B-parameter LLM fine-tuned on behavior data outperforms untuned GPT-4 on anchor behaviors, it still underperforms on tail behaviors. This highlights a fundamental limitation: fine-tuning alone does not uniformly enhance prediction capabilities across diverse behaviors.

∗The first two authors have equal contributions. †Corresponding author.
Preprint. Under review. arXiv:2505.17682v1 [cs.CL] 23 May 2025

In this paper, we empirically investigate LLMs’ uneven performance across behavior types and make a surprising discovery: an LLM fine-tuned only on anchor behaviors can still generalize well to tail behaviors in a zero-shot manner, even better than full-data fine-tuning. Motivated by this finding,
https://arxiv.org/abs/2505.17682v1
we propose BehaviorLM, a progressive two-stage fine-tuning framework designed to better model long-tailed user behaviors. In the first stage, we fine-tune the LLM using anchor behaviors and a general conversational corpus in a multi-task setup to preserve general behavioral knowledge and prevent bias. In the second stage, we fine-tune the model on a balanced subset of all behaviors, selected by sample difficulty, to improve tail behavior predictions without compromising anchor performance. Unlike existing methods that rely on auxiliary data such as embeddings from traditional deep-learning-based models, increasing training complexity and limiting generalization, our approach leverages the intrinsic semantic knowledge of LLMs to overcome these challenges. Our main contributions are as follows:
• This paper introduces a novel progressive fine-tuning method for adapting LLMs into robust predictors of users’ daily behaviors, addressing the challenge of long-tailed user preferences.
• We propose a progressive fine-tuning method that effectively leverages LLMs’ general behavioral knowledge for accurate and efficient behavior prediction.
• Extensive experiments on two real-world datasets show BehaviorLM achieves up to 27.4%/20.4% accuracy improvements on tail behaviors, offering 100× higher sample efficiency than traditional transformer models and enabling few-shot prediction with as few as 20 examples. Ablation studies further validate the design of our method.

2 Motivation
We first give a formal definition of our research problem and then present the novel observations that motivate the methodology design of BehaviorLM.

2.1 Problem Formulation
Next behavior prediction refers to the task of predicting a user’s next behavior, y ∈ B, given a chronologically ordered sequence x = {e_1, e_2, ..., e_L} of their most recent L historical behavioral events. Each event e = (l, t, b) indicates that a specific behavioral event b ∈ B occurred at location l and time t.
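The event representation above can be sketched with a minimal data type; the (day, hour, location, behavior) layout of the serialized tuple is an assumption about how t and l are rendered in the text prompt, and the sample events are hypothetical.

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    # One behavioral event e = (l, t, b); the time t is split into
    # day and hour here, an assumption about its rendering.
    day: int
    hour: int
    location: str
    behavior: str

def serialize_history(events: List[Event]) -> str:
    # Render the history x as comma-separated tuples for a text prompt.
    return ", ".join(
        f"({e.day}, {e.hour}, {e.location}, {e.behavior})" for e in events
    )

x = [Event(1, 16, "home", "Exercise"), Event(1, 20, "home", "Gaming")]
prompt_history = serialize_history(x)
# prompt_history == "(1, 16, home, Exercise), (1, 20, home, Gaming)"
```

The prediction model M_Φ then maps such a serialized history (plus a candidate set) to the next behavior label y.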
The behavioral event b specifically refers to daily activities—such as exercise or gaming—rather than fine-grained actions like picking up a cup. The location l is a semantic indicator such as "home" or "workplace." The time t includes both date and hour information. Given a collected dataset D = {(x_i, y_i)}_{i=1,...,N}, our goal is to train a prediction model M_Φ capable of predicting the next behavioral event, i.e., y = M_Φ(x).

2.2 Investigating LLM Fine-tuning Performance on a Diverse User Behavior Dataset
In this subsection, we evaluate the behavior prediction performance of fine-tuning approaches for LLMs using a real-world dataset of 37 smartphone user behaviors (detailed in Section 4). As shown in Figure 1(a), there is a significant class-imbalance issue, with behaviors divided into anchor (occurrence > 1%) and tail categories. Despite there being only 16 anchor behaviors, they represent over 97% of the data, with an average anchor-to-tail occurrence ratio of 42.44 (0.97/16 vs. 0.03/21) and a maximum exceeding 2,500 (43.6% vs. 0.0163%).
To further analyze the semantic meaning and similarity of diverse behaviors from the LLM’s perspective, we use the pretrained Llama-8B v3.1 [7] to generate behavior embeddings and reduce them from 4096 to 2 dimensions using PCA [1]. As shown in Figure 1(b), anchor behaviors serve as semantic anchors in the latent space, with tail behaviors clustering around them. For example, watching videos, especially short ones, is highly frequent, while nearby behaviors like watching sports or
https://arxiv.org/abs/2505.17682v1
animation occur less often and appeal to fewer users. Similarly, public transportation is a more common commuting behavior than hailing a taxi. 2 (a) (b) (c) Tuning LLM on full behavior dataTuning LLM on Anchor data onlyFigure 1: (a) Empirical distribution of user behaviors in the Behavior dataset: "Anchor Behaviors" occur more than 1% of the time, while "Tail Behaviors" represent the rest. (b) Semantic embedding visualization of anchor and tail behaviors in the LLM. (c) Prediction accuracy comparison across LLM tuning methods and GPT4o for anchor and tail behaviors, with "NT" indicating no tuning. To investigate LLM behavior prediction performance, we fine-tune Llama-8B v3.1 on the previously described behavior dataset and compare average prediction accuracy for anchor and tail behaviors using an equal number of samples per behavior type (Figure 1(c)). For reference, GPT-4 (gpt-4o- 2025-03-26) achieves accuracy scores of 0.45 for anchor behaviors and 0.33 for tail behaviors. In its base form, Llama-8B v3.1 performs worse than GPT-4 for both anchor and tail behaviors. While fine- tuning improves its performance on anchor behaviors, surpassing GPT-4, it continues to underperform on tail behaviors. GPT-4’s strong overall performance, achieved without behavior-specific fine-tuning, suggests current fine-tuning methods overly favor anchor behaviors at the expense of tail behaviors, likely due to the severe class imbalance shown in Figure 1(a). Motivated by these observations, we explored whether addressing class imbalance during fine-tuning could better utilize the LLM’s inherent understanding of user behaviors. We fine-tuned the model exclusively on anchor behavior data, omitting tail behaviors. Notably, this approach preserved strong performance on anchor behaviors while enabling robust zero-shot prediction for tail behaviors, achieving an accuracy of 0.39—significantly higher than the 0.29 accuracy from traditional fine- tuning. 
This suggests that focusing on anchor behaviors allows the LLM to capture fundamental user behavior patterns and leverage its general knowledge to generalize to semantically related, unseen behaviors. This insight underpins our progressive fine-tuning approach for BehaviorLM.

3 Method
3.1 The BehaviorLM Framework
3.1.1 Behavior Prediction as Language Modeling
To frame the next behavior prediction task as a language modeling problem, we design a specific prompt and apply instruction fine-tuning to the LLM.
Prompt design. We employ a text-only prompt to convert the user behavior history x and the next behavior y (if provided) into text-based metadata suitable for instruction tuning. The prompt consists of five components:
1. Task definition: A concise description of the task, e.g., "Predict the user’s next behavior from the candidates."
2. Role-playing instruction: Defines the LLM’s role and task [27], e.g., "You are a smart mobile assistant that can infer the user’s behavior preferences based on their history."
3. Historical behavior sequence (Input): Transforms each element in x into a textual description, e.g., "(1, 16, home, Exercise), (1, 20, home, Gaming), (1, 20, home, Video)."
4. Candidate set (Input): Specifies all possible behaviors for prediction, e.g., "Weather Check, Music, Cycling, Reading."
5. Next behavior (Output): The ground-truth label y for the LLM to predict, e.g., "Gaming."

Figure 2: The BehaviorLM framework, with a progressive fine-tuning approach.

This design transforms user behavior data into a format optimized
https://arxiv.org/abs/2505.17682v1
for LLM instruction fine-tuning.

Instruction Fine-Tuning LLMs. To quickly adapt an LLM for behavior prediction tasks, we use instruction fine-tuning [22]. The user behavior dataset $\mathcal{D}$ is converted into a text-based instruction dataset $\mathcal{D}_{ins}=\{(x^p_i, y^p_i)\}_{i=1,\dots,N}$, where $x^p_i$ and $y^p_i$ are natural language representations of inputs and outputs. The LLM is optimized with the next-token prediction loss:

$$\max_{\Phi}\; \sum_{(x,y)\in \mathcal{D}_{ins}} \sum_{s=1}^{|y|} \log P_{\Phi}\left(y_s \mid x,\, y_{<s}\right) \quad (1)$$

where $\Phi$ are the LLM parameters, $y_s$ is the $s$-th token of $y$, and $y_{<s}$ represents the tokens preceding $y_s$. Moreover, we adopt a well-known parameter-efficient fine-tuning technique, i.e., LoRA [10], to fine-tune the LLM more efficiently.

3.1.2 Progressive Fine-Tuning with Behavior Data

As illustrated in Figure 1(a), user behavior data often exhibits a significant class imbalance, with many behaviors occurring infrequently, leading to a long-tailed distribution. When the task of predicting a user's next behavior is reframed as generating the corresponding language token, this imbalance persists, causing poor performance on infrequent (tail) behaviors, as shown by the results of "Llama3.1-8B-FT" in Figure 1(c). Interestingly, we observe that fine-tuning an LLM on a subset of frequently occurring behaviors (anchor behaviors) enables it to act as a surprisingly effective zero-shot predictor for less frequent (tail) behaviors, often outperforming fine-tuning on the full dataset. Inspired by this, we propose a progressive fine-tuning strategy to enhance BehaviorLM's ability to predict a diverse range of user behaviors. The proposed fine-tuning strategy consists of two progressive stages (Figure 2). In the first stage, referred to as A-Tuning, the LLM is fine-tuned using anchor behavior data to specialize in user behavior prediction while retaining its inherent rich knowledge of long-tailed behaviors.
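The five prompt components described above can be assembled with a small helper; the exact wording below is illustrative, and the function is a hypothetical sketch rather than the paper's implementation:

```python
def build_prompt(history, candidates, label=None):
    """Render one (x, y) pair as the five-part instruction prompt:
    task definition, role-playing instruction, history, candidate set,
    and (optionally) the ground-truth next behavior."""
    hist = ", ".join(f"({d}, {h}, {loc}, {b})" for d, h, loc, b in history)
    x = (
        "Predict the user's next behavior from the candidates.\n"
        "You are a smart mobile assistant that can infer the user's "
        "behavior preferences based on their history.\n"
        f"History: {hist}\n"
        f"Candidates: {', '.join(candidates)}"
    )
    return (x, label) if label is not None else x

# Toy example mirroring the history format used in the text.
x, y = build_prompt(
    [(1, 16, "home", "Exercise"), (1, 20, "home", "Gaming")],
    ["Weather Check", "Music", "Gaming"],
    label="Gaming",
)
```

Running `build_prompt` over every (history, next-behavior) pair yields the text-based instruction dataset that the next-token prediction loss in Eq. (1) is then optimized over.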
In the second stage, referred to as B-Tuning, the model is further fine-tuned on a class-balanced dataset covering all behaviors in a few-shot manner, enhancing its predictive capability for tail behaviors. The first stage helps the LLM become a specialist in predicting anchor behaviors, while the second stage transforms it into a generalist capable of predicting both anchor and tail behaviors.

3.2 Multitask-Enhanced Anchor Behavior Tuning (A-Tuning)

We divide the instruction fine-tuning data, $\mathcal{D}_{ins}$, into two subsets: $\mathcal{D}^a_{ins}$, containing labels for anchor behaviors, and $\mathcal{D}^t_{ins}$, encompassing the rest. In this stage, the LLM is fine-tuned exclusively on $\mathcal{D}^a_{ins}$. As illustrated in Figure 1, anchor behaviors reflect core patterns of a user's daily life. Therefore, fine-tuning on anchor behaviors provides the LLM with a foundational understanding of these underlying patterns, enabling accurate next-anchor behavior predictions. Moreover, since the LLM already captures general behavioral knowledge from pretraining, fine-tuning on anchor behaviors facilitates generalization to unseen but semantically related behaviors. Additionally, compared to fine-tuning on the full dataset $\mathcal{D}_{ins}$, this approach mitigates bias toward overrepresented behavior types.

To further improve generalization from anchor to tail behaviors, we propose a multi-task fine-tuning approach. Beyond learning to predict next behaviors in $\mathcal{D}^a_{ins}$, we maintain the model's general task-solving ability, which our experiments later confirmed to be helpful for behavior prediction tasks. To achieve this, we jointly fine-tune the model on an auxiliary dataset $\mathcal{C}_{ins}$, comprising daily user-ChatGPT conversations [37]. This integrates
a conversation generation task alongside the primary behavior prediction task. We control the auxiliary task's influence by adjusting the size of $\mathcal{C}_{ins}$ relative to $\mathcal{D}^a_{ins}$, using a ratio ε. Additionally, in practice, we filter out excessively long conversations from $\mathcal{C}_{ins}$ to ensure that the prompts for both tasks are comparable in length.

3.3 Difficulty-based Data Selection for Balanced Behavior Tuning (B-Tuning)

In the second stage, we reintroduce the tail behaviors $\mathcal{D}^t_{ins}$ and combine them with $\mathcal{D}^a_{ins}$ to create a class-balanced fine-tuning dataset. Since the LLM fine-tuned during the A-Tuning stage already serves as a good zero-shot predictor for tail behaviors, we believe that a small amount of instruction fine-tuning data covering all behavior types should suffice to build a robust user behavior predictor. However, to achieve this, the quality and informativeness of the selected samples play a crucial role. Specifically, we search for difficult samples along the following two dimensions to effectively fine-tune the LLM in a few-shot way.

Obtaining Mispredicted Samples. One simple way to measure sample difficulty is to score samples with an intermediate model; those with wrongly predicted labels are more difficult than those correctly predicted [4]. In our approach, we utilize the LLM fine-tuned during A-Tuning as the scorer and extract mispredicted samples from $\mathcal{D}^a_{ins}$ and $\mathcal{D}^t_{ins}$ for each behavior type.

Contrastive Data Selection. Inspired by contrastive learning [20], we identify challenging samples by evaluating the distinguishability between their predicted and ground truth labels. Specifically, we select mispredicted samples with lower distinguishability scores d, computed as follows:

$$d = \begin{cases} 0, & \text{if } \hat{y} \neq y \text{ and } \mathrm{Cate}(\hat{y}) = \mathrm{Cate}(y) \\ 1, & \text{otherwise} \end{cases} \quad (2)$$

where $\hat{y}$ denotes the predicted behavior, $y$ denotes the ground truth label, and $\mathrm{Cate}(y)$ denotes the behavior category of $y$, i.e., anchor or tail. For each type of behavior, we only select those mispredicted samples with d = 0.
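A minimal sketch of the contrastive selection rule in Eq. (2), assuming each behavior is labeled with its category (anchor/tail); the function and variable names are our own, not the paper's code:

```python
def dscore(pred, gold, cate):
    """Eq. (2): d = 0 when the model is wrong but the prediction stays
    within the gold label's category (anchor/tail); otherwise d = 1."""
    return 0 if pred != gold and cate[pred] == cate[gold] else 1

def select_hard(samples, predict, cate):
    """Keep only mispredicted samples with d == 0, i.e. the ones that
    are hardest to distinguish within their own category."""
    return [(x, y) for x, y in samples if dscore(predict(x), y, cate) == 0]

# Toy setup: preds stands in for the A-Tuned model's predictions.
cate = {"Gaming": "anchor", "Video": "anchor", "Cycling": "tail"}
preds = {"h1": "Video", "h2": "Gaming", "h3": "Cycling"}
samples = [("h1", "Gaming"), ("h2", "Gaming"), ("h3", "Gaming")]
hard = select_hard(samples, preds.get, cate)
# h1: wrong, same category -> kept; h2: correct -> dropped;
# h3: wrong, cross-category -> dropped.
```

In the full pipeline, `predict` would be the LLM after A-Tuning, and a fixed number F of retained samples per behavior type forms the class-balanced B-Tuning dataset.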
This follows the idea that anchor behaviors (tail behaviors) are much more difficult to discriminate from themselves than from their counterparts. Finally, we combine the above two dimensions to measure sample difficulty and select a fixed number F of samples for each type of behavior to create the class-balanced fine-tuning dataset§. By adopting the difficulty-based data selection strategy, we can prompt the LLM to capture more refined behavior knowledge in a sample-efficient manner and significantly enhance its prediction accuracy across tail behaviors without compromising that of anchor behaviors. We also summarize the designed progressive fine-tuning strategy for BehaviorLM in Algorithm 1.

Algorithm 1 Progressive Fine-tuning Strategy
1: Input: Datasets $\mathcal{D}^a_{ins}$, $\mathcal{D}^t_{ins}$, $\mathcal{C}_{ins}$
2: A-Tuning
3: Mix samples from $\mathcal{D}^a_{ins}$ and $\mathcal{C}_{ins}$
4: Perform LoRA-based fine-tuning on $\mathcal{D}^a_{ins} \cup \mathcal{C}_{ins}$
5: B-Tuning
6: for sample $x \in \mathcal{D}^a_{ins} \cup \mathcal{D}^t_{ins}$ do
7:   Use the LLM after A-Tuning to predict $\hat{y}(x)$
8:   if $\hat{y} \neq y$ and $\mathrm{Cate}(\hat{y}) = \mathrm{Cate}(y)$ then
9:     Retain $x$
10:  end if
11: end for
12: Perform LoRA-based fine-tuning on retained samples

§ Since F is usually small, we do not have to score each sample with the LLM to obtain mispredicted samples, which can be time-costly.

4 Experiment

4.1 Experiment Settings

4.1.1 Datasets

We evaluated our model on two real-world user behavior datasets:

App usage dataset [16]: This open-source dataset records 1,753 users' interactions across various apps over one week. Grouped
by app functions, it includes 24 behaviors reflecting key daily activities.

Behavior dataset: This large-scale dataset is sourced from mobile device§ usage logs, capturing user interactions and their corresponding behavioral intentions. After processing, it covers 37 daily behaviors across various scenarios, including learning, work, entertainment, and leisure. More details about the datasets and preprocessing can be found in Appendix A.

4.1.2 Evaluation Protocols and Metrics

To thoroughly evaluate our model, we adopt two evaluation protocols. First, for next behavior prediction [3, 17], we split users 8:1:1 into training, validation, and testing sets to assess overall performance. Second, to evaluate tail behaviors, we follow the common practice of long-tailed learning [21, 29] and construct a balanced test set with 500 samples per behavior, ensuring fair evaluation of tail behaviors. Note this test set is derived from the original Behavior and App datasets, independent of the first protocol. All experiments are repeated five times, with results averaged.

Overall, we adopt six commonly used metrics. For next behavior prediction, we adopt weighted precision (Prec_w) and weighted recall (Rec_w), accounting for the overall performance by considering the proportion of behavior categories. Note that the weighted recall (Rec_w) used here corresponds to the common recommendation metric Hit-Rate@1. For long-tailed learning, following the settings of [21, 29], we evaluate four accuracy metrics based on behavior occurrence frequencies: category-average accuracy across all behaviors (Overall), head-category accuracy for behaviors with >5.0% frequency (Head), medium-category accuracy for those between 1.0% and 5.0% frequency (Medium), and tail-category accuracy for the remaining low-frequency behaviors (Tail). These six metrics assess the model's robustness across diverse behaviors and its ability to handle imbalanced datasets effectively.
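Under this protocol, the long-tailed accuracy metrics can be sketched as follows (a hedged illustration: `bucketed_accuracy` is not from the paper, and the treatment of behaviors falling exactly on the 1% and 5% boundaries is our assumption):

```python
from collections import defaultdict

def bucketed_accuracy(pairs, train_freq):
    """Category-average accuracy (Overall) plus Head (>5%), Medium
    (1-5%) and Tail accuracy, from (gold, predicted) pairs and the
    per-behavior training frequencies."""
    per_beh = defaultdict(list)
    for gold, pred in pairs:
        per_beh[gold].append(pred == gold)
    # Per-behavior accuracy, then unweighted average over behaviors.
    acc = {b: sum(v) / len(v) for b, v in per_beh.items()}

    def bucket(b):
        f = train_freq[b]
        return "Head" if f > 0.05 else "Medium" if f >= 0.01 else "Tail"

    out = {"Overall": sum(acc.values()) / len(acc)}
    for name in ("Head", "Medium", "Tail"):
        vals = [a for b, a in acc.items() if bucket(b) == name]
        if vals:
            out[name] = sum(vals) / len(vals)
    return out

# Toy evaluation: one head, one medium, one tail behavior.
m = bucketed_accuracy(
    [("Video", "Video"), ("Video", "Video"), ("Music", "Video"),
     ("Music", "Music"), ("Cycling", "Cycling"), ("Cycling", "Video")],
    {"Video": 0.5, "Music": 0.03, "Cycling": 0.005},
)
```

Because Overall averages over behavior types rather than samples, a model that only gets frequent behaviors right scores poorly on it, which is exactly what the protocol is designed to expose.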
Detailed calculation formulas for the six metrics are provided in Appendix B.

4.1.3 Baselines

We selected the following nine representative algorithms to compare with our proposed algorithm, covering traditional methods (SASRec [12] and Bert4Rec [30]), LLM-enhanced methods (PITuning [9], LLM-ESR [19] and AlphaFuse [11]), and LLM-based methods (GPT4o [2], A-LLMRec [13], TALLRec [3] and LLaRA [17]). The details of the baselines are provided in Appendix C.

4.1.4 Implementation Details

We selected Llama3.1-8B [7] as the backbone for our experiments. Our experiments utilized the AdamW optimizer with a cosine annealing learning rate schedule. We conducted LoRA fine-tuning and parallel training acceleration using the open-source LLM instruction fine-tuning library llama-factory [38]. More details about the implementation are in Appendix D.

4.2 Overall Performance

We compare the performance of BehaviorLM with other baseline methods across two evaluation settings, and the results are summarized in Table 1. Our key observations are as follows:

• BehaviorLM consistently outperforms across all metrics on both datasets. Notably, under the long-tailed learning evaluation protocol, BehaviorLM improves Overall Accuracy by up to 18.0% on the App dataset and 19.0% on the Behavior dataset. This performance is driven by its superior ability to predict tail behaviors, surpassing the best baseline by 27.4% (App) and 20.4% (Behavior).

§ https://www.honor.com/global/

Table 1: Overall prediction performance of BehaviorLM compared with baselines on
two datasets.

App Usage dataset:

Category      Model        Prec_w   Rec_w    Overall  Head     Medium   Tail
Traditional   SASRec       0.5309   0.5759   0.2752   0.5255   0.2567   0.1733
Traditional   Bert4Rec     0.3452   0.5400   0.0962   0.5290   -        -
LLM-Enhanced  PITuning     0.5837   0.5133   0.2910   0.5029   0.3721   0.1066
LLM-Enhanced  LLM-ESR      0.5437   0.5906   0.2779   0.5615   0.2467   0.1750
LLM-Enhanced  AlphaFuse    0.5621   0.6024   0.2873   0.5706   0.2552   0.1881
LLM-Based     Llama-NT     0.5467   0.5346   0.3736   0.5335   0.3685   0.3000
LLM-Based     GPT4o        0.5872   0.5678   0.4557   0.5410   0.4642   0.4025
LLM-Based     A-LLMRec     0.5908   0.6154   0.3514   0.5815   0.3539   0.2333
LLM-Based     LLaRA        0.6074   0.6256   0.4455   0.5970   0.4468   0.3683
LLM-Based     TALLRec      0.6173   0.6306   0.4397   0.6205   0.4328   0.3580
              BehaviorLM   0.6343   0.6364   0.5379   0.6220   0.5240   0.5125
              Improv.      2.8%     0.9%     18.0%    0.2%     12.9%    27.4%

Behavior Dataset:

Category      Model        Prec_w   Rec_w    Overall  Head     Medium   Tail
Traditional   SASRec       0.4818   0.5507   0.1531   0.4083   0.2159   0.0941
Traditional   Bert4Rec     0.1908   0.4368   0.0290   0.2500   -        -
LLM-Enhanced  PITuning     0.5829   0.5057   0.2121   0.5030   0.3545   0.0522
LLM-Enhanced  LLM-ESR      0.5325   0.5750   0.1633   0.4240   0.1772   0.0977
LLM-Enhanced  AlphaFuse    0.5428   0.5857   0.1729   0.4335   0.1874   0.1093
LLM-Based     Llama-NT     0.5091   0.4226   0.2433   0.4142   0.2467   0.2040
LLM-Based     GPT4o        0.5735   0.5660   0.3561   0.4575   0.3703   0.3248
LLM-Based     A-LLMRec     0.5545   0.5856   0.2526   0.4567   0.3267   0.1601
LLM-Based     LLaRA        0.5892   0.6099   0.3458   0.5175   0.3661   0.2932
LLM-Based     TALLRec      0.5938   0.6141   0.3425   0.5175   0.3636   0.2908
              BehaviorLM   0.6145   0.6261   0.4238   0.5392   0.4353   0.3912
              Improv.      3.5%     2.0%     19.0%    4.2%     17.6%    20.4%
Figure 3: The effect of behavioral knowledge under different model size (1.5B, 8B, 70B), in terms of performance robustness across behavior types and few-shot sample numbers: (a) anchor vs. tail behavior; (b) few-shot capability (8B).

• Fine-tuning LLMs for behavior prediction requires addressing the severe long-tailed distribution. Existing LLM fine-tuning methods, such as LLaRA [17] and TALLRec [3], which directly fine-tune LLMs on all behavior types, fail to outperform non-tuned GPT4o on the Medium and Tail categories. Similarly, hybrid LLM-enhanced methods that integrate LLMs into traditional models lack robustness across both anchor and tail behaviors. BehaviorLM overcomes this challenge through a novel progressive tuning strategy, first mastering anchor behaviors and then generalizing to tail behaviors.

• Behavioral knowledge embedded in LLMs significantly enhances behavior prediction.
Traditional deep learning-based solutions, such as SASRec [12] and Bert4Rec [30], struggle to compete with LLM-based prediction models because they cannot leverage this knowledge and perform poorly when fine-tuning data is limited. Additionally, Bert4Rec performs particularly poorly under long-tailed evaluation settings, failing to make accurate predictions on less frequently occurring behaviors (Medium and Tail categories).

4.3 Investigating the Effect of Behavioral Knowledge

Our evaluation in the previous subsection highlights the significant performance improvement brought by the LLM's behavioral knowledge, as evidenced by the superior performance of LLM-based methods over traditional deep learning approaches. In this subsection, we conduct a more detailed analysis from this perspective. It is well established that the more parameters an LLM contains, the greater its capacity to store knowledge learned from its pretraining corpus. Therefore, we examine the impact of model size on the LLM's predictive capability by fine-tuning three versions of BehaviorLM with different backbones: Qwen-1.5B-v2, Llama-8B-v3.1, and Llama-70B-v3.1. Here, the proportion of auxiliary task data is controlled at 5% for all models.

Table 2: Performance drop of ablations on the Behavior dataset.

Dataset    BehaviorLM Variant       Overall    Head      Medium     Tail
Behavior   A-Tuning w/o Aux. Task   -4.71%     -1.07%    -5.99%     -4.58%
           B-Tuning w/o DDS         -6.23%     -4.34%    -7.66%     -5.62%
           B-Tuning w/o CDS         -5.50%     -3.24%    -6.83%     -5.02%
           w/o A-Tuning             -10.00%    -2.77%    -11.74%    -10.61%
Figure 4: Performance comparison between fine-tuning on all behaviors, anchor behaviors, and tail behaviors.

4.3.1 Performance robustness

We investigate whether this behavioral knowledge enables robust prediction across both anchor and tail behaviors, as well as under diverse few-shot settings.

• In Figure 3(a), we evaluate BehaviorLM on anchor and tail behaviors (Behavior dataset) across model sizes and tuning strategies. Scaling from 1.5B to 70B significantly improves accuracy for anchors (0.415 → 0.451) and tails (0.354 → 0.385), demonstrating robustness. However, without our progressive tuning (A-Tuning + B-Tuning), larger models yield only marginal gains (blue curve in Figure 3(a)), highlighting the effectiveness of our approach in leveraging LLM behavioral knowledge.

• In Figure 3(b), we vary the number of few-shot examples used in the B-Tuning stage for BehaviorLM-8B and plot the performance curve.
It is evident that with fewer than 20 examples per behavior type, BehaviorLM quickly learns to make robust predictions, demonstrating its ability to grasp behavior patterns efficiently even in low-data settings. Additionally, we conduct efficiency experiments demonstrating that the behavioral knowledge in LLMs offers over 100× greater sample efficiency than traditional transformers. Experimental details are in Appendix E.

4.4 Ablation Study

In Table 2, we evaluate the contribution of each design component to overall performance through ablation studies on the Behavior dataset. Specifically, we examine the performance drop when: (1) removing the auxiliary task in A-Tuning, (2) replacing difficulty-based data selection with uniform selection, (3) performing non-contrastive data selection by setting the distinguishability score d = 0, and (4) replacing A-Tuning with fine-tuning on all behaviors, as done in previous works [3, 17]. The ablation experiment results for the App dataset are detailed in Appendix F. Key findings are as follows: All design components enhance prediction performance. For head behaviors, B-Tuning's data selection is most impactful by improving behavior differentiation. For medium and tail behaviors, A-Tuning is essential. Without it, accuracy drops sharply, as direct fine-tuning dilutes performance on tail behaviors.

4.5 Investigating the Necessity of Anchor Behavior Tuning

We have demonstrated the importance of progressive tuning for LLM-based behavior prediction. However, the necessity of first tuning on anchor behaviors remains unclear. To address this, we replace the A-Tuning stage with a tuning stage focused solely on tail behaviors (here, we also include medium-frequency behaviors, as tail behaviors are relatively rare). As illustrated in Figure 4, this alternative approach leads to poorer prediction performance in both anchor and tail behaviors.
This suggests that, since anchor behaviors represent the core structure of human daily life, prompting the LLM to follow a curriculum from anchor behaviors to tail behaviors is more effective than the reverse. Additionally, we conducted hyperparameter experiments on the Behavior dataset to assess the impact of varying auxiliary task data proportions. Results are provided in Table 6 in Appendix G.

5 Related Works

5.1 LLM-Enhanced Behavior Prediction Models

User behavior prediction based on the most recent L events is similar to sequential recommendation. The key difference is that
behavior prediction focuses on recurring daily actions, while item recommendation emphasizes novel content. Due to limited research in behavior prediction, we draw on related work in sequential recommendation. Recent studies in recommendation explore knowledge alignment between language and recommendation domains. For example, A-LLMRec used multi-task training for embedding alignment [13], PLM-Rec applied mutual information maximization [8], PITuning introduced a population-to-individual tuning framework for cross-modal pattern extraction [9], and LLM-ESR proposed a cross-attention mechanism for sequence alignment [19]. While these methods enhance traditional models by aligning language and recommendation knowledge, they underutilize LLMs' zero-shot and few-shot generalization. Our approach addresses this gap with progressive fine-tuning, preserving general behavioral knowledge while improving prediction of infrequent long-tail behaviors without sacrificing performance on frequent ones.

5.2 LLM-Based Behavior Prediction Models

Traditional behavior prediction models often rely on embedding-based neural networks, which lack interpretability. LLMs, with their rich world knowledge and reasoning capabilities, present a more interpretable alternative for user behavior prediction. TALLRec first showcased LLMs' strong few-shot performance in recommendation tasks [3], while LLaRA improved fine-tuning through curriculum learning, progressing from simpler to more complex tasks [17]. Similarly, RecExplainer aligned LLMs with traditional recommendation models for user-specific, customizable explanations [14]. Other notable frameworks include the Chain-of-Planned Behavior [28], which captured spatial-temporal dynamics, and the transition paradigm TransRec [18], combining multiple identifiers to enhance LLM-based recommendations.
However, these approaches mainly convert behavior sequences into text for LLM training, often overlooking critical distinctions between anchor and tail behaviors, which hampers their zero-shot generalization. Our method, BehaviorLM, directly addresses the long-tailed behavior distribution by employing progressive fine-tuning. This two-stage process significantly improves the prediction of tail behaviors without compromising anchor behavior performance.

6 Conclusion

In this paper, we leverage the rich behavioral knowledge in LLMs to tackle user behavior prediction, with a focus on long-tail behavior prediction. We propose a progressive tuning approach, where the LLM first learns frequent anchor behaviors before generalizing to rarer tail behaviors. Experiments on two real-world datasets show that BehaviorLM outperforms state-of-the-art methods, achieving up to 27%/20% improvement in long-tail behavior prediction, addressing a traditionally challenging aspect of behavior modeling. Analysis highlights that addressing the long-tailed behavior distribution is essential for effectively utilizing LLMs' behavioral knowledge in fine-tuning.

References

[1] Hervé Abdi and Lynne J Williams. Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4):433–459, 2010.

[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[3] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1007–1014, 2023.

[4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning.
In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.

[5] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

[6] Hyunji Chung and Sangjin Lee. Intelligent virtual assistant knows your
life. arXiv preprint arXiv:1803.00466, 2018.

[7] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[8] Shijie Geng, Zuohui Fu, Juntao Tan, Yingqiang Ge, Gerard De Melo, and Yongfeng Zhang. Path language modeling over knowledge graphs for explainable recommendation. In Proceedings of the ACM Web Conference 2022, pages 946–955, 2022.

[9] Jiahui Gong, Jingtao Ding, Fanjin Meng, Guilong Chen, Hong Chen, Shen Zhao, Haisheng Lu, and Yong Li. A population-to-individual tuning framework for adapting pretrained lm to on-device user intent prediction. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 896–907, 2024.

[10] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.

[11] Guoqing Hu, An Zhang, Shuo Liu, Zhibo Cai, Xun Yang, and Xiang Wang. Alphafuse: Learn id embeddings for sequential recommendation in null space of language embeddings. arXiv preprint arXiv:2504.19218, 2025.

[12] Wang-Cheng Kang and Julian McAuley. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pages 197–206. IEEE, 2018.

[13] Sein Kim, Hongseok Kang, Seungyoon Choi, Donghyun Kim, Minchul Yang, and Chanyoung Park. Large language models meet collaborative filtering: an efficient all-round llm-based recommender system. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1395–1406, 2024.

[14] Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, and Xing Xie. Recexplainer: Aligning large language models for explaining recommendation models.
In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1530–1541, 2024.

[15] Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, and Julian McAuley. Text is all you need: Learning language representations for sequential recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1258–1267, 2023.

[16] Tong Li, Tong Xia, Huandong Wang, Zhen Tu, Sasu Tarkoma, Zhu Han, and Pan Hui. Smartphone app usage analysis: datasets, methods, and applications. IEEE Communications Surveys & Tutorials, 24(2):937–966, 2022.

[17] Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang, and Xiangnan He. Llara: Large language-recommendation assistant. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1785–1795, 2024.

[18] Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. Bridging items and language: A transition paradigm for large language model-based recommendation. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1816–1826, 2024.

[19] Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, and Xiangyu Zhao. Llm-esr: Large language models enhancement for long-tailed sequential recommendation. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

[20] Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or
contrastive. IEEE Transactions on Knowledge and Data Engineering, 35(1):857–876, 2021.

[21] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537–2546, 2019.

[22] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

[23] Chaoyi Pu, Zhiang Wu, Hui Chen, Kai Xu, and Jie Cao. A sequential recommendation for mobile apps: What will user click next app? In 2018 IEEE International Conference on Web Services (ICWS), pages 243–248, 2018. doi: 10.1109/ICWS.2018.00038.

[24] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.

[25] Barbara Rychalska, Szymon Lukasik, and Jacek Dabrowski. Synerise monad: A foundation model for behavioral event data. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3344–3348, 2023.

[26] Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lilleholt, Anna Rogers, Ingo Zettler, and Sune Lehmann. Using sequences of life-events to predict human lives. Nature Computational Science, pages 1–14, 2023.

[27] Murray Shanahan, Kyle McDonell, and Laria Reynolds. Role play with large language models. Nature, 623(7987):493–498, 2023.

[28] Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, and Yong Li. Beyond imitation: Generating human mobility from context-aware reasoning with large language models. arXiv preprint arXiv:2402.09836, 2024.
[29] Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, and Yu-Feng Li. Long-tail learning with foundation model: Heavy fine-tuning hurts. In Forty-first International Conference on Machine Learning, 2024.

[30] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1441–1450, 2019.

[31] Amrita S Tulshan and Sudhir Namdeorao Dhage. Survey on virtual assistant: Google assistant, siri, cortana, alexa. In Advances in Signal Processing and Intelligent Recognition Systems: 4th International Symposium SIRS 2018, Bangalore, India, September 19–22, 2018, Revised Selected Papers 4, pages 190–201. Springer, 2019.

[32] Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, et al. A survey on large language models for recommendation. World Wide Web, 27(5):60, 2024.

[33] Hongtao Zhang and Lingcheng Dai. Mobility prediction: A survey on state-of-the-art schemes and future applications. IEEE Access, 7:802–822, 2018.

[34] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR), 52(1):1–38, 2019.

[35] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of
https://arxiv.org/abs/2505.17682v1
large language models. arXiv preprint arXiv:2303.18223, 2023.
[36] Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, et al. Recommender systems in the era of large language models (LLMs). IEEE Transactions on Knowledge and Data Engineering, 2024.
[37] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023.
[38] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand, 2024. Association for Computational Linguistics. URL http://arxiv.org/abs/2403.13372.

A Details of Used Datasets

Behavior dataset: This large-scale dataset is derived from mobile phone usage logs. When users interact with their mobile phones, various types of logs are generated, desensitized, and reported with user consent. We select 37 daily behaviors that can be reliably extracted from raw logs and that cover broad life scenarios, including activities related to learning, work, entertainment, leisure, etc. The dataset spans from March 1, 2024, to April 29, 2024, and consists of over 50 million behavior events from 24,133 anonymous users. We preprocess the dataset and construct samples in the format "(last-20 events, next event)". Since our target is fine-tuning the LLM rather than training from scratch, we randomly select a subset (200,000 samples) of the data for experiments.

App usage dataset [16]: This open-source dataset captures 1,753 users' interactions across various apps within one week.
Given the large number of apps and the overlap in functionality among many of them, we processed the dataset further by merging apps with similar purposes; for example, Douyin and Kuaishou were grouped into a "watching videos" category. After preprocessing similar to that of the Behavior dataset, this dataset consists of 71 behavior events belonging to 24 categories.

Table 3: Detailed information of the Behavior Dataset

Behavior                        Proportion
Watching video clip             0.435981
Shopping online                 0.112217
Editing video                   0.05937
Reading books                   0.055293
Gaming                          0.044202
Watching video                  0.041266
Reading news                    0.036046
Taking photos                   0.035068
Taking subway                   0.031969
Payment                         0.029848
Listening Music                 0.02365
Checking Weather                0.016963
Studying for course             0.01419
Ordering takeout                0.012722
Listening podcast               0.012396
Conducting job recruitment      0.010928
Taking notes                    0.003915
Checking calendar               0.003591
Hailing taxi                    0.003262
Picking up the express delivery 0.00261
Watching Match                  0.00213
Exercising                      0.002003
Singing                         0.001957
Watching Live-streaming         0.001794
Cycling                         0.001794
Meeting online                  0.000816
Studying for driving test       0.000652
Translating                     0.000652
Renting house                   0.000652
Watching animation              0.000489
Booking ticket                  0.000489
Commuting check-in              0.000323
Cooking                         0.000323
Drinking coffee                 0.000283
Renting car                     0.000163

The characteristics of both datasets are presented in Table 4.

Table 4: Statistics of the datasets

Dataset            # User   # Behavior Type   # Sample
Behavior Dataset   24,133   37                200,000
App Usage Dataset  1,753    24                50,000

B Details of Used Metrics

The formula for Prec_w:

$$\mathrm{Prec}_w = \frac{\sum_{c\in C}(TP_c+FP_c)\cdot \mathrm{Precision}_c}{\sum_{c\in C}(TP_c+FP_c)} \quad (3)$$

The
formula for Rec_w:

$$\mathrm{Rec}_w = \frac{\sum_{c\in C}(TP_c+FN_c)\cdot \mathrm{Recall}_c}{\sum_{c\in C}(TP_c+FN_c)} \quad (4)$$

The formula for Overall:

$$\mathrm{Accuracy} = \frac{1}{|C|}\sum_{c\in C}\frac{TP_c}{TP_c+FN_c} \quad (5)$$

The formula for Head:

$$\mathrm{Accuracy} = \frac{1}{|C_h|}\sum_{c\in C_h}\frac{TP_c}{TP_c+FN_c} \quad (6)$$

The formula for Medium:

$$\mathrm{Accuracy} = \frac{1}{|C_m|}\sum_{c\in C_m}\frac{TP_c}{TP_c+FN_c} \quad (7)$$

The formula for Tail:

$$\mathrm{Accuracy} = \frac{1}{|C_t|}\sum_{c\in C_t}\frac{TP_c}{TP_c+FN_c} \quad (8)$$

Here |C| denotes the total number of classes, and |C_h|, |C_m|, and |C_t| denote the numbers of classes belonging to the head, medium, and tail categories, respectively. True Positives (TP_c) denotes the number of samples correctly classified as class c, False Positives (FP_c) the number of samples incorrectly classified as class c, and False Negatives (FN_c) the number of samples of class c incorrectly classified as other classes. Precision_c and Recall_c respectively refer to the precision and recall of class c.

C Baselines

SASRec [12] uses self-attention mechanisms to model user behavior sequences. It captures both short-term and long-term dependencies in sequential data, allowing it to focus on the most relevant items in the user's interaction history for recommendation.

BERT4Rec [30] models user behavior sequences using deep bidirectional self-attention. By jointly considering the context before and after an item, it predicts the randomly masked items within the sequence, achieving excellent predictive performance.

LLaRa [17] introduces a hybrid prompting method that integrates both world knowledge and behavioral patterns into item representations. It conducts curriculum prompt tuning to achieve modality alignment.

A-LLMRec [13] bridges the knowledge between the language and recommendation domains by training an alignment network with a variety of tasks, targeting both warm and cold-start scenarios.
PITuning [9] loads pre-trained Large Language Model (LLM) parameters to acquire textual knowledge and then designs an adaptive unlearning strategy to address the long-tail preference issue, achieving excellent performance in user behavior prediction.

LLM-ESR [19] leverages the semantic information from LLMs and proposes a dual-view modeling framework enhanced through embedding techniques to better capture the nuances of long-tail items, demonstrating strong performance across multiple datasets.

TALLRec [3] is one of the earlier methods to integrate Large Language Models (LLMs) with the recommendation domain. It employs a two-stage tuning process, Alpaca Tuning and Rec-Tuning, to fine-tune LLMs for recommendation, enabling effective and efficient adaptation of LLMs with only a small number of tuning samples.

AlphaFuse [11] is a simple yet effective language-guided learning strategy that addresses long-tail intent modeling by learning ID embeddings within the null space of language embeddings.

For comparison, we also consider two LLMs that are not fine-tuned on behavioral data, i.e., GPT4o [2] and LLama3.1-8B [7].

D Implementation Details

The hardware used in our experiments consists of 8 NVIDIA A100 40G GPUs. We selected LLama3.1-8B [7] as the backbone for our experiments. To ensure flexibility in model testing, we designed three distinct instruction formats, which are randomly sampled during both training and testing. Our experiments used the AdamW optimizer with a cosine annealing learning rate schedule, setting the warm-up proportion to 0.1. The maximum learning rate for cosine annealing was set to 1e-4, while both the minimum
and initial warm-up learning rates were set to 1e-6. We conducted LoRA fine-tuning and parallel training acceleration using the open-source LLM instruction fine-tuning library llama-factory [38]. All experiments were performed with a maximum of 8 training epochs and a batch size of 8, selecting the best-performing model on the validation set for testing. The detailed formats of the three instruction types are as follows:

Instruct1: This user has done behaviors [HistoryHere] in the previous. Day of the week, the hour, and the place of the next behavior are [next-intent-info], respectively. Choose the answer from the following behavior candidate set: [CansHere]. The answer is [Output].

Instruct2: The user's historical behavior information sequence is: [HistoryHere]. Day of the week, the hour, and the place of the next behavior are [next-intent-info], respectively. Given the following behavior candidate set: [CansHere], recommend one intention for this user to do next. The intent you recommend is [Output].

Instruct3: The behavior history of this user is: [HistoryHere]. Day of the week, the hour, and the place of the next behavior are [next-intent-info], respectively. Recommend a next intention for this user to do from the following behavior candidate set: [CansHere]. The recommendation is [Output].

E Sample Efficiency

One significant advantage of leveraging the LLM's general behavioral knowledge is that it reduces the need for fine-tuning on large-scale user behavior data, demonstrating strong sample efficiency. To validate this, we compare BehaviorLM-1.5B and BehaviorLM-8B with another transformer-based model trained from scratch, using the objective function from SASRec [12] and the model architecture from GPT2 [24] (the small version with 12 layers and 768 latent dimensions). Since the original Behavior dataset contains over 50 million events, we vary the sample size to observe its impact on prediction performance.
As shown in Figure 5, BehaviorLM demonstrates a significant improvement in sample efficiency. The transformer-based model trained from scratch only outperforms BehaviorLM when trained on nearly all 50 million samples, while BehaviorLM is fine-tuned on just 200,000 samples, over two orders of magnitude fewer. This highlights the remarkable sample-efficiency advantage provided by the LLM's preexisting behavioral knowledge. Using 8 A100 (40G) GPUs, training BehaviorLM-8B on the Behavior Dataset with 200k samples takes about 6 hours, while on the App Usage dataset (50,000 samples) it takes no more than 2 hours. In contrast, training a SASRec model on a 50M-sample dataset requires approximately 72 hours. Overall, although BehaviorLM trains more slowly per sample, its much higher sample efficiency means it needs far less total time to reach good prediction accuracy.

Figure 5: Comparison between BehaviorLM and a non-LLM transformer-based method under different sizes of training data.

Table 5: Performance drop of ablations on the App Usage and Behavior datasets.

Dataset    BehaviorLM Variant       Overall   Head     Medium    Tail
App Usage  A-Tuning w/o Aux. Task   -3.83%    -2.50%   -2.17%    -6.25%
App Usage  B-Tuning w/o DDS         -7.80%    -5.80%   -6.06%    -10.42%
App Usage  B-Tuning w/o CDS         -3.87%    -1.29%   -1.91%    -8.47%
App Usage  w/o A-Tuning             -7.95%    -0.97%   -5.64%    -14.40%
Behavior   A-Tuning w/o Aux. Task   -4.71%    -1.07%   -5.99%    -4.58%
Behavior   B-Tuning w/o DDS         -6.23%    -4.34%   -7.66%    -5.62%
Behavior   B-Tuning w/o CDS         -5.50%    -3.24%   -6.83%    -5.02%
Behavior   w/o A-Tuning             -10.00%   -2.77%   -11.74%   -10.61%
arXiv:2505.17691v1 [cs.CL] 23 May 2025

ELSPR: Evaluator LLM Training Data Self-Purification on Non-Transitive Preferences via Tournament Graph Reconstruction

Yan Yu1, Yilun Liu2, Minggui He2, Shimin Tao2, Weibin Meng2, Xinhua Yang2, Li Zhang2, Hongxia Ma2, Chang Su2, Hao Yang2, Fuliang Li1
1Northeastern University, Shenyang, China
2Huawei, Beijing, China
liuyilun3@huawei.com, lifuliang@cse.neu.edu.cn

Abstract

Large language models (LLMs) are widely used as evaluators for open-ended tasks. While previous research has emphasized biases in LLM evaluations, the issue of non-transitive preferences in pairwise comparisons, where evaluators prefer A over B and B over C, but C over A, remains unresolved. Our results suggest that low-quality training data may reduce the transitivity of preferences generated by an Evaluator LLM. To address this, we propose a graph-theoretic framework that analyzes and mitigates this problem by modeling pairwise preferences as tournament graphs. We quantify non-transitivity and introduce directed-graph structural entropy to measure the overall clarity of preferences. Our analysis reveals significant non-transitivity in advanced Evaluator LLMs (with Qwen2.5-Max exhibiting 67.96%), as well as high entropy values (0.8095 for Qwen2.5-Max), reflecting low overall clarity of preferences. To address this issue, we design a filtering strategy, ELSPR, that eliminates preference data which induces non-transitivity, retaining only consistent and transitive preference data for model fine-tuning. Experiments demonstrate that models fine-tuned with the filtered data reduce non-transitivity by 13.78% (from 64.28% to 50.50%), decrease structural entropy by 0.0879 (from 0.8113 to 0.7234), and align more closely with human evaluators (the human agreement rate improves by 0.6% and the Spearman correlation increases by 0.01).

1 Introduction

With the rapid advancement of large language model (LLM; OpenAI et al.
2024; Qwen Team 2025; Meta AI 2024a) technology, an increasing number of models have become available, making it essential to evaluate their capabilities in order to select the most suitable one. However, existing benchmarks such as MMLU (Hendrycks et al., 2020) and HELM (Liang et al., 2022) have been shown to be insufficient for capturing performance differences in open-ended tasks (Zheng et al., 2024).

Figure 1: Non-transitive preferences in LLM-as-a-Judge pairwise comparisons (e.g., A≻B, B≻C, C≻A).

Given the lack of definitive answers in open-ended tasks, human expert evaluation is considered the gold standard. Yet, due to its high cost and limited scalability, the mainstream approach has shifted toward using LLM-as-a-Judge for efficient evaluation. Recent studies have demonstrated that powerful LLMs, such as GPT-4, can achieve high consistency with human judgments (Zheng et al., 2024; Dubois et al., 2023). Among various approaches, pairwise comparison has emerged as the dominant paradigm, owing to its strong
alignment with human preferences (Samvelyan et al., 2024; Chen et al., 2024a; Li et al., 2023; Chiang et al., 2024; Liu et al., 2024; Liusie et al., 2024). Despite these promising results, LLM-as-a-Judge still suffers from various biases, such as position, verbosity, conformity, and self-reinforcement bias, which compromise evaluation reliability (Xu et al., 2024b; Zheng et al., 2024; Koo et al., 2024; Ye et al., 2025). A particularly underexplored yet critical issue is the non-transitivity of preferences generated by Evaluator LLMs, e.g., A≻B, B≻C, C≻A, where ≻ denotes "is preferred to", i.e., A≻B means A is preferred over B. An illustration is provided in Figure 1. The first study to identify this issue (Xu et al., 2025) analyzed inconsistencies in evaluation results when GPT-4 acted as the evaluator and performed pairwise comparisons using different baseline models. However, that study did not propose a method to mitigate this non-transitivity. Alarmingly, the issue is also inherited by specialized evaluators such as JudgeLM, PandaLM, and Auto-J (Zhu et al., 2025; Wang et al., 2024b; Li et al., 2024a), which are trained via knowledge distillation from foundation models. The distillation process may inadvertently propagate non-transitive judgment patterns to downstream evaluators. We hypothesize that low-quality training data may impair the transitivity of the preferences generated by the Evaluator LLM. Many pairwise comparisons, particularly those from open-ended tasks, lack clear ground truth due to their subjective nature. Even human annotators often disagree, with inter-annotator agreement as low as 65.7% (Li et al., 2023), indicating that such tasks inherently lack a universally accepted preference order. Consequently, training on ambiguous or low-quality comparisons can introduce or exacerbate non-transitive behavior in evaluators. This highlights the necessity of filtering unreliable data prior to model training.
In this work, we propose ELSPR, a simple yet effective method based on graph theory to assess and mitigate preference non-transitivity in Evaluator LLMs. We model multi-response pairwise comparisons as tournament graphs and filter out preference data that induce non-transitivity in the overall preferences. To further quantify the overall clarity of preference results, we introduce the use of two-dimensional structural entropy on directed graphs. Our analysis reveals significant non-transitivity in advanced Evaluator LLMs (with Qwen2.5-Max exhibiting 67.96%), as well as high entropy values (0.8095 for Qwen2.5-Max). Experimental results show that fine-tuning models on the filtered data leads to a 13.78% reduction in preference non-transitivity, a 0.0879 decrease in structural entropy, and improved alignment with human judgments, reflected in a 0.6% increase in human agreement rate and a 0.01 gain in Spearman correlation. Specifically, our contributions are:

• We introduce a graph-theoretic approach to systematically investigate the non-transitivity issue in preference data generated by Evaluator LLMs, revealing significant non-transitivity in the preference data produced by advanced LLMs.

• We innovatively use the two-dimensional structural entropy of the directed graph to further evaluate the overall clarity of the preferences generated by large-scale LLMs.

• We propose an effective filtering methodology that leverages tournament graphs to identify and remove preference data that induces non-transitivity, ensuring the retention of only consistent and transitive preferences. This
enhances the reliability of Evaluator LLMs. In addition, we release our code and datasets to facilitate further research into this issue (https://github.com/yy0525/ELSPR).

2 Related Work

2.1 LLM-as-a-Judge and Its Non-Transitive Preferences

The prevailing LLM-as-a-Judge systems predominantly operate through pairwise comparisons, as exemplified by frameworks such as VicunaEval, AlpacaEval, and Arena-Hard (Chiang et al., 2023; Li et al., 2023, 2024b). Specifically, these systems collect responses generated by different LLMs to a given set of questions and use advanced LLMs as judges to evaluate the preference order between two responses, assessing the relative strengths and weaknesses of different LLMs. Recent studies highlight that even advanced models like GPT-4 exhibit significant preference non-transitivity in LLM-as-a-Judge systems, which severely undermines the reliability of preference outcomes. Despite prior efforts exploring six prompt templates to alleviate preference non-transitivity, improvements have been limited (Xu et al., 2025). To address this critical issue, we investigate the underlying conditions that give rise to non-transitivity in LLM-as-a-Judge systems and propose a novel method to mitigate such inconsistencies.

2.2 Data Selection for LLM Fine-Tuning

Prior works (Chen et al., 2024b; Ge et al., 2024; Li et al., 2024c) have highlighted the importance of filtering high-quality subsets from original datasets.

Figure 2: ELSPR (Evaluator LLM training data Self-purification on non-transitive Preferences via tournament graph Reconstruction) framework overview. The collected preference data are modeled as a directed graph. Strongly Connected Components (SCCs) are identified to analyze non-transitive structures, and each SCC is reconstructed into a DAG to filter out the specific preference data responsible for the overall non-transitivity.

Fine-tuning with such subsets not only enhances model performance but also reduces computational costs. However, this strategy remains underexplored in the context of the LLM-as-a-Judge task. Existing studies have primarily focused on scaling model size or improving prompt engineering, overlooking the potential benefits of training-data quality. Our goal is to incorporate data-filtering techniques into the LLM-as-a-Judge setting. We find that fine-tuning the LLM on preference data from which non-transitivity-inducing comparisons have been filtered out can significantly reduce preference non-transitivity and improve overall preference clarity, even outperforming the teacher model.

3 Methodology

In this section, we present a graph-theoretic formulation of the LLM-as-a-Judge task using tournament graphs to analyze pairwise preferences from Evaluator LLMs. We then introduce quality analysis
criteria for training data, including non-transitivity detection via Strongly Connected Components (SCCs) and preference clarity analysis via graph entropy. Finally, we propose a data filtering method to mitigate non-transitive preferences. Figure 2 outlines the overall framework, with implementation details provided in the following subsections.

3.1 Background

Typical LLM evaluation systems rely on the transitivity assumption: if the evaluation results are A≻B and B≻C, then it is assumed that A≻C. However, evaluations in the real world often reveal non-transitive preference cycles (A≻B≻C≻A), which render traditional ranking methods ineffective (Xu et al., 2025). To overcome this fundamental limitation, we propose a tournament-based evaluation framework grounded in directed graphs, which facilitates the application of graph-theoretic methods for a deeper analysis of both the degree of non-transitivity in preferences and their overall clarity.

Modeling preferences generated by an LLM as a tournament graph. For each question q_i ∈ Q, given the response set A_i = {a_1, a_2, ..., a_n} from n LLMs, we construct a tournament graph G_i = (V_i, E_i) as follows. Vertices: the set V_i = {v_1, v_2, ..., v_n} corresponds to the responses A_i = {a_1, a_2, ..., a_n}. Edges: defined by the preferences generated by the LLM:

$$E_i = \bigcup_{\substack{1 \le j,k \le n \\ j \neq k}} \begin{cases} v_k \to v_j, & \text{if } J(a_j, a_k) = \text{`win' and } J(a_k, a_j) = \text{`lose'} \\ v_j \to v_k, & \text{if } J(a_j, a_k) = \text{`lose' and } J(a_k, a_j) = \text{`win'} \\ v_j \leftrightarrow v_k, & \text{otherwise} \end{cases} \quad (1)$$

Here, J(a_j, a_k) ∈ {'win', 'lose'} denotes the pairwise comparison result between answers a_j and a_k, where a_j appears before a_k in the prompt. Considering the common position bias in Evaluator LLMs (Wang et al., 2024a), we apply position swapping by comparing each response pair in both orders, J(a_j, a_k) and J(a_k, a_j), which ensures a more robust and balanced assessment (Zheng et al., 2024).
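Assuming a `judge` callable that returns 'win' or 'lose' for the response listed first in the prompt (a hypothetical stand-in for the Evaluator LLM, not part of the paper), the edge construction of Eq. (1) with position swapping can be sketched as:

```python
from itertools import combinations

def build_tournament(responses, judge):
    """Edge construction of Eq. (1).  `judge(a, b)` returns 'win' or 'lose'
    for the response placed first in the prompt; each pair is compared in
    both orders to counter position bias.  An edge (k, j) means v_k -> v_j,
    i.e. response j beat response k; order-inconsistent pairs receive a
    bidirectional 'tie' edge."""
    edges = set()
    for j, k in combinations(range(len(responses)), 2):
        fwd = judge(responses[j], responses[k])    # J(a_j, a_k)
        bwd = judge(responses[k], responses[j])    # J(a_k, a_j)
        if fwd == "win" and bwd == "lose":
            edges.add((k, j))                      # a_j preferred in both orders
        elif fwd == "lose" and bwd == "win":
            edges.add((j, k))                      # a_k preferred in both orders
        else:                                      # disagreement across orders
            edges.add((j, k))
            edges.add((k, j))                      # v_j <-> v_k : tie
    return edges

# toy judge: the lexicographically larger string "wins" in either order
toy = lambda a, b: "win" if a > b else "lose"
print(build_tournament(["A", "B"], toy))   # {(0, 1)}: "B" wins both orders
```

Pointing edges from loser to winner makes a vertex's in-degree its global 'win' score, which the reconstruction step later exploits.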
If the preferences differ across orders, indicating possible position bias, a bidirectional edge is added between the corresponding vertices to represent a 'tie'.

3.2 Quality Analysis Criteria for LLM-as-a-Judge Training Data

Based on the directed-graph representation of the tournament constructed above (Section 3.1), we propose quality analysis criteria for the training data of an Evaluator LLM: SCC analysis measures the degree of preference non-transitivity, and the two-dimensional structural entropy of the directed graph evaluates the overall clarity of preferences.

3.2.1 Analysis of Non-Transitivity in Preferences Based on SCCs

The SCCs of a directed graph are maximal subgraphs in which, for any pair of vertices, there exists a directed path from one to the other and vice versa. This mutual-reachability property aligns with the notion of preference cycles, which directly reflect non-transitivity in the preferences of the Evaluator LLM. A directed path from vertex v_i to v_j can be interpreted as a preference chain a_j ≻ · ≻ a_i, while a reverse path from v_j to v_i corresponds to a_i ≻ · ≻ a_j. The presence of both paths implies conflicting preferences and thus a violation of transitivity. We use the Tarjan algorithm to identify the SCCs of a directed graph. The Tarjan algorithm (Tarjan, 1972) is an efficient method for finding SCCs, based on depth-first search (DFS), with a time complexity of O(N + M), where N is the number of vertices and M is the number of edges.
Furthermore, since non-transitive relations occur between three or more vertices, we focus on components with a size greater than 2. Additionally, we filter out SCCs in which a bidirectional edge exists between every pair of vertices, eliminating cases where the relationship between all vertices is a 'tie'. For example, in the case of (A=B), (B=C), (C=A), although (A, B, C) forms an SCC, the preference relation is transitive. Denoting the set of all SCCs in the directed graph G as SCCs(G), the SCCs containing vertices with non-transitive relationships are:

$$S_{\text{n-t}} = \left\{\, S \in \mathrm{SCCs}(G) \;\middle|\; |S| > 2 \wedge \exists\, v_j, v_k \in S,\ (v_j \leftrightarrow v_k) \notin E(S) \,\right\} \quad (2)$$

To quantify the degree of non-transitivity in the preferences generated by the Evaluator LLM, we compute the ratio of non-transitive vertices to the total number of vertices across all directed graphs:

$$\rho_{\text{non-trans}} = \frac{\sum_{q_i \in Q} |S_{\text{n-t}}(G_i)|}{\sum_{q_i \in Q} |V_i|} \quad (3)$$

where the numerator is the total number of vertices belonging to non-transitive SCCs across all questions Q, and the denominator is the total number of vertices across all graphs. A higher value of ρ_non-trans indicates a greater prevalence of non-transitive preferences.

3.2.2 Analysis of Preference Clarity Based on the Entropy of Directed Graph Structures

Shannon entropy (Shannon, 1948) is a fundamental metric in information theory used to quantify the uncertainty associated with a random variable. It represents the expected level of uncertainty in a system: the greater the uncertainty, the higher the entropy. It is defined as:

$$H(X) = -\sum_{x} p(x) \log p(x) \quad (4)$$

where p(x) denotes the probability of the event x. Structural entropy (Li and Pan, 2016) extends Shannon entropy to directed graphs, providing a measure of system uncertainty and the complexity of relationships within the graph. These concepts have been widely applied across various domains (Zou et al., 2024; Duan et al., 2024; Peng et al., 2024; Hou et al., 2025).
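The SCC-based non-transitivity measure of Eqs. (2)–(3) can be sketched as follows, using a recursive Tarjan routine; the (num_vertices, edge_list) graph encoding is our own convention, not the paper's:

```python
def tarjan_sccs(n, edges):
    """Tarjan's DFS-based algorithm, O(N + M): return the SCCs of a
    directed graph on vertices 0..n-1 (recursive; fine for small graphs)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    index = [None] * n
    low = [0] * n
    on_stack = [False] * n
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in adj[v]:
            if index[w] is None:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:              # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in range(n):
        if index[v] is None:
            strongconnect(v)
    return sccs

def non_transitive_ratio(graphs):
    """rho_non-trans of Eq. (3): fraction of vertices lying in non-transitive
    SCCs (size > 2, excluding all-tie components per Eq. (2)) across all
    graphs.  Each graph is a (num_vertices, edge_list) pair."""
    bad = total = 0
    for n, edges in graphs:
        total += n
        es = set(edges)
        for scc in tarjan_sccs(n, edges):
            if len(scc) <= 2:
                continue
            all_tie = all((u, v) in es and (v, u) in es
                          for u in scc for v in scc if u != v)
            if not all_tie:                 # a genuine preference cycle
                bad += len(scc)
    return bad / total

# a 3-cycle among vertices 0-2 plus a vertex that loses every comparison:
# 3 of 4 vertices belong to a non-transitive SCC
print(non_transitive_ratio([(4, [(1, 0), (2, 1), (0, 2), (3, 0), (3, 1), (3, 2)])]))   # 0.75
```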
We introduce the two-dimensional structural entropy of a directed graph to analyze the overall clarity of preferences of an Evaluator LLM.

Two-Dimensional Structural Entropy of a Directed Graph: The core idea of two-dimensional structural entropy is to partition the graph's vertices into distinct communities and compute the complexity of both intra-community and inter-community relationships. This framework effectively quantifies the overall structural uncertainty and the complexity of relationships within the graph. In this work, we innovatively adopt SCCs as the fundamental communities, motivated by properties from directed-graph theory: within an SCC, cyclic and non-transitive preference relations exist, while between SCCs the preference relations are strictly transitive. This distinction allows us to quantitatively assess the overall preference clarity by computing the two-dimensional structural entropy of the tournament graphs constructed by the Evaluator LLM. It is important to note that SCCs composed of a single vertex inherently exhibit strict transitivity. The preference relations among such single-vertex components form a linear order that does not increase the complexity of the preference results. For example, if the original graph contains a preference A≻B, the overall preference is clear. When new preferences B≻C and A≻C are added, the resulting linear order A≻B≻C still remains clear and consistent, since the vertices A, B,
and C each form their own singleton SCC. This clarity reflects the inherent transitivity of singleton SCCs. Based on this observation, we exclude interactions among pure singleton SCCs when computing structural entropy. Instead, we retain only two types of interactions: (1) between singleton SCCs and multi-vertex SCCs, and (2) between multi-vertex SCCs. This selective inclusion ensures a more accurate and meaningful quantification of global structural complexity. Consequently, the closer the priority relations among vertices are to a linear order, the lower the resulting entropy.

The two-dimensional structural entropy of a directed graph based on SCC decomposition is defined as follows. Given a directed graph G = (V, E), let n = |V| denote the number of vertices, and let SCCs(G) = {SCC_1, ..., SCC_L} be its SCC decomposition. For a vertex v ∈ V, d_in(v) and d_out(v) denote the in-degree and out-degree of v in G, respectively. For any SCC_j ∈ SCCs(G), define its volume as the total in-degree of its vertices:

$$v(\mathrm{SCC}_j) = \sum_{v \in \mathrm{SCC}_j} d_{\mathrm{in}}(v) \quad (5)$$

Accordingly, v(G) denotes the total in-degree of the entire directed graph. The two-dimensional structural entropy is computed as:

$$H^2(G) = -\sum_{j=1}^{L} \frac{g_j}{v(G)} \log_2 \frac{v(\mathrm{SCC}_j)}{v(G)} \;-\; \sum_{j=1}^{L} \frac{v(\mathrm{SCC}_j)}{v(G)} \sum_{v \in \mathrm{SCC}_j} \frac{d_{\mathrm{in}}(v)}{v(\mathrm{SCC}_j)} \log_2 \frac{d_{\mathrm{in}}(v)}{v(\mathrm{SCC}_j)} \quad (6)$$

where g_j denotes the total number of incoming edges to SCC_j from external vertices; specifically, it counts the edges between singleton and multi-vertex SCCs, as well as those among multi-vertex SCCs, and serves as a measure of the external influence on SCC_j. The normalized structural entropy of a graph G is defined as:

$$\tau(G) = \frac{H^2(G)}{\log n} \quad (7)$$

where H^2(G) is the two-dimensional structural entropy of G and n is the number of vertices in the graph.
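Under our reading of Eqs. (5)–(7), including the exclusion of edges between pure singleton SCCs from g_j (that exclusion is our interpretation of the selective-inclusion rule, not a verbatim specification), the normalized entropy can be sketched as follows; `sccs` is assumed precomputed, e.g. by Tarjan's algorithm:

```python
import math

def structural_entropy(n, edges, sccs):
    """Normalised two-dimensional structural entropy tau(G) = H^2(G)/log2(n)
    per Eqs. (5)-(7).  `sccs` is a partition of vertices 0..n-1.  Edges
    running purely between singleton SCCs are excluded from g_j (our
    reading of the selective-inclusion rule)."""
    comp = {v: j for j, scc in enumerate(sccs) for v in scc}
    d_in = [0] * n
    for _, v in edges:
        d_in[v] += 1
    vol = [sum(d_in[v] for v in scc) for scc in sccs]   # v(SCC_j), Eq. (5)
    v_G = sum(vol)                                      # v(G): total in-degree
    singleton = [len(scc) == 1 for scc in sccs]
    g = [0] * len(sccs)                                 # external in-edges g_j
    for u, v in edges:
        ju, jv = comp[u], comp[v]
        if ju != jv and not (singleton[ju] and singleton[jv]):
            g[jv] += 1
    H = 0.0
    for j, scc in enumerate(sccs):
        if vol[j] == 0:
            continue
        H -= g[j] / v_G * math.log2(vol[j] / v_G)       # inter-community term
        for v in scc:                                   # intra-community term
            if d_in[v]:
                H -= vol[j] / v_G * d_in[v] / vol[j] * math.log2(d_in[v] / vol[j])
    return H / math.log2(n)                             # Eq. (7)

# a pure 3-cycle is maximally unclear; a strict linear order is perfectly clear
cycle_tau = structural_entropy(3, [(0, 1), (1, 2), (2, 0)], [[0, 1, 2]])
chain_tau = structural_entropy(3, [(0, 1), (1, 2)], [[0], [1], [2]])
print(round(cycle_tau, 4), round(chain_tau, 4))   # 1.0 0.0
```

The two toy graphs match the intuition stated above: the closer the preferences are to a linear order, the lower the entropy.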
To evaluate the overall preference clarity of the Evaluator LLM across the question set Q, we compute the average normalized structural entropy:

$$\tau_{\mathrm{avg}} = \frac{\sum_{q_i \in Q} \tau(G_i)}{|Q|} \quad (8)$$

where τ(G_i) is the normalized structural entropy of the graph G_i corresponding to question q_i, and |Q| is the total number of questions in the set Q. A higher value of τ_avg signifies that the preference relations generated by the Evaluator LLM are more complex, indicating a lower level of overall preference clarity.

3.3 Filtering Strategy for Preference Data That Induces Non-Transitivity

After decomposing a cyclic directed graph into multiple SCCs, if the cycles within each SCC can be eliminated and each SCC can be reconstructed as a Directed Acyclic Graph (DAG), then the entire directed graph can likewise be transformed into a DAG. Building on this observation, our framework transforms cyclic directed graphs into DAGs by converting each SCC into a DAG while preserving inter-SCC preference relations. This process ensures the global graph structure becomes acyclic without altering inter-component preferences. Subsequently, we derive transitive preference relations from the final DAG to filter the Evaluator LLM's preference data, facilitating self-purification of the training data.

Figure 3: Comparison of data volumes between the "Raw" and "Cleaned" training sets across the Helpful_Base, Vicuna, Oasst, Koala, and Self-instruct datasets. The "Cleaned" set is approximately 80% of the "Raw" set for each dataset.

The
detailed procedure is as follows:

1. For each SCC_i, compute the in-degree e_i^in of each vertex v_i, representing its global 'win' score; vertices with higher in-degree scores are prioritized over those with lower scores. To reconstruct the internal edges within SCC_i, first remove all original edges between vertices within SCC_i. Then, for any pair v_i, v_j: if e_i^in > e_j^in, add a directed edge v_j → v_i; if e_i^in = e_j^in, add a bidirectional edge v_i ↔ v_j.

2. After processing all SCCs, the original cyclic graph is transformed into a DAG, which is used to filter the training data as follows: for bidirectional edges (v_i ↔ v_j), the correct preferences are recorded as J(a_i, a_j) = 'tie' and J(a_j, a_i) = 'tie'; for unidirectional edges (v_i → v_j), the correct preferences are recorded as J(a_i, a_j) = 'lose' and J(a_j, a_i) = 'win'.

3. The training data is traversed sequentially, and all records consistent with the correct preferences are added to the "Cleaned" dataset.

See Appendix E for the detailed algorithm.

4 Experiment Setup

Dataset: In this study, we conduct experimental validation using the AlpacaEval benchmark (Li et al., 2023). AlpacaEval is specifically designed to assess the overall capabilities of LLMs in open-domain, instruction-following tasks, covering a wide range of evaluation scenarios such as reasoning and text generation. The benchmark comprises five datasets: Helpful_Base, Oasst, Koala, Vicuna, and Self-Instruct.

Teacher Model Selection: We evaluated the performance of Qwen2.5-Max (Qwen Team, 2025) using the 2.5k human-annotated samples provided by AlpacaEval, employing Chain-of-Thought Comparison (CoT Comparison) (Li et al., 2023); the detailed prompt template is provided in Appendix C.1. Qwen2.5-Max demonstrated exceptional performance, achieving a human agreement rate of 68.9%. Based on these results, we selected Qwen2.5-Max as our teacher model.
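The three-step reconstruction-and-filtering procedure of Section 3.3 admits a compact sketch; the pair-keyed dictionary encoding of the judge's raw labels is our assumption, not the paper's data format:

```python
def reconstruct_dag_prefs(scc, edges):
    """Steps 1-2: rebuild the preferences inside one SCC from 'win' scores
    (in-degrees, with an edge u -> v meaning v beat u).  Returns the
    corrected label for every ordered pair (i, j): 'win', 'lose', or 'tie'
    from a_i's point of view."""
    score = {v: sum(1 for _, w in edges if w == v) for v in scc}
    prefs = {}
    for i in scc:
        for j in scc:
            if i == j:
                continue
            if score[i] > score[j]:
                prefs[(i, j)] = "win"      # directed edge v_j -> v_i
            elif score[i] < score[j]:
                prefs[(i, j)] = "lose"
            else:
                prefs[(i, j)] = "tie"      # bidirectional edge v_i <-> v_j
    return prefs

def purify(raw_prefs, corrected):
    """Step 3: keep only the raw records that agree with the reconstructed
    DAG; pairs outside any multi-vertex SCC pass through unchanged."""
    return {p: lab for p, lab in raw_prefs.items()
            if corrected.get(p, lab) == lab}

# toy SCC: vertex 0 has win score 2, vertices 1 and 2 each have score 1
edges = [(1, 0), (2, 0), (2, 1), (0, 2)]           # loser -> winner
corrected = reconstruct_dag_prefs([0, 1, 2], edges)
raw = {(0, 1): "win", (0, 2): "win", (1, 2): "win", (2, 0): "win"}
print(purify(raw, corrected))   # {(0, 1): 'win', (0, 2): 'win'}
```

In the toy run, the raw records contradicting the reconstructed order (the 1-over-2 win, which becomes a tie, and the 2-over-0 win, which becomes a loss) are discarded, leaving only transitive preferences for fine-tuning.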
Experiment Details: We selected 21 representative LLMs from the AlpacaEval leaderboard and divided them into two groups according to the experimental design: 14 LLMs were used to collect preference data for constructing the training set, while the remaining 7 LLMs were employed to build the testing set for performance validation (see Appendix A for details). Throughout the experiments, we consistently applied the CoT Comparison prompt template for response generation, with the decoding temperature fixed at 0. Preference data for the training set was generated by invoking the official Qwen2.5-Max API. The unfiltered dataset was labeled "Raw" and subsequently filtered according to the steps described in Section 3.3, yielding the "Cleaned" dataset. The data distribution across datasets is illustrated in Figure 3. For comparative analysis, we fine-tuned models on both the "Raw" and "Cleaned" training sets derived from the five datasets, ultimately obtaining 10 fine-tuned models.

In the experiments, we used Qwen2.5-7B-Instruct as the base model and fine-tuned it with Low-Rank Adaptation (LoRA) (Hu et al., 2022) at rank 8. The model was trained for 3 epochs, with a learning rate of 1×10⁻⁴ and a batch size of 16.

5 Results and Analysis

In this section, we validate the effectiveness of the proposed filtering
method on five datasets. We utilize ρ_non-trans, as defined in Equation 3, to quantify the degree of preference non-transitivity in the evaluation of LLMs; a higher value of ρ_non-trans indicates a greater extent of non-transitive preferences. Additionally, we employ τ_avg, as defined in Equation 8, to assess the overall clarity of preferences, where a lower value reflects improved preference clarity. Furthermore, we provide a comprehensive measure of the alignment between model preferences and human preferences by calculating the human agreement rate and the Spearman correlation coefficient between the model's evaluations and human assessments. Additionally, we investigate two scenarios in which Evaluator LLMs are prone to exhibit non-transitive preferences; see Appendix D.

Table 1: Comparison of preference non-transitivity and overall clarity for Evaluator LLMs; each cell reports ρ↓_non-trans / τ↓_avg. Qwen-Base denotes the original Qwen2.5-7B-Instruct model, while Qwen-Raw, Qwen-Random, and Qwen-Cleaned denote models fine-tuned on the "Raw", "Random", and "Cleaned" training sets, respectively. For example, Qwen-Cleaned in the Helpful_Base column reflects the performance of the model fine-tuned on the "Cleaned" training set derived from the filtered Helpful_Base dataset.

| Model | Helpful_Base | Vicuna | Oasst | Koala | Self-instruct |
| Qwen2.5-Max (stronger LLM) | 63.68% / 0.8047 | 75.36% / 0.8448 | 64.29% / 0.7883 | 71.52% / 0.8296 | 64.97% / 0.7803 |
| Qwen-Base | 82.83% / 0.9221 | 78.93% / 0.8912 | 83.36% / 0.9138 | 80.96% / 0.9097 | 81.46% / 0.9123 |
| Qwen-Raw | 62.02% / 0.7963 | 57.50% / 0.8034 | 55.78% / 0.7734 | 64.29% / 0.8159 | 59.69% / 0.7665 |
| Qwen-Random | 60.58% / 0.7899 | 63.57% / 0.8078 | 58.81% / 0.7711 | 60.35% / 0.8116 | 57.14% / 0.7644 |
| Qwen-Cleaned (ours) | 44.85% / 0.6997 | 43.93% / 0.7262 | 46.96% / 0.6942 | 48.53% / 0.7149 | 48.98% / 0.6802 |
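Equation 3 is defined earlier in the paper, so its exact form is not reproduced here. As an illustrative sketch, under our assumption that ρ_non-trans measures the fraction of questions whose preference digraph contains a directed cycle, the quantity can be computed with a standard topological-sort check:

```python
from collections import defaultdict, deque

def has_cycle(nodes, edges):
    """Kahn's algorithm: if a topological order cannot consume every
    vertex, the digraph contains a directed cycle."""
    adj = defaultdict(list)
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen < len(nodes)

def non_transitivity_rate(question_graphs):
    """Fraction of question-level preference digraphs that are cyclic
    (our assumed reading of rho_non-trans)."""
    return sum(has_cycle(n, e) for n, e in question_graphs) / len(question_graphs)
```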
5.1 Main Results

5.1.1 Analysis of Non-Transitivity and Clarity in Preferences

As shown in Table 1, across all testing sets, models fine-tuned on the "Cleaned" training set exhibit the lowest degree of preference non-transitivity and the highest clarity of preferences, surpassing even the teacher model Qwen2.5-Max. This highlights the effectiveness of our proposed data filtering method. Additionally, models fine-tuned on the "Raw" dataset perform comparably to the teacher model Qwen2.5-Max, demonstrating that data distillation can indeed enhance a model's evaluation capability relative to its original performance. However, due to the inherent preference non-transitivity of the teacher model, direct distillation may learn low-quality preference data, thereby limiting the ability of the distilled model to surpass the teacher model and resulting in similar performance.

To validate that models fine-tuned on the "Cleaned" training set achieve lower preference non-transitivity and higher preference clarity even on "unseen" questions, we performed cross-validation by testing each fine-tuned model on the five testing sets. As shown in Table 2, models fine-tuned on "Cleaned" data show a 13.78% average reduction in non-transitivity and a 0.0879 decrease in normalized structural entropy compared to those trained on "Raw" data, indicating clearer overall preferences. Detailed results are in Appendix B.3. Notably, even advanced Evaluator LLMs (e.g., Qwen2.5-Max at 67.96%) exhibit high non-transitivity and entropy (0.8591 and 0.8095, respectively), reflecting limited preference clarity.

Table 2: Comparison of average preference non-transitivity and average clarity between models fine-tuned on the "Cleaned" and "Raw" datasets using Qwen2.5-7B-Instruct across testing sets. The leftmost column indicates the source dataset used for training, while the other columns show average model performance on the five testing sets. For example, the "Raw" column under Helpful_Base shows results for unfiltered data, and the "Cleaned" column reflects filtered data.

| Training Set | ρ_non-trans Raw | Cleaned | ∆ | τ_avg Raw | Cleaned | ∆ |
| Helpful_Base | 65.95% | 50.33% | -15.62% | 0.8204 | 0.7173 | -0.1031 |
| Vicuna | 61.56% | 50.96% | -10.60% | 0.8063 | 0.7367 | -0.0696 |
| Oasst | 62.61% | 48.73% | -13.88% | 0.7966 | 0.7101 | -0.0865 |
| Koala | 64.05% | 48.65% | -15.40% | 0.8044 | 0.7177 | -0.0867 |
| Self-instruct | 67.23% | 53.84% | -13.39% | 0.8286 | 0.7351 | -0.0935 |
| Average | 64.28% | 50.50% | -13.78% | 0.8113 | 0.7234 | -0.0879 |

5.1.2 Human Agreement Rate and Spearman Correlation Analysis

We evaluate the alignment of fine-tuned Evaluator LLMs with the 2.5k human-annotated preference labels from the AlpacaEval dataset, which contains around 650 questions with two candidate responses each, annotated by four human evaluators (Li et al., 2023). The questions are sourced from Helpful_Base, Oasst, Koala, Vicuna, Self-Instruct, and Alpaca.

Human Agreement Rate: This metric quantifies the alignment between Evaluator LLMs and human annotators' majority preferences. For each instruction, four predictions are generated by the Evaluator LLM. Using a leave-one-out approach, each prediction is compared against the majority vote of the remaining three human annotations, excluding the corresponding human annotation. The final score is computed as the average accuracy across all four comparisons per instruction.

Spearman Correlation: This metric evaluates the Spearman correlation between rankings based on Evaluator LLMs' preferences and those derived from human preferences.
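Both metrics can be sketched in a few lines of Python. This is a minimal illustration: the tie-breaking when the three remaining human annotations have no strict majority, and the classical rank-difference form of Spearman's coefficient (valid only for tie-free rankings), are simplifying assumptions on our part.

```python
from collections import Counter

def human_agreement_rate(model_preds, human_annos):
    """Leave-one-out agreement for one instruction: prediction k is scored
    against the majority vote of the three human annotations other than k."""
    assert len(model_preds) == len(human_annos) == 4
    correct = 0
    for k in range(4):
        others = [a for i, a in enumerate(human_annos) if i != k]
        # Counter.most_common breaks 1-1-1 splits arbitrarily; the paper
        # does not specify this edge case.
        majority = Counter(others).most_common(1)[0][0]
        correct += int(model_preds[k] == majority)
    return correct / 4

def spearman(rank_x, rank_y):
    """Classical Spearman rho, 1 - 6*sum(d^2)/(n(n^2-1)), for two
    tie-free rankings of the same n items."""
    n = len(rank_x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

The per-instruction agreement scores are then averaged over all instructions to obtain the reported human agreement rate.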
As shown in Table 3, models fine-tuned on the corresponding "Cleaned" training sets of Vicuna, Oasst, Koala, and Self-instruct exhibit higher human agreement rates than those fine-tuned on the "Raw" training sets, with improvements ranging from 0.5% to 1.2%. Additionally, models fine-tuned on the corresponding "Cleaned" training sets of Helpful_Base, Vicuna, and Self-instruct exhibit higher Spearman correlation than those fine-tuned on the "Raw" training sets. Notably, the Helpful_Base "Cleaned" training set achieves the largest improvement, with the Spearman correlation increasing from 0.93 to 0.97.

Table 3: Comparison of human agreement rate and Spearman correlation between models fine-tuned on the "Cleaned" and "Raw" datasets using Qwen2.5-7B-Instruct. The structure matches Table 2.

| Training Set | Agreement Raw | Cleaned | ∆ | Spearman Raw | Cleaned | ∆ |
| Helpful_Base | 66.9% | 66.9% | +0.0% | 0.93 | 0.97 | +0.04 |
| Vicuna | 65.2% | 65.8% | +0.6% | 0.97 | 0.98 | +0.01 |
| Oasst | 66.4% | 67.6% | +1.2% | 0.93 | 0.93 | +0.00 |
| Koala | 66.4% | 66.9% | +0.5% | 0.98 | 0.98 | +0.00 |
| Self-instruct | 67.8% | 68.3% | +0.5% | 0.98 | 1.00 | +0.02 |
| Average | 66.5% | 67.1% | +0.6% | 0.96 | 0.97 | +0.01 |

5.2 Ablation Experiments

Ablation of Different Base Models: We conducted repeated experiments using LLaMA3.1-8B-Instruct as the base model. As shown in Appendix B.2, models trained on the "Cleaned" training set outperform those trained on the "Raw" training set and also surpass the advanced open-source Evaluator LLM Auto-J 13B. We further validated that this superiority holds even for "unseen" questions, with detailed results provided in Appendix B.3, further highlighting the importance of high-quality data in enhancing model performance.

Ablation of Different Data Filtering Methods: As shown in Figure 3, the "Cleaned" training set constitutes approximately 80% of the "Raw" training set. To validate the effectiveness of our data filtering method, we randomly filtered out 20% of the training data and fine-tuned Qwen2.5-7B-Instruct on the remaining 80%, denoted Qwen-Random. As shown in Table 1, its overall performance is similar to that of the model trained on the "Raw" training set, and it showed higher preference non-transitivity on the Vicuna testing set. This comparison underscores the effectiveness of our targeted filtering strategy in enhancing data quality.

Ablation of Different Prompt Forms: We conducted additional experiments using a prompt format that includes 'tie' preference outcomes. The detailed prompt template is provided in Appendix C.2, and the experimental setup and results are presented in Appendix B.1, further validating the effectiveness of our data filtering method. Notably, when using prompts that include 'tie' preference outcomes, Evaluator LLMs exhibit higher preference non-transitivity and lower overall preference clarity compared to prompts without 'tie' outcomes. This finding is consistent with the results reported in prior work (Xu et al., 2025).

6 Conclusion

In this paper, we present a detailed analysis of preference non-transitivity in pairwise evaluations by LLM-as-a-Judge using graph theory, while measuring overall clarity through the two-dimensional structural entropy of directed graphs. By filtering low-quality training data that induce non-transitive preferences, our method effectively reduces preference non-transitivity, enhances overall preference clarity, and improves alignment with human judgments. This principled framework contributes to more reliable automated evaluation with LLMs and offers valuable insights for future research on preference consistency and robust benchmarking.
Limitations and Future Work

Although we have validated the effectiveness of our filtering approach on five datasets from AlpacaEval, it still cannot fully cover all real-world scenarios, and further validation across a broader range of domains is needed. Additionally, while our method has substantially reduced preference intransitivity, it has not completely eliminated it; further efforts to minimize preference intransitivity remain an important direction for future work. Lastly, our study primarily focused on pairwise comparison tasks in single-turn dialogues; exploring multi-turn dialogue tasks presents an intriguing avenue for future research.

Ethics Statement

This research focuses on analyzing and mitigating non-transitive preferences in LLMs when used as evaluators for open-ended tasks. The methodology involves processing publicly available data and conducting computational experiments using existing datasets and open-source frameworks, without involving human subjects or sensitive personal information. Therefore, no specific ethical approval was required for this study.

References

01.AI, Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Guoyin Wang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, and 14 others. 2025. Yi: Open foundation models by 01.AI. Preprint, arXiv:2403.04652.

Anthropic. 2023. Model card and evaluations for Claude models.

Anthropic. 2024. The Claude 3 model family: Opus, Sonnet, Haiku.

Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, and Benyou Wang. 2024a. Humans or LLMs as the judge?
A study on judgement bias. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8301–8327, Miami, Florida, USA. Association for Computational Linguistics.

Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. 2024b. AlpaGasus: Training a better Alpaca with fewer data. In The Twelfth International Conference on Learning Representations.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality.

Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. 2024. Chatbot Arena: An open platform for evaluating LLMs by human preference. In Forty-first International Conference on Machine Learning.

Liang Duan, Xiang Chen, Wenjie Liu, Daliang Liu, Kun Yue, and Angsheng Li. 2024. Structural entropy based graph structure learning for node classification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8):8372–8379.

Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. 2023. AlpacaFarm: A simulation framework for methods that learn from human feedback. In Advances in Neural Information Processing Systems, volume 36, pages 30039–30069. Curran Associates, Inc.

Peter C Fishburn. 1974. Paradoxes of voting. American Political Science Review, 68(2):537–546.

Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Mahong Xia, Zhang Li, Boxing Chen, Hao Yang, Bei Li, Tong Xiao, and JingBo Zhu. 2024. Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation.
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 464–478, Miami, Florida, USA. Association for Computational Linguistics.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Yue Hou, He Zhu, Ruomei Liu, Yingke Su, Jinxiang Xia, Junran Wu, and Ke Xu. 2025. Structural entropy guided unsupervised graph out-of-distribution detection. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16):17258–17266.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a changing climate: Enhancing LM adaptation with Tulu 2. Preprint, arXiv:2311.10702.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra
Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, and 7 others. 2024. Mixtral of experts. Preprint, arXiv:2401.04088.

Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. 2024. Benchmarking cognitive biases in large language models as evaluators. In Findings of the Association for Computational Linguistics: ACL 2024, pages 517–545, Bangkok, Thailand. Association for Computational Linguistics.

Angsheng Li and Yicheng Pan. 2016. Structural information and dynamical complexity of networks. IEEE Transactions on Information Theory, 62(6):3290–3339.

Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2024a. Generative judge for evaluating alignment. In The Twelfth International Conference on Learning Representations.

Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. 2024b. From crowdsourced data to high-quality benchmarks: Arena-Hard and BenchBuilder pipeline. CoRR, abs/2406.11939.

Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.

Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Ling-Hao Chen, Junhao Liu, Tongliang Liu, Fei Huang, and Yongbin Li. 2024c. One-shot learning as instruction data prospector for large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4586–4601, Bangkok, Thailand. Association for Computational Linguistics.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, and 1 others. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.

Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. 2024. Aligning with human judgement: The role of pairwise preference in large language model evaluators. arXiv preprint arXiv:2403.16950.

Adian Liusie, Potsawee Manakul, and Mark Gales. 2024. LLM comparative assessment: Zero-shot NLG evaluation through pairwise comparisons using large language models. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 139–151, St. Julian's, Malta. Association for Computational Linguistics.

Meta AI. 2024a. Introducing Llama 3.1: Our most capable models to date. https://ai.meta.com/blog/meta-llama-3-1/.

Meta AI. 2024b. Introducing Meta Llama 3: The most capable openly available LLM to date. https://ai.meta.com/blog/meta-llama-3/.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and others. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Hao Peng, Jingyun Zhang, Xiang Huang, Zhifeng Hao, Angsheng Li, Zhengtao Yu, and Philip S. Yu. 2024. Unsupervised social bot detection via structural information theory. ACM Trans. Inf. Syst., 42(6).

Qwen Team. 2024a. Hello Qwen2. https://qwenlm.github.io/blog/qwen2/.

Qwen Team. 2024b. Introducing Qwen1.5. https://qwenlm.github.io/blog/qwen1.5/.

Qwen Team. 2025. Qwen2.5-Max: Exploring the intelligence of