specifically trained for batch processing using the original RoPE. Subsequently, we applied group position encoding to the batch processing scenario and fine-tuned the model. The results demonstrate that applying group position encoding introduces no performance degradation for batch processing, confirming its compatibility and generalization across both streaming and batch processing settings.

6 Related Work

Streaming Language and Speech Transformers A typical implementation of Transformer-based streaming tasks adopts an incremental encoding strategy on the encoder side and an incremental decoding strategy on the decoder side (Ma et al., 2021, 2023; Zhang and Feng, 2023). With the rise of large language models, researchers have begun exploring how to adapt decoder-only architectures for streaming tasks. Among these approaches, batch-streaming models based on prompt structures attempt to approximate offline batch processing by re-encoding tokens during streaming inference (Agostinelli et al., 2024; Guo et al., 2024b; Koshkin et al., 2024; Wang et al., 2024). Some studies suggest that position confusion in streaming environments is a key factor necessitating re-encoding in LLMs (Guo et al., 2024a; Raffel et al., 2024).
To address this issue, one line of research focuses on modifying the decoder-only architecture to enhance its adaptability to streaming tasks (Guo et al., 2024a; Tsunoo et al., 2024; Chen et al., 2024), while another emphasizes optimizing positional encoding—such as the ALiBi positional encoding—to mitigate the effects of incremental position shifts during streaming decoding (Raffel et al., 2024). In contrast to simulated batch processing, an interleaved-streaming paradigm (Du et al., 2024; Yang et al., 2024) that adheres to temporal order has been explored, wherein input and output tokens are interleaved and encoded sequentially. While significant progress has been made in developing streaming models, existing studies lack a rigorous analysis of the fundamental differences between batch processing and streaming paradigms.

Figure 4: The performance comparison between group position streaming LLMs and other decoder-only models (streaming translation on IWSLT-17 En-Fr and En-De, BLEU vs. LAAL; streaming ASR on LibriSpeech En test-clean, WER vs. AL).

Figure 5: The BLEU performance of the batch-processing translation task on the IWSLT-17 En-Fr dataset.
Position Encoding in Transformers Position encoding (Raffel et al., 2020; Press et al., 2022; Su et al., 2024) is a crucial component of Transformer models (Vaswani, 2017), designed to break the permutation-invariant nature of self-attention mechanisms. Recent studies have demonstrated that decoder-only Transformer models can still capture positional information even
in the absence of explicit positional encoding (Shen et al., 2018). A plausible explanation is that causal attention masks enforce position-dependent token interactions, implicitly encoding positional information (Haviv et al., 2022; Tsai et al., 2019). Related research has shown that in tasks such as speech modeling (Likhomanenko et al., 2021) and language modeling (Haviv et al., 2022), decoder-only Transformers without positional encoding can achieve performance comparable to standard decoder-based Transformers. Furthermore, other studies (Kazemnejad et al., 2024; Ruoss et al., 2023) have indicated that the generalization ability of Transformers without positional encoding does not degrade significantly when handling varying context lengths. While significant progress has been made in understanding positional encoding in LLMs, existing research has primarily focused on static scenarios. In contrast, the role of positional encoding in streaming scenarios remains underexplored, where the dynamic modeling of positional information may follow different patterns and exert distinct effects.

7 Conclusion

This work provides a systematic analysis of the mismatches that arise in adapting batch-trained LLMs to streaming tasks. We identify input-attention mismatch as the primary bottleneck, while output-attention and position-ID mismatches have negligible impact, challenging the prevailing assumption that position inconsistencies necessitate frequent re-encoding. To clarify this, we conduct the first in-depth analysis of position encoding in streaming settings, showing that preserving strict absolute positions is unnecessary; instead, maintaining relative token order within source and target contexts is more critical. Building on these insights, we propose the group streaming paradigm, a simple yet effective strategy that bridges the gap between streaming and batch modes without requiring re-encoding.
The approach is model-agnostic and generalizable, achieving strong performance across both cross-lingual and cross-modal streaming tasks.

Limitations

This paper primarily focuses on exploring the optimal paradigm for streaming models and, therefore, does not delve into different waiting policies. The conclusions drawn in this study have only been validated under the wait-k policy. Additionally, our study is confined to streaming tasks in the text and audio modalities, with video streaming left for future investigation.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Victor Agostinelli, Max Wild, Matthew Raffel, Kazi Fuad, and Lizhong Chen. 2024. Simul-LLM: A framework for exploring high-quality simultaneous translation with large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

Gerry T. M. Altmann and Jelena Mirković. 2009. Incrementality and prediction in human sentence processing. Cognitive Science, 33(4):583–609.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460.

Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuhito Sudoh, Koichiro Yoshino, and Christian Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign. In Proceedings of the 14th International Workshop on
Spoken Language Translation, pages 2–14.

Junkun Chen, Mingbo Ma, Renjie Zheng, and Liang Huang. 2021. Direct simultaneous speech-to-text translation assisted by synchronized streaming ASR. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4618–4624.

Peikun Chen, Sining Sun, Changhao Shan, Qing Yang, and Lei Xie. 2024. Streaming decoder-only automatic speech recognition with discrete speech units: A pilot study. In Proc. Interspeech 2024, pages 4468–4472.

Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, and Jingren Zhou. 2023. Qwen-Audio: Advancing universal audio understanding via unified large-scale audio-language models. arXiv preprint arXiv:2311.07919.

Qian Dong, Yaoming Zhu, Mingxuan Wang, and Lei Li. 2022. Learning when to translate for streaming speech. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), pages 680–694.

Zhihao Du, Yuxuan Wang, Qian Chen, Xian Shi, Xiang Lv, Tianyu Zhao, Zhifu Gao, Yexin Yang, Changfeng Gao, Hui Wang, et al. 2024. CosyVoice 2: Scalable streaming speech synthesis with large language models. arXiv preprint arXiv:2412.10117.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Biao Fu, Kai Fan, Minpeng Liao, Yidong Chen, Xiaodong Shi, and Zhongqiang Huang. 2024. wav2vec-S: Adapting pre-trained speech models for streaming. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11465–11480.

Cleotilde Gonzalez, Javier F. Lerch, and Christian Lebiere. 2003. Instance-based learning in dynamic decision making. Cognitive Science, 27(4):591–635.

Shoutao Guo, Shaolei Zhang, and Yang Feng. 2024a. Decoder-only streaming transformer for simultaneous translation.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, and Yang Feng. 2024b. SiLLM: Large language models for simultaneous machine translation. arXiv preprint arXiv:2402.13036.

Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer language models without positional encodings still learn positional information. arXiv preprint arXiv:2203.16634.

Zhenyu He, Jun Zhang, Shengjie Luo, Jingjing Xu, Zhi Zhang, and Di He. 2024. Let the code LLM edit itself when you edit the code. arXiv preprint arXiv:2407.03157.

Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2024. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36.

Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. arXiv preprint arXiv:2302.14520.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213.

Roman Koshkin, Katsuhito Sudoh, and Satoshi Nakamura. 2024. TransLLaMa: LLM-based simultaneous translation system. In Findings of the Association for Computational Linguistics: EMNLP 2024.

Tatiana Likhomanenko, Qiantong Xu, Gabriel Synnaeve, Ronan Collobert, and Alex Rogozhnikov. 2021. CAPE: Encoding relative positions with continuous augmented positional embeddings. Advances in Neural Information Processing Systems, 34:16079–16092.

Dan
Liu, Mengge Du, Xiaoxi Li, Ya Li, and Enhong Chen. 2021. Cross attention augmented transducer networks for simultaneous translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in Neural Information Processing Systems, 36.

Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, et al. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 3025–3036.

Xutai Ma, Yongqiang Wang, Mohammad Javad Dousti, Philipp Koehn, and Juan Pino. 2021. Streaming simultaneous speech translation with augmented memory transformer. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7523–7527. IEEE.

Zhengrui Ma, Shaolei Zhang, Shoutao Guo, Chenze Shao, Min Zhang, and Yang Feng. 2023. Non-autoregressive streaming transformer for simultaneous translation. arXiv preprint arXiv:2310.14883.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. LibriSpeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. IEEE.

Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771.

Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations (ICLR 2022).

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision.
In International Conference on Machine Learning, pages 28492–28518. PMLR.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research (JMLR), 21(140):1–67.

Matthew Raffel, Victor Agostinelli, and Lizhong Chen. 2024. Simultaneous masking, not prompting optimization: A paradigm shift in fine-tuning LLMs for simultaneous translation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024).

Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. 2023. Randomized positional encodings boost length generalization of transformers. arXiv preprint arXiv:2305.16843.

Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. DiSAN: Directional self-attention network for RNN/CNN-free language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence.

Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.

Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and
Ruslan Salakhutdinov. 2019. Transformer dissection: A unified understanding of transformer's attention via the lens of kernel. arXiv preprint arXiv:1908.11775.

Emiru Tsunoo, Hayato Futami, Yosuke Kashiwagi, Siddhant Arora, and Shinji Watanabe. 2024. Decoder-only architecture for streaming end-to-end speech recognition. In Proc. Interspeech 2024, pages 4463–4467.

A. Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS 2017).

Minghan Wang, Thuy-Trang Vu, Jinming Zhao, Fatemeh Shiri, Ehsan Shareghi, and Gholamreza Haffari. 2024. Simultaneous machine translation with large language models. In Proceedings of the 22nd Annual Workshop of the Australasian Language Technology Association.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations (ICLR 2024).

Yifan Yang, Ziyang Ma, Shujie Liu, Jinyu Li, Hui Wang, Lingwei Meng, Haiyang Sun, Yuzhe Liang, Ruiyang Xu, Yuxuan Hu, et al. 2024. Interleaved speech-text language models are simple streaming text to speech synthesizers. arXiv preprint arXiv:2412.16102.

Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, Hang Yan, Jie Fu, Tao Gui, Tianxiang Sun, Yu-Gang Jiang, and Xipeng Qiu. 2024. AnyGPT: Unified multimodal LLM with discrete sequence modeling. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. 2023a. SpeechGPT: Empowering large language models with intrinsic cross-modal conversational abilities. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15757–15773.

Hang Zhang, Xin Li, and Lidong Bing. 2023b.
Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858.

Shaolei Zhang and Yang Feng. 2023. Hidden Markov transformer for simultaneous machine translation. arXiv preprint arXiv:2303.00257.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.

A Different Paradigms on Streaming Tasks

The main text introduces three approaches for applying LLMs to streaming tasks: batch-streaming, interleaved-streaming, and group-streaming. Among them, batch-streaming maximally simulates the batch-processing paradigm of offline scenarios through re-encoding, with the only difference being the availability of local information in a streaming setting. Figure 1 illustrates the different paradigms of LLM data processing using ASR as an example, with four panels: (a) batch-processing in an offline task; (b) group-streaming, where the positions of source and target are independent; (c) interleaved-streaming, which encodes streaming data in arrival order; and (d) batch-streaming, which re-encodes the KV cache and positions.

Figure 1: An ASR example illustrating the different paradigms of LLM processing.

We clarify that re-encoding
refers to reprocessing all previously generated target tokens after each new source context is read, before generating the next target token. This is solely for optimizing the generation of the latest token without altering previously output content. We exclude scenarios where re-encoding continuously adjusts already output content, as the final alignment after reading the entire input would be equivalent to batch processing. In this context, re-encoding clearly holds positive value.

B Model Details

B.1 Model Structure

Streaming Text LLM The group-streaming model, as previously introduced, is designed to enforce a strict attention constraint where historically generated tokens are prevented from attending to newly received source tokens, ensuring a clear separation between past and present information. Additionally, the model maintains independent positional encoding for source and target tokens, preserving structural integrity while facilitating effective streaming processing.

Streaming Speech LLM Figure 2 illustrates the structure of the streaming ASR model proposed in this paper. The model consists of a streaming audio encoder, an MLP projector, and our group-position-encoding-based streaming LLM. The streaming audio encoder is a variant of Wav2vec2 (Baevski et al., 2020), with the following key modifications: (1) Positional Encoding Adjustment: we replace the ConvPE in Wav2vec2 with a causal convolution-based positional encoding (causal ConvPE) to enforce directional constraints on the information flow. (2) Structural Optimization: the Transformer encoder in Wav2vec2 is replaced with a Transformer decoder to ensure global unidirectional information constraints, thereby enhancing incremental encoding capability. We refer to this modified model as Wav2vec2-Streaming.
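The causal ConvPE idea above, that each output frame may depend only on the current and past frames, can be sketched in plain Python. This is our own minimal illustration, not the trained Wav2vec2-Streaming layer: the kernel values and the function name are placeholders.

```python
def causal_conv1d(frames, kernel):
    """1-D causal convolution: output t depends only on frames[0..t].

    Left-pads with zeros so every output position sees exactly
    len(kernel) current-or-past frames -- a sketch of the 'causal
    ConvPE' constraint (kernel values illustrative, not trained).
    """
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(frames)
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(frames))]

# A centered convolution (as in the original ConvPE) would pad on both
# sides, letting position t peek at future frames -- exactly what the
# streaming variant must avoid.
frames = [1.0, 2.0, 3.0, 4.0]
out = causal_conv1d(frames, kernel=[0.5, 0.5])
# out[0] uses only frames[0] plus zero padding: 0.5*0 + 0.5*1 = 0.5
```

Under this padding scheme, appending a new frame never changes the outputs already produced for earlier frames, which is the property incremental encoding relies on.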
While it shares some similarities with Wav2vec-S (Fu et al., 2024), the latter employs absolute sinusoidal positional encoding, whereas Wav2vec2-Streaming retains causal convolution to improve temporal modeling.

Figure 2: Illustration of our group-streaming speech large language model, where the group-streaming LLM and the streaming audio encoder are connected through an MLP projector.

Additionally, we have modified Wav2vec2 within the HuggingFace Transformers framework6 to enable seamless interoperability with existing LLMs. Our code and pretrained weights will be open-sourced for research and application purposes. Wav2vec2-Streaming processes audio data sampled at 16 kHz, where each segment consists of 400 samples, with an 80-sample overlap between consecutive segments. This results in an embedding vector for the LLM approximately every 20 ms, ensuring a fine-grained temporal resolution for streaming speech recognition. Similar to Chen et al. (2021) and Dong et al. (2022), we adopt a fixed-interval audio segmentation approach combined with the wait-k strategy for streaming tasks. In our setup, the time interval is set to 400 ms, ensuring a structured and controlled latency for real-time processing. Unlike discrete encoding models (Zhang et al., 2023a; Zhan et al., 2024), which require expanding the LLM vocabulary to support speech-text multimodality, we propose a continuous encoding-based speech LLM. In this framework, the output features of the streaming audio encoder are mapped to the LLM space through an MLP projection layer, enabling end-to-end speech understanding and generation. This design is inspired by LLaVA (Liu et al., 2024) but has been specifically optimized
for streaming speech tasks.

B.2 Data Format

In this paper, all the large language models we selected are instruction-tuned versions. To fully leverage their instruction-following capabilities, we strictly adhere to the instruction format used during their pretraining phase. Additionally, we design the data format, as shown in Figure 3 and Figure 4, to align with the specific requirements of our tasks.

B.3 Training Method

Training Method of Different Streaming Paradigms The main text analyzes the impact of different LLM paradigms on streaming tasks, covering training and evaluation methods for interleaved-streaming, batch-streaming, and group-streaming. This section provides a detailed explanation of the masking matrix design for the different streaming paradigms and introduces the corresponding training methods. We explain these training paradigms in the context of the wait-k reading/writing policy (Ma et al., 2019). Figure 5 illustrates the attention mask under different training methods, indicating the input order, position IDs, and whether a token is included in the loss calculation. Figure 5 (a) represents the standard LLM causal mask matrix, which enables batch-processing training in offline scenarios using shifted loss computation. Figure 5 (b) also employs a causal mask matrix, but the model's input consists of an interleaved sequence of source and target tokens, with position IDs assigned sequentially.
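The sequential ID assignment of the interleaved input, contrasted with the independent source/target counters of group-streaming, can be sketched as follows. This is our own illustration; the target group here starts at ϕ = 0, one of the offsets discussed in the main text.

```python
def interleaved_position_ids(chunks):
    """Sequential position IDs for an interleaved source/target stream.

    `chunks` is a list of ('src' | 'tgt', n_tokens) pairs in arrival
    order; a single counter runs over everything, as in Figure 5 (b).
    """
    ids, cur = [], 0
    for kind, n in chunks:
        ids.append((kind, list(range(cur, cur + n))))
        cur += n
    return ids

def group_position_ids(chunks):
    """Group position encoding: source and target keep independent,
    contiguous counters regardless of arrival order (phi = 0 here)."""
    ids, counters = [], {"src": 0, "tgt": 0}
    for kind, n in chunks:
        start = counters[kind]
        ids.append((kind, list(range(start, start + n))))
        counters[kind] += n
    return ids

stream = [("src", 2), ("tgt", 1), ("src", 2), ("tgt", 1)]
# interleaved: src[0,1] tgt[2] src[3,4] tgt[5]
# group:       src[0,1] tgt[0] src[2,3] tgt[1]
```

Note how, under group IDs, the relative order within each of the source and target sequences is preserved even though the stream interleaves them, which is the property the paper identifies as the critical one.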
6 https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self

Data format of Gemma2-Instruct
Prompt: '<bos><start_of_turn>userTranslate the following English paragraph to German\n'
Input: 'Just in the last two days, we got the new temperature records in January.<end_of_turn>'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten wir neue Januar-Temperaturrekorde.<end_of_turn>'

Data format of Phi3-mini-Instruct
Prompt: '<|system|>Translate the following English paragraph to German:<|end|>'
Input: '<|user|>Just in the last two days, we got the new temperature records in January.<|end|>'
Output: '<|assistant|>Erst in den letzten beiden Tagen hatten wir neue Januar-Temperaturrekorde.<|end|>'

Data format of LLama-3.1-Instruct
Prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>Translate the following English paragraph to German:<|eot_id|>'
Input: '<|start_header_id|>user<|end_header_id|>Just in the last two days, we got the new temperature records in January.<|eot_id|>'
Output: '<|start_header_id|>assistant<|end_header_id|>Erst in den letzten beiden Tagen hatten wir neue Januar-Temperaturrekorde.<|eot_id|>'

Figure 3: Data format of the text translation task, with an example of translation from English to German.

Notably, each word may correspond to multiple tokens, where the first token is generated from the source, while the
remaining tokens are generated from the target. During loss computation, only positions that contribute to target token generation are considered.

Data format of Gemma2-Instruct
Prompt: '<bos><start_of_turn>userTranscribe the following English audio to English text\n'
Input: 'SPEECH<end_of_turn>'
Output: '<start_of_turn>Just in the last two days, we got the new temperature records in January.<end_of_turn>'

Data format of Phi3-mini-Instruct
Prompt: '<|system|>Transcribe the following English audio to English text:<|end|>'
Input: '<|user|>SPEECH<|end|>'
Output: '<|assistant|>Just in the last two days, we got the new temperature records in January.<|end|>'

Data format of LLama-3.1-Instruct
Prompt: '<|begin_of_text|><|start_header_id|>system<|end_header_id|>Transcribe the following English audio to English text:<|eot_id|>'
Input: '<|start_header_id|>user<|end_header_id|>SPEECH<|eot_id|>'
Output: '<|start_header_id|>assistant<|end_header_id|>Just in the last two days, we got the new temperature records in January.<|eot_id|>'

Figure 4: Data format of the ASR task, where 'SPEECH' is the audio embedding input.

Figure 5 (c) depicts the batch-streaming mask matrix, which is structurally akin to that of Raffel et al. (2024). It maintains the batch-processing input format while adopting interleaved-streaming position
encoding, preventing source tokens from attending to target tokens to eliminate input-attention mismatch and ensure streaming consistency. Figures 5 (d), (e), and (f) represent three different re-encoding scenarios, all of which share the same mask matrix. The core assumption of re-encoding is that, as new content is continuously read at the source end, both the historical KV cache and the position embeddings must be updated accordingly to ensure accurate next-token prediction. Therefore, the training phase should reflect this dynamic updating mechanism. However, existing approaches employ either a causal-mask or a prefix-to-prefix training method (Raffel et al., 2024), leading to a mismatch between training and inference. Specifically, causal-masked training is inherently offline and fails to capture the continuous update of content. While prefix-to-prefix training partially simulates this process, the target token is always the most recent one at each step; as more content is read, its behavior increasingly resembles an offline setting. In streaming scenarios, however, previously generated content cannot be modified, making this approach inadequate for capturing the true nature of re-encoding. To address this discrepancy, the mask matrix design in Figures 5 (d), (e), and (f) ensures consistency between the training and inference processes, effectively aligning the training paradigm with real-world inference dynamics.

Training Method of Streaming ASR Model Due to the lack of a large-scale pre-trained streaming audio encoder, our modified streaming version of Wav2vec2 requires a step-by-step training approach.
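The masking constraints discussed above (source tokens never attend to target tokens; each target token follows the wait-k reading schedule) can be made concrete in a small sketch. This is a simplified illustration assuming one token per word; the boolean convention and function name are ours, not the paper's implementation.

```python
def waitk_streaming_mask(M, N, k):
    """Boolean attention mask for M source tokens followed by N target
    tokens under a wait-k policy (one token per word, a simplification).

    mask[q][kv] == True means position q may attend to position kv.
    Source rows are causal over the source only (never see targets);
    target row j sees the first min(k + j, M) source tokens plus
    target positions 0..j.
    """
    L = M + N
    mask = [[False] * L for _ in range(L)]
    for q in range(M):                      # source rows: causal, source-only
        for kv in range(q + 1):
            mask[q][kv] = True
    for j in range(N):                      # target rows: wait-k over source
        q = M + j
        for kv in range(min(k + j, M)):
            mask[q][kv] = True
        for kv in range(M, q + 1):          # plus causal over targets
            mask[q][kv] = True
    return mask

m = waitk_streaming_mask(M=4, N=3, k=2)
# First target token (row 4) sees source tokens 0..1 and itself only.
```

Because the source block never attends into the target block, a row computed during training matches what the same token would see at streaming inference time, which is the consistency the mask design aims for.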
Figure 5: Attention mask matrices of the different paradigms: (a) offline; (b) interleaved-streaming; (c) batch-streaming (no re-encoding); (d) batch-streaming (re-encode KV); (e) batch-streaming (re-encode position); (f) batch-streaming (re-encode KV & position); (g) group-streaming. Source tokens, target tokens, masked tokens, loss-contributing and loss-excluded positions, and position IDs are indicated in the figure.

We adopt a four-stage training strategy to effectively train our proposed speech large language model, ensuring a smooth adaptation to streaming scenarios:

1. Stage 1: Pre-training for Feature Alignment. In the first stage, we focus on establishing a robust feature alignment between the streaming audio encoder and the LLM. We begin by freezing both Wav2vec2 and the LLM and train the MLP projector using a batch-processing task. The goal is to learn a stable feature transformation that maps the continuous speech representations from Wav2vec2 into a space that aligns with the LLM's token embedding space. This step is crucial for minimizing the modality gap between speech and text representations, ensuring that the LLM can effectively process speech-derived embeddings in later stages.

2. Stage 2: Streaming Adaptation of Wav2vec2. We replace Wav2vec2's ConvPE with the causal version used in Wav2vec2-Streaming, enabling directional constraints suitable for streaming processing. In this stage, we jointly train Wav2vec2-Streaming and the projector, allowing the model to adapt to incremental encoding while maintaining alignment with the LLM's input space.

3. Stage 3: Streaming Adaptation of Wav2vec2. We replace
Wav2vec2's Transformer encoder with the Transformer decoder from Wav2vec2-Streaming. This modification ensures that the model adheres to global unidirectional constraints. We then continue joint training of Wav2vec2-Streaming and the projector, improving the encoder's ability to generate high-quality speech embeddings in real time.

4. Stage 4: Fine-tuning the LLM for Streaming ASR. In the final stage, we freeze both Wav2vec2-Streaming and the projector, and fine-tune the LLM on a streaming ASR task. This step refines the LLM's ability to generate accurate text outputs from streaming speech representations, optimizing its instruction-following capabilities while maintaining low-latency processing.

C Experiment Details

C.1 Hyperparameters

When verifying grouped position encoding, the model parameters are configured as shown in Table 1. Notably, re-encoding introduces quadratic complexity, increasing the computational cost and resource requirements for both model training and inference. For the mismatch validation experiment, we reduce both the batch size and the learning rate by half.

| Hyperparameter | Text to Text (Gemma2, Phi3, LLama3.1) | ASR, Stage 1 (Gemma2, Phi3) | ASR, Stage 2 to 4 (Gemma2, Phi3) |
|---|---|---|---|
| Precision | bfloat16 | bfloat16 | bfloat16 |
| Learning rate | 2e-4 | 2e-4 | 2e-4 |
| LR scheduler | Linear | Linear | Linear |
| Optimizer | AdamW | AdamW | AdamW |
| Warmup steps | 500 | 1000 | 5000 |
| LoRA rank | 32 | 64 | 64 |
| Epochs | 2 | 4 | 4 |
| Batch size | 64 | 32 | 64 |
| Wait-k | 1,3,5,7,9,11 | 1,3,5,7,9 | 1,3,5,7,9 |

Table 1: Fine-tuning hyperparameters of the LLMs in this paper.

C.2 Decoding Strategy

The decoding process for the streaming LLM is detailed in Algorithm 1.

Algorithm 1 Streaming decoding with wait-k policy
Input: Source length list S, target length list T, wait-k policy k.
1: Initialize source KV cache Scache, target KV cache Tcache, and past token KV cache Pcache as None.
2: Initialize action = read, is_finished = false, and index = 0.
3: Initialize next_token as the target prompt tokens, and initialize the list of tokens generated in this round, token_list, as empty.
4: while is_finished is false do
5:   if action is read:
6:     Separate Pcache into Scache and Tcache.
7:     Read the prompt and k + index words, and save the hidden states to the source KV cache Scache.
8:     Merge Scache and Tcache into Pcache.
9:     Set action = write.
10:    Set index = index + 1.
11:  elif action is write:
12:    Separate Pcache into Scache and Tcache.
13:    Calculate and save the hidden states to the target KV cache Tcache.
14:    Merge Scache and Tcache into Pcache.
15:    Project next_token as Q, and calculate the attention output with the KV cache Pcache.
16:    Predict and update next_token based on greedy decoding.
17:    if next_token is the end symbol:
18:      Set is_finished to true.
19:    Add next_token to token_list.
20:    if token_list forms a complete word:
21:      Print the word.
22:      Set action to read, and reset token_list to an empty list.
23: end while
24: Return: the predicted words.

D More Details about Group Position

D.1 Relative Distance of Group Position

Let the source tokens be $X = [x_0, x_1, \ldots, x_{M-1}]$ and the target tokens be $Y = [y_0, y_1, \ldots, y_{N-1}]$, where the position IDs are $\mathrm{pos}_x = [0, 1, \ldots, M-1]$ and $\mathrm{pos}_y = [\phi, \phi+1, \ldots, \phi+N-1]$, respectively. In batch-processing mode, the starting position ID on the target side is given by $\phi = M$. In batch-streaming
mode, the starting position ID on the target side is given by $\phi = 0$.

Figure 6: Relative distance matrix of batch-processing mode and group-streaming mode (panels: batch-processing; group-streaming with $\phi = 0$, $\phi = 10$, and $\phi = 20$).

Define the rotary matrix as $R(m) = \mathrm{diag}(R_1(m), R_2(m), \ldots, R_{d/2}(m))$, where $m$ is the position ID, $d$ is the model dimension, and

$$R_i(m) = \begin{pmatrix} \cos(m\theta_i) & -\sin(m\theta_i) \\ \sin(m\theta_i) & \cos(m\theta_i) \end{pmatrix}, \quad \theta_i = 10000^{-2i/d}. \tag{1}$$

For the original rotary position embedding (RoPE) (Su et al., 2024), positional information is incorporated into each token’s query ($q$) and key ($k$) through a rotation matrix. This process can be expressed as $q^r_n = R(n) q_n$ and $k^r_m = R(m) k_m$, where $n$ and $m$ denote the respective position IDs. The attention mechanism in RoPE then incorporates the rotationally transformed queries and keys, leading to the following attention score computation:

$$\mathrm{Attention}(n, m) = {q^r_n}^T k^r_m = q_n^T R^T(n) R(m) k_m = q_n^T R(m-n) k_m. \tag{2}$$

For any two positions $n$ and $m$ within the sequence, their position encoding depends solely on $R(m-n)$, meaning it is determined by their relative distance $m - n$. When $k_m$ and $q_n$ both belong to either the source tokens or the target tokens, the relative distance is given by $\Delta = m - n$. In this case, the positional encoding results in batch-processing and batch-streaming remain identical. When $k_m$ and $q_n$ belong to the source tokens and target tokens, respectively, the relative distance is given by $\Delta = \phi + j - m$, where $j$ denotes the position of $q_n$ as the $j$-th token on the target side. In this case, the positional encoding results in batch-processing and batch-streaming depend on $\phi$.
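As a sanity check, the relative-distance property of Eq. (2) can be verified numerically. The sketch below is a toy NumPy illustration (not the paper's code): it builds the block-diagonal rotary matrix of Eq. (1) and confirms that the rotated inner product depends only on $m - n$.

```python
import numpy as np

def rotary_matrix(m, d, base=10000.0):
    # Block-diagonal R(m) from Eq. (1): d/2 independent 2x2 rotations.
    theta = base ** (-2.0 * np.arange(d // 2) / d)
    R = np.zeros((d, d))
    for i, t in enumerate(theta):
        c, s = np.cos(m * t), np.sin(m * t)
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return R

d, n, m = 8, 3, 7
rng = np.random.default_rng(0)
q, k = rng.normal(size=d), rng.normal(size=d)
lhs = (rotary_matrix(n, d) @ q) @ (rotary_matrix(m, d) @ k)  # q_n^r . k_m^r
rhs = q @ (rotary_matrix(m - n, d) @ k)                      # q^T R(m-n) k
assert np.isclose(lhs, rhs)  # attention score depends only on m - n
```

Because only the difference $m - n$ matters, shifting every target position ID by the same offset $\phi$ leaves within-side distances unchanged, which is exactly what group position encoding exploits.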
For batch processing, $\phi = M$ indicates that the target tokens are farther from the source starting token and closer to the source ending token. In contrast, for batch-streaming, $\phi = 0$ means the target starting token is closer to the source starting token and farther from the source ending token. This aligns with the sequential order in which information arrives in streaming scenarios, making it more suitable for capturing relative positional changes in streaming settings.

Figure 6 illustrates the variation in relative distances under the batch-processing and batch-streaming settings. In batch-processing mode, which is typically used in offline scenarios, position IDs are assigned sequentially. Tokens near the diagonal exhibit local positional relationships with smaller relative distances, whereas tokens farther from the diagonal have increasingly larger relative distances, reflecting their positional separation. In batch-streaming mode, the relative positional relationships among tokens within the source and target sequences remain unchanged. However, the relative distance between target and source tokens is influenced by the parameter $\phi$, shifting accordingly as $\phi$ increases. Taking $\phi = 0$ as an example, in a streaming scenario, the target token with position ID 0 first interacts with the source token at position ID 0,
resulting in a relative distance of 0. This alignment effectively models the sequential nature of data accumulation in streaming settings, ensuring that the position encoding adapts dynamically to the progressive arrival of information.

D.2 Why Group Position Avoids Confusion

Research by Shen et al. (2018), Haviv et al. (2022), and Tsai et al. (2019) has shown that decoder-only models can learn implicit positional information. In decoder-only architectures, source tokens and target tokens attend to different contexts. As a result, even if their position IDs overlap, the model can still distinguish between source and target based on the content they attend to, as shown in Equation 3:

$$\mathrm{Attention}(n, \mathrm{cache}) = \sum_i {q^r_n}^T k^r_i = \sum_i q_n^T R^T(n) R(i) k_i = \begin{cases} \sum_{i=0}^{n} q_{n_s}^T R(i-n) k_{i_s}, & q_{n_s} \text{ is source}, \\ \sum_{i=0}^{M-1} q_{n_t}^T R(i-n) k_{i_s} + \sum_{i=0}^{n} q_{n_t}^T R(i-n) k_{i_t}, & q_{n_t} \text{ is target}. \end{cases} \tag{3}$$

In both cases, the query token has an ID of $n$, but since it attends to different content, the model can still distinguish between the source and the target.

D.3 Potential Edge of Group Position

The model can learn the position offset $\phi$ through simple fine-tuning, so typical values of $\phi$ do not significantly impact performance. However, when $\phi$ becomes extremely large, it may lead to discrepancies with the model’s pretraining distribution due to the limited context length used during pretraining. Therefore, a reasonable range for $\phi$ should ensure that the maximum relative distance, specifically that between the last target token and the first source token, does not exceed the model’s pretraining context length. For example, Gemma2-2B-Instruct was pretrained with a context length of 8k, which suggests that the maximum suitable value of $\phi$ is around 6k, as shown in Table 2.
| Model | Wait-k | m=0 | m=512 | m=4k | m=5k | m=6k | m=7k | m=8k | m=10k | m=50k |
|---|---|---|---|---|---|---|---|---|---|---|
| Gemma2-2B-Instruct (8k) | 5 | 40.76 | 40.68 | 40.70 | 40.51 | 40.21 | 39.83 | 39.73 | 39.52 | 39.37 |
| Gemma2-2B-Instruct (8k) | 9 | 40.91 | 40.89 | 40.85 | 40.81 | 40.73 | 40.11 | 39.97 | 39.78 | 39.56 |

Table 2: BLEU performance of Gemma2-2B-Instruct (8k) under different memory sizes m and wait-k settings.

E Full Results of Text Translation Task

This section provides additional results to validate the impact of different initial position IDs on the target side in streaming translation tasks. The results cover three different large language models and two different translation tasks. The full results of the text translation task are shown in Table 3, which includes the accuracy metric BLEU and the latency metric LAAL.

Gemma2-2b-Instruct (target start ID $\phi$):

| Dataset | Wait-k | 0 | 0.5 | 128 | 256 | 512 | ∆ |
|---|---|---|---|---|---|---|---|
| En-Fr | 5 | 40.76 (5.21) | 40.76 (5.21) | 40.70 (5.21) | 40.57 (5.20) | 40.68 (5.21) | 0.19 (0.01) |
| En-Fr | 7 | 40.92 (7.18) | 40.92 (7.18) | 40.85 (7.17) | 40.91 (7.18) | 40.92 (7.18) | 0.07 (0.01) |
| En-Fr | 9 | 40.91 (9.14) | 40.91 (9.14) | 40.90 (9.13) | 40.88 (9.13) | 41.01 (9.13) | 0.09 (0.01) |
| En-Fr | 11 | 41.10 (11.09) | 41.10 (11.09) | 41.14 (11.09) | 40.96 (11.09) | 41.05 (11.09) | 0.18 (0.00) |
| En-De | 5 | 30.84 (4.62) | 30.84 (4.62) | 30.90 (4.63) | 30.80 (4.62) | 30.95 (4.56) | 0.15 (0.07) |
| En-De | 7 | 31.47 (6.63) | 31.47 (6.63) | 31.44 (6.63) | 31.57 (6.63) | 31.67 (6.59) | 0.23 (0.04) |
| En-De | 9 | 31.73 (8.66) | 31.73 (8.66) | 31.87 (8.65) | 31.91 (8.65) | 31.88 (8.65) | 0.18 (0.01) |
| En-De | 11 | 31.95 (10.70) | 31.95 (10.70) | 31.98 (10.69) | 31.95 (10.69) | 31.89 (10.69) | 0.09 (0.01) |

Phi3-mini-Instruct (target start ID $\phi$):

| Dataset | Wait-k | 0 | 0.5 | 128 | 256 | 512 | ∆ |
|---|---|---|---|---|---|---|---|
| En-Fr | 5 | 39.89 (5.45) | 39.89 (5.45) | 39.91 (5.44) | 40.06 (5.41) | 39.87 (5.44) | 0.19 (0.03) |
| En-Fr | 7 | 40.57 (7.38) | 40.57 (7.38) | 40.53 (7.37) | 40.72 (7.39) | 40.71 (7.39) | 0.19 (0.02) |
| En-Fr | 9 | 41.31 (9.28) | 41.31 (9.28) | 41.04 (9.29) | 41.35 (9.27) | 41.44 (9.27) | 0.20 (0.02) |
| En-Fr | 11 | 41.92 (11.17) | 41.92 (11.17) | 42.03 (11.17) | 41.94 (11.17) | 41.93 (11.17) | 0.11 (0.00) |
| En-De | 5 | 30.92 (4.65) | 30.92 (4.65) | 30.76 (4.64) | 30.81 (4.65) | 30.86 (4.65) | 0.16 (0.01) |
| En-De | 7 | 31.94 (6.65) | 31.94 (6.65) | 31.78 (6.64) | 31.84 (6.64) | 31.78 (6.64) | 0.16 (0.01) |
| En-De | 9 | 32.18 (8.69) | 32.18 (8.69) | 32.10 (8.68) | 32.21 (8.69) | 32.09 (8.68) | 0.12 (0.01) |
| En-De | 11 | 32.26 (10.71) | 32.26 (10.71) | 32.23 (10.73) | 32.23 (10.73) | 32.28 (10.73) | 0.10 (0.02) |

LLaMA3.1-8b-Instruct (target start ID $\phi$):

| Dataset | Wait-k | 0 | 0.5 | 128 | 256 | 512 | ∆ |
|---|---|---|---|---|---|---|---|
| En-Fr | 5 | 40.11 (5.23) | 40.11 (5.23) | 40.10 (5.22) | 39.93 (5.23) | 39.92 (5.23) | 0.19 (0.01) |
| En-Fr | 7 | 40.30 (7.19) | 40.30 (7.19) | 40.32 (7.19) | 40.35 (7.20) | 40.31 (7.19) | 0.03 (0.01) |
| En-Fr | 9 | 40.15 (9.17) | 40.15 (9.17) | 40.32 (9.16) | 40.34 (9.17) | 40.35 (9.17) | 0.20 (0.01) |
| En-Fr | 11 | 40.53 (11.11) | 40.53 (11.11) | 40.47 (11.11) | 40.58 (11.11) | 40.63 (11.10) | 0.16 (0.01) |
| En-De | 5 | 30.33 (4.58) | 30.33 (4.58) | 30.21 (4.57) | 30.37 (4.58) | 30.34 (4.58) | 0.16 (0.01) |
| En-De | 7 | 31.23 (6.54) | 31.23 (6.54) | 31.18 (6.54) | 31.16 (6.54) | 31.25 (6.53) | 0.09 (0.01) |
| En-De | 9 | 31.80 (8.63) | 31.80 (8.63) | 31.83 (8.63) | 31.76 (8.62) | 31.89 (8.62) | 0.13 (0.01) |
| En-De | 11 | 32.04 (10.56) | 32.04 (10.56) | 31.98 (10.55) | 32.07 (10.56) | 32.08 (10.56) | 0.10 (0.00) |

Table 3: Performance comparison of different models with various wait-k policies and target start IDs. ∆ represents the range of variation in BLEU scores and LAAL scores when the start ID of the target token takes different values. Bold indicates the smallest variation; underline indicates the largest variation.

F Model Efficiency

This section compares the computational cost and throughput of re-encoding and our grouped-streaming approach.
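The source of the cost gap can be illustrated with a toy cost model (our own illustration, not the paper's accounting): with re-encoding, every write step re-runs the forward pass over the entire visible prefix, whereas with a persistent KV cache only the newly added token is processed.

```python
def prefill_cost(src_len, tgt_len, k, re_encode):
    # Toy count of per-token forward passes under a wait-k schedule.
    # At write step t, the visible prefix holds min(k + t, src_len)
    # source tokens and t target tokens.
    cost = 0
    for t in range(tgt_len):
        prefix = min(k + t, src_len) + t
        cost += prefix + 1 if re_encode else 1  # cached: only the new token
    return cost

# Quadratic growth with re-encoding vs. linear growth with a KV cache.
assert prefill_cost(30, 30, 5, re_encode=False) == 30
assert prefill_cost(30, 30, 5, re_encode=True) > 1000
```

Under this rough model the gap widens with sequence length, consistent in spirit with the order-of-magnitude speedups reported below.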
We conduct a case study on the En–Fr streaming translation task using a filtered subset of the dataset that contains 7.3k sentence-level examples with controlled lengths, amounting to approximately 32k tokens. All experiments are conducted using the Phi-3 Mini Instruct model. Table 4 summarizes the inference time and throughput under different wait-k settings, highlighting the substantial efficiency gains from removing re-encoding. The results show that the proposed grouped-streaming paradigm eliminates the need for re-encoding, yielding significant throughput improvements: over 5× speedup under the wait-9 setting and more than 11× speedup under wait-5, compared to the re-encoding baseline.

| Wait-k | Inference mode | Time consumption | Throughput |
|---|---|---|---|
| 5 | with re-encoding | 59.54 h | 1.79 tokens/s |
| 5 | without re-encoding | 4.38 h (↓92.6%) | 20.24 tokens/s (×11.3) |
| 9 | with re-encoding | 28.87 h | 3.70 tokens/s |
| 9 | without re-encoding | 4.04 h (↓86.1%) | 21.93 tokens/s (×5.9) |

Table 4: Comparison of inference efficiency under different wait-k values and re-encoding modes.

G Visualization

G.1 Attention Distribution

Figure 7 illustrates the absolute values of the attention matrix, representing the attention magnitude of target tokens to both the input and output. In the left figure, the most attended column corresponds to the attention sink (Xiao et al., 2024), whereas in the right figure, the attention sink has been removed. The absolute attention map highlights each token’s attention to historical tokens but makes it difficult to assess how different tokens distribute their attention toward
a specific token. To better emphasize the distribution of target tokens’ attention toward a given token, we normalize the attention map along columns and apply a gamma transformation to enhance and amplify the relationships. Mathematically, given an attention matrix $A$, where $A_{i,j}$ represents the attention weight from token $j$ to token $i$, we normalize each column as follows:

$$A'_{i,j} = \frac{A_{i,j} - \min_i \{A_{i,j}\}}{\max_i \{A_{i,j}\} - \min_i \{A_{i,j}\}}. \tag{4}$$

Figure 7: The absolute values of the attention matrix, with the left figure incorporating the attention sink, while the right figure depicts the matrix after the removal of the attention sink.

G.2 Example of Streaming Decoding

Figure 8 shows an example of the streaming decoding process.

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two'
Output: '<start_of_turn>Erst'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days,'
Output: '<start_of_turn>Erst in'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we'
Output: '<start_of_turn>Erst in den'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got'
Output: '<start_of_turn>Erst in den letzten'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the'
Output: '<start_of_turn>Erst in den letzten beiden'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new'
Output: '<start_of_turn>Erst in den letzten beiden Tagen'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new temperature'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new temperature records'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten neue'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new temperature records in'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten neue Januar-'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new temperature records in January'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten neue Januar-Temperaturrekorde'

Input: '<bos><start_of_turn>user Translate the following English paragraph to German\nJust in the last two days, we got the new temperature records in January.<end_of_turn>'
Output: '<start_of_turn>Erst in den letzten beiden Tagen hatten neue Januar-Temperaturrekorde.<end_of_turn>'

Figure 8: English-to-German streaming translation. Bold indicates the most recently read or generated content.
arXiv:2505.16984v1 [cs.LG] 22 May 2025

UFT: Unifying Supervised and Reinforcement Fine-Tuning

Mingyang Liu, Gabriele Farina, Asuman Ozdaglar
LIDS, EECS, Massachusetts Institute of Technology
{liumy19,gfarina,asuman}@mit.edu

Abstract

Post-training has demonstrated its importance in enhancing the reasoning capabilities of large language models (LLMs). The primary post-training methods can be categorized into supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). SFT is efficient and well-suited for small language models, but it may lead to overfitting and limit the reasoning abilities of larger models. In contrast, RFT generally yields better generalization but depends heavily on the strength of the base model. To address the limitations of SFT and RFT, we propose Unified Fine-Tuning (UFT), a novel post-training paradigm that unifies SFT and RFT into a single, integrated process. UFT enables the model to effectively explore solutions while incorporating informative supervision signals, bridging the gap between the memorizing and thinking underlying existing methods. Notably, UFT outperforms both SFT and RFT in general, regardless of model size. Furthermore, we theoretically prove that UFT breaks RFT’s inherent exponential sample complexity bottleneck, showing for the first time that unified training can exponentially accelerate convergence on long-horizon reasoning tasks.

“学而不思则罔,思而不学则殆。”
"Learning without thinking leads to confusion; thinking without learning is perilous."
——孔子 (Confucius)

1 Introduction

When humans learn a new subject, we typically practice with problem sets (thinking) and try to understand the solutions when we encounter difficulties (memorizing). These activities have counterparts in the fine-tuning of LLMs:

• Supervised Fine-Tuning (SFT). Memorizing the collected reasoning trace (solution) by maximizing its log-likelihood.
• Reinforcement Fine-Tuning (RFT).
Exploring the reasoning space of the LLM and improving performance according to the signal from a verifier of the final answer (thinking).

However, unlike in human learning, learning and thinking are disentangled during the training of language models. Specifically, prior work (DeepSeek-AI et al., 2025; Zhou et al., 2023; Muennighoff et al., 2025; Liu et al., 2025; Zeng et al., 2025) typically applies either SFT or RFT throughout the fine-tuning phase, or applies RFT only after SFT completes (cf. Figure 1). The choice of the proper fine-tuning algorithm depends on the LLM’s capacity and the task’s complexity.¹

¹ Source code available at https://github.com/liumy2010/UFT.

Figure 1: (top left, top right, middle, bottom) Illustrations of SFT, RFT, SFT-RFT, and UFT, respectively, on the example question "How to get 24 with (3,5,7,13)?" with solution 5×13=65, 7+65=72, 72÷3=24. SFT-RFT refers to applying RFT after an initial SFT stage (DeepSeek-AI et al., 2025; Zeng et al., 2025). (top, center) The annotation usage of different algorithms over training. Curves are slightly shifted for better visibility.

Specifically, when the LLM is weak, SFT typically works better since the LLM
cannot explore the correct answer during reinforcement learning (Pan et al., 2025), due to the sparse reward caused by the verifier-based reward model. On the other hand, when the LLM is strong, RFT generalizes better (Xie et al., 2025; Chu et al., 2025). To get the best of both worlds, we propose Unified Fine-Tuning (UFT), which unifies SFT and RFT and enriches the reinforcement learning signal with supervised feedback, enabling the model to acquire new knowledge during fine-tuning more efficiently. In Figure 1, SFT-RFT refers to the common practice of initiating reinforcement learning from a supervised fine-tuned model, as widely adopted in the literature (DeepSeek-AI et al., 2025; Zeng et al., 2025). As shown in Figure 1 (top left), SFT uses full annotations (solutions) throughout training, whereas RFT does not use any annotations at all (Figure 1, top right). Similarly, SFT-RFT begins with SFT using full annotations, but once the RFT phase starts, it discards all annotations and relies entirely on exploration.

Figure 2: Accuracy of different algorithms when trained on Countdown (Wikipedia contributors, 2025), MATH(3,4,5) (levels 3–5 only) (Hendrycks et al., 2021; Zeng et al., 2025), and the Knights and Knaves logic puzzle (Logic) (Xie et al., 2025). Accuracy is averaged over Qwen2.5 models of sizes 0.5B, 1.5B, and 3B (Qwen et al., 2025). Base refers to the model without fine-tuning, and R3 is the curriculum reinforcement learning baseline (Xi et al., 2024). The figure shows that UFT outperforms both SFT and RFT, while the relative performance of SFT and RFT varies depending on task complexity.

In contrast, our method, UFT, offers a smooth transition
from SFT to RFT, preserving the annotation signal early on and gradually reducing it as the model becomes capable of self-guided reasoning.

The most relevant work to UFT is Learning Reasoning through Reverse Curriculum Reinforcement Learning (R3) (Xi et al., 2024), which proposes a curriculum learning method that concatenates the problem with a slice of the solution (the hint, cf. Figure 4 left). While R3 treats hints primarily as exploration aids, UFT further integrates them as part of the supervision signal. This unification enables reinforcement learning not just to search, but to learn from existing solutions, effectively raising the performance ceiling imposed by the model’s pretraining capacity (cf. Figure 2).

A detailed comparison with related work is postponed to Appendix A. Figure 2 shows the accuracy of different algorithms over time; the training sets are Countdown (Wikipedia contributors, 2025; Pan et al., 2025), MATH(3,4,5) (levels 3–5 only) (Hendrycks et al., 2021; Zeng et al., 2025), and the Knights and Knaves logic puzzle (Logic) (Xie et al., 2025). Base refers to the model before fine-tuning, and R3 represents the curriculum reinforcement learning baseline (Xi et al., 2024). As shown in the figure, UFT generally outperforms all other algorithms. Furthermore, we provide evaluations on various benchmarks, with results shown in Table 2. Moreover, we theoretically prove that RFT (DeepSeek-AI et al., 2025; Zeng et al.,
2025; Liu et al., 2025) suffers from an inherent sample complexity bottleneck, which is exponential in the length of the reasoning. In contrast, the unified training paradigm in UFT can improve the sample complexity to a polynomial dependence on the reasoning length, an exponential improvement over RFT.

1.1 Contribution

We state the contributions of this paper as follows.

1. Integration of Supervision and Reward Signals. UFT provides a general framework that integrates the supervision from SFT and the reward from RFT into a single training paradigm. UFT blends reward optimization with log-likelihood maximization on hints (partial solutions), and smoothly transitions from fully supervised to fully reinforcement learning. This integration allows models to explore and learn simultaneously, addressing the trade-off between memorization (SFT) and generalization (RFT) in a principled way.

2. Theoretical Justification. We provide a theoretical analysis of UFT, proving that it achieves polynomial sample complexity in the reasoning length, compared to the exponential complexity required by standard RFT. This result formally establishes the efficiency gains from unified learning (cf. Section 4).

3. Empirical Validation Across Model Scales and Tasks. We evaluate the algorithms by training Qwen2.5-0.5/1.5/3B (Qwen et al., 2025) and Llama3.2-1/3B (Grattafiori et al., 2024) on Countdown (Wikipedia contributors, 2025; Pan et al., 2025), MATH (Hendrycks et al., 2021), and the Knights and Knaves logic puzzle (Logic) (Xie et al., 2025). UFT consistently outperforms previous methods, showing robustness across domains and models (cf. Section 5).

2 Preliminaries

Notation. For any integer $n > 0$, let $[n] := \{1, 2, \cdots, n\}$ and let $\Delta_n := \{x \in [0,1]^n : \sum_{i=1}^{n} x_i = 1\}$ be the $(n-1)$-dimensional probability simplex. For any two distributions $x, y \in \Delta_n$, let $\mathrm{KL}(x \,\|\, y) := \sum_{i=1}^{n} x_i \log \frac{x_i}{y_i}$ denote the KL-divergence between $x$ and $y$. For any discrete set $S$, let $|S|$ be its cardinality.
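For concreteness, the KL-divergence over the simplex can be sketched in a few lines (our own illustration of the notation, not code from the paper):

```python
import math

def kl(x, y):
    # KL(x || y) = sum_i x_i * log(x_i / y_i) for distributions x, y on
    # the probability simplex; terms with x_i = 0 contribute nothing.
    assert abs(sum(x) - 1.0) < 1e-9 and abs(sum(y) - 1.0) < 1e-9
    return sum(xi * math.log(xi / yi) for xi, yi in zip(x, y) if xi > 0)

assert kl([0.5, 0.5], [0.5, 0.5]) == 0.0   # zero iff the distributions match
assert kl([0.9, 0.1], [0.5, 0.5]) > 0.0    # strictly positive otherwise
```

This is the quantity that later appears both as the regularizer toward the reference policy and as the supervision term toward the optimal policy in the UFT objective.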
Figure 3: An illustration of the Countdown game, where the goal is to obtain 24 by applying basic arithmetic operations ($+, -, \times, \div$) to the numbers (3, 5, 7, 13). The green path (5×13=65, 7+65=72, 72÷3=24) represents the correct solution.

Search Tree. The problem-solving process can be represented as a search tree, as illustrated in Figure 3. Except for the leaf nodes, each node (also referred to as a state; we use the terms node and state interchangeably) in the search tree has $B$ children, where $B$ is the branching factor. Each child represents a different next token (or next sentence) to be generated, so a path from the root to a leaf node corresponds to a complete solution to the problem. The tree has height $H$, with the root at height 0 and each node’s height equal to its parent’s height plus one. Let $\mathcal{S}_h$ denote the set of nodes with height $h \in \{0, 1, \cdots, H\}$ and $\mathcal{S} := \bigcup_{h=0}^{H} \mathcal{S}_h$. Note that $|\mathcal{S}_0| = 1$ since it only contains the root $s_{\mathrm{root}}$, and $|\mathcal{S}_{h+1}| = B \cdot |\mathcal{S}_h|$. Therefore, there are $\sum_{h=0}^{H} B^h = \frac{B^{H+1}-1}{B-1}$ nodes in total. Upon reaching a leaf node $s \in \mathcal{S}_H$, the model receives reward $R(s) \in [0, 1]$. A policy can be written as $\pi : \bigcup_{h=0}^{H-1} \mathcal{S}_h \to \Delta_B$, where $\pi(a \mid s)$ is the probability of selecting the $a$-th child of $s$. For any state-action pair $(s, a) \in \mathcal{S} \times [B]$, let $\mathcal{T}(s, a) \in \mathcal{S}$ be the child at the
branch $a$ of state $s$, and let $\mathcal{T}(s, a) = \emptyset$ for $s \in \mathcal{S}_H$. The value function of policy $\pi$ is written as $V^\pi : \mathcal{S} \to [0, 1]$. We write $s_{h_0} = s, (s_h)_{h=h_0}^{H} \sim \pi$ for the trajectory starting from $s$ and sampled according to $\pi$, i.e., $a_h \sim \pi(\cdot \mid s_h)$ and $s_{h+1} = \mathcal{T}(s_h, a_h)$. For any $h_0 \in \{0, 1, \cdots, H\}$ and $s \in \mathcal{S}_{h_0}$, we define $V^\pi(s) := \mathbb{E}_{s_{h_0}=s,\, (s_h)_{h=h_0}^{H} \sim \pi}[R(s_H)]$, which is the expected reward obtained by following policy $\pi$ starting from node $s$. Let $\pi^* \in \arg\max_\pi V^\pi(s_{\mathrm{root}})$ denote the optimal (deterministic) policy that achieves the highest expected reward¹. Let $V^* := V^{\pi^*}(s_{\mathrm{root}})$ be the expected reward of the optimal policy $\pi^*$. Since $\pi^*$ is deterministic, let $s^*_0, a^*_0, s^*_1, a^*_1, \cdots, s^*_H$ represent the path from the root to a leaf node obtained by following $\pi^*$, where $s^*_0 = s_{\mathrm{root}}$.

3 Unified Fine-Tuning (UFT)

In this section, we introduce the two key features of UFT: (i) an exploration mechanism guided by hints, which improves sample efficiency by mitigating the sparse reward problem common in rule-based reinforcement learning (DeepSeek-AI et al., 2025); and (ii) a hybrid training objective that combines reinforcement learning with a log-likelihood term on hints, which provides a more informative learning signal and enables the model to acquire knowledge more effectively during fine-tuning.

3.1 Exploration with Hint

Although RFT is beneficial for training large models (DeepSeek-AI et al., 2025), several recent studies (Pan et al., 2025) report that small models often fail to reason effectively, as they may never explore the correct answer even once due to the sparse reward. Additionally, other work has found that RFT’s final performance is constrained by the base model’s capabilities (Gandhi et al., 2025). To address the sparse reward issue, UFT guides exploration using a hint: trajectory sampling starts from the concatenation of the problem description and a hint, which is a partial solution to the problem (cf. Figure 4). In this way, models explore the correct answer more frequently.
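The hint-guided sampling described above can be sketched as follows. This is a hypothetical illustration, not the paper's code; `policy_step` stands in for one LLM decoding step, and "<eos>" marks the end of a trajectory. The first $l$ annotated solution steps are fixed as the hint, and the policy completes the rest of the trajectory.

```python
def rollout_with_hint(problem, solution_steps, policy_step, l):
    # Seed the trajectory with the first `l` annotated solution steps
    # (the hint), then let the policy generate until the end symbol.
    trace = list(solution_steps[:l])
    state = problem + " ".join(trace)
    while not state.endswith("<eos>"):
        step = policy_step(state)  # one sampled reasoning step
        trace.append(step)
        state += " " + step
    return trace

# With a dummy policy that immediately stops, the hint prefix is preserved.
stop = lambda state: "<eos>"
assert rollout_with_hint("q: ", ["a", "b", "c"], stop, 2) == ["a", "b", "<eos>"]
```

Setting $l$ close to the full solution length makes the correct answer almost certain to be reached, while $l = 0$ recovers plain RFT exploration; how $l$ is sampled is the subject of Section 3.1.1.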
RFT can be modeled as the task of finding a path from the root of the problem-solving tree to a leaf node that represents the correct answer. As shown in Figure 3, RFT needs to identify the green path. However, the problem-solving tree for real-world tasks, such as math problems, typically contains an enormous number of nodes, making it difficult for an LLM to discover the correct path through exploration alone. To make matters worse, under the rule-based reward model proposed in DeepSeek-AI et al. (2025), only a small fraction of the leaf nodes correspond to correct answers, resulting in the well-known sparse reward problem (Ladosz et al., 2022). We address this challenge by concatenating the problem with a partial solution, referred to as the hint, to guide the model towards the correct answer. Figure 4 (left) provides an example of UFT’s prompt.

3.1.1 Hint Length Sampling

Since we are ultimately interested in the LLM’s performance when the hint length is zero, the hint must be gradually shortened during training. A natural idea is to reduce the hint length by a constant amount at regular intervals, which is referred to as staged reinforcement learning (Xi et al., 2024). However, because solutions typically consist of no more than 10 sentences, changes in
hint length can cause a significant distribution shift, leading to unstable training (cf. Figure 4, right).

UFT Prompt (Figure 4, left):
Question: "A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. User: Julie is reading a 120-page book. Yesterday, she was able to read 12 pages and today, she read twice as many pages as yesterday. If she wants to read half of the remaining pages tomorrow, how many pages should she read? Show your work in <think> </think> tags. And return the final answer in <answer> </answer> tags, for example <answer>12</answer>. Assistant: Let me solve this step by step. <think>"
Hint: "Maila read 12 x 2 = <<12*2=24>>24 pages today. So she was able to read a total of 12 + 24 = <<12+24=36>>36 pages since yesterday. There are 120 - 36 = <<120-36=84>>84 pages left to be read."
Solution (continuation): "Since she wants to read half of the remaining pages tomorrow, then she should read 84/2 = <<84/2=42>>42 pages. #### 42 </think>"

Figure 4: (left) An illustration of the UFT prompt. We adopt the prompting template from TinyZero (Pan et al., 2025), which is similar to that used in Deepseek-R1 (DeepSeek-AI et al., 2025). The hint consists of a slice of the full solution; during training, the question prompt and the hint are concatenated and fed to the model. (right) The training curve (average score on the training batch) of Qwen2.5-0.5B. Staged and UFT keep a zero hint from step 300 onward.

To avoid distribution shift during training, Xi et al. (2024) sample the hint length uniformly from all possible values throughout training. However, relying on hints throughout training introduces a significant distribution mismatch between training and evaluation. This often leads to performance
collapse at test time, where no hints are available. To address this, UFT employs a smoothed reduction of the hint length to zero, which (i) avoids drastic distribution shifts and (ii) better aligns the training distribution with the evaluation distribution.

Figure 5: An ablation study of different hint length schedulers, trained on Countdown with Qwen2.5-0.5B (left) and Llama-3.2-1B (right). RFT (cosine) refers to reinforcement learning with the cosine annealing hint length scheduler proposed in this section.

Specifically, we maintain a variable $p \in [0, 1]$, representing the proportion of the solution revealed to the LLM as a hint. The value of $p$ gradually decreases during training according to cosine annealing (cf. (B.1)) (Loshchilov and Hutter, 2017). Let $l$ be the random variable indicating the hint length, and let $L$ be the total length of the solution (e.g., the number of sentences). By definition, we require $l \in \{0, 1, \cdots, L\}$ and $\mathbb{E}[l] = p \cdot L$, so that the expected hint length matches the proportion $p$. To achieve this, we sample $l \sim \mathrm{Binomial}(L, p)$ from a Binomial distribution². It is straightforward to verify that $\mathbb{E}[l] = p \cdot L$. Compared to stage-wise hint length reduction, UFT provides a smoother transition from long to short hints. The training curves of these algorithms are shown in Figure 4 (right). We
can see that the training curve of UFT is smoother and converges faster than that of staged reinforcement learning. Note that neither staged reinforcement learning nor UFT uses any hint after step 300. As shown in Figure 5, although RFT (cosine), which is RFT equipped with the cosine annealing hint length scheduler, outperforms R3 (uniform sampling), it is still worse than SFT-RFT. Furthermore, for Llama-3.2-1B, RFT (cosine) is even worse than SFT alone. This implies that the model’s performance is constrained by the knowledge gained through pretraining (Gandhi et al., 2025), which motivates the second modification of UFT introduced in Section 3.2: an additional log-likelihood term in the objective function.

² $\Pr(l = l_0) = \binom{L}{l_0} p^{l_0} (1-p)^{L-l_0}$ for any $l_0 \in \{0, 1, \cdots, L\}$. In other words, $l$ is the number of heads obtained when tossing $L$ independent coins, each landing heads with probability $p$.

3.2 Objective Function Modification

Hinted RFT only enables LLMs to explore the correct solution more frequently; it remains inefficient at injecting new knowledge into the LLMs. This inefficiency arises because each sampled trajectory provides limited information, essentially a binary correct/incorrect signal, which is far less informative than the supervision signal in SFT. In contrast, SFT enables more efficient knowledge acquisition but suffers from poor generalization (Xie et al., 2025; Zeng et al., 2025). To get the best of both worlds, UFT introduces an additional log-likelihood term in the objective function of RFT, allowing the model to learn from the informative supervision signal while still benefiting from the generalization of RFT.

For notational simplicity, let $s_0 = s_{\mathrm{root}},\, (s_h, a_h)_{h=0}^{H-1} \sim \pi$ denote the shorthand for $a_h \sim \pi(\cdot \mid s_h)$ and $s_{h+1} = \mathcal{T}(s_h, a_h)$, i.e., $(s_h, a_h)_{h=0}^{H-1}$ represents a trajectory sampled according to $\pi$ starting at $s_{\mathrm{root}}$. Formally, let $J_{\mathrm{value}}\big((s_h, a_h)_{h=0}^{H-1}\big)$ denote the objective function associated with the expected reward³.
Then, letting $\beta > 0$ be the hyperparameter controlling the KL divergence, we have

$$J_{\mathrm{RFT}} = \mathbb{E}_{s_0 = s_{\mathrm{root}},\, (s_h, a_h)_{h=0}^{H-1} \sim \pi}\Big[ J_{\mathrm{value}}\big((s_h, a_h)_{h=0}^{H-1}\big) - \beta \sum_{h=0}^{H-1} \mathrm{KL}\big(\pi(\cdot \mid s_h) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid s_h)\big) \Big] \quad (3.1)$$

$$J_{\mathrm{UFT}} = \mathbb{E}_{l,\, s_0 = s_{\mathrm{root}},\, (s_h, a_h)_{h=0}^{l-1} \sim \pi^*,\, (s_h, a_h)_{h=l}^{H-1} \sim \pi}\Big[ J_{\mathrm{value}}\big((s_h, a_h)_{h=l}^{H-1}\big) - \beta \sum_{h=l}^{H-1} \mathrm{KL}\big(\pi(\cdot \mid s_h) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid s_h)\big) - \beta \sum_{h=0}^{l-1} \mathrm{KL}\big(\pi^*(\cdot \mid s_h) \,\|\, \pi(\cdot \mid s_h)\big) \Big] \quad (3.2)$$

Compared to the objective function of GRPO, UFT adds an additional term $\beta \sum_{h=0}^{l-1} \mathrm{KL}(\pi^*(\cdot \mid s_h) \,\|\, \pi(\cdot \mid s_h))$, the KL divergence between the optimal policy and the current policy. Compared to $J_{\mathrm{value}}$, this term explicitly guides the policy towards optimality and thus results in a faster convergence rate. We remark that the optimal policy $\pi^*$ is unknown, so we cannot compute $\beta \sum_{h=0}^{l-1} \mathrm{KL}(\pi^*(\cdot \mid s_h) \,\|\, \pi(\cdot \mid s_h))$ directly. However, thanks to the annotations contained in the dataset, we have access to a trajectory sampled according to $\pi^*$, i.e., $(s^*_h, a^*_h)_{h=0}^{H-1} \sim \pi^*$, which can be used to estimate the KL divergence. According to the definition of KL divergence, minimizing $\mathrm{KL}\big(\pi^*(\cdot \mid s^*_h) \,\|\, \pi(\cdot \mid s^*_h)\big)$ is equivalent to minimizing $\sum_{a_h=1}^{B} \pi^*(a_h \mid s^*_h) \log \frac{1}{\pi(a_h \mid s^*_h)}$ (omitting terms irrelevant to $\pi$), and $\log \frac{1}{\pi(a^*_h \mid s^*_h)}$ is an unbiased estimator of it, since $a^*_h \sim \pi^*(\cdot \mid s^*_h)$. Therefore, (3.2) can be equivalently written as

$$J_{\mathrm{UFT}} = \mathbb{E}_{l,\, s_l = s^*_l,\, (s_h, a_h)_{h=l}^{H-1} \sim \pi}\Big[ J_{\mathrm{value}}\big((s_h, a_h)_{h=l}^{H-1}\big) - \beta \sum_{h=l}^{H-1} \mathrm{KL}\big(\pi(\cdot \mid s_h) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid s_h)\big) + \beta \sum_{h=0}^{l-1} \log \pi(a^*_h \mid s^*_h) \Big]. \quad (3.3)$$

Therefore, the UFT objective (3.3) can be interpreted as (i) maximizing the expected reward while (ii) staying
close to the reference policy and (iii) memorizing the hint by maximizing the log-likelihood of producing the hint.

Remark 3.1. The name Unified Fine-Tuning (UFT) comes from the fact that when $p \equiv 0$ for all steps during training, (3.3) is equivalent to RFT, since $\beta \sum_{h=0}^{l-1} \log \pi(a^*_h \mid s^*_h) = 0$. When $p \equiv 1$, then $J_{\mathrm{value}}\big((s_h, a_h)_{h=l}^{H-1}\big) - \beta \sum_{h=l}^{H-1} \mathrm{KL}\big(\pi(\cdot \mid s_h) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid s_h)\big) = 0$, so that (3.3) degenerates to SFT. An illustration can be found in Figure 1 (top middle).

³In GRPO (Shao et al., 2024), we have $J_{\mathrm{value}}\big((s_h, a_h)_{h=0}^{H-1}\big) := \frac{1}{H} \sum_{h'=0}^{H-1} \min\Big\{ \frac{\pi(a_{h'} \mid s_{h'})}{\pi_{\mathrm{old}}(a_{h'} \mid s_{h'})} \hat{A}_{h'},\ \mathrm{clip}\Big(\frac{\pi(a_{h'} \mid s_{h'})}{\pi_{\mathrm{old}}(a_{h'} \mid s_{h'})}, 1-\epsilon, 1+\epsilon\Big) \hat{A}_{h'} \Big\}$, where $\pi$ is the current policy, $\pi_{\mathrm{old}}$ is the policy at the previous step, and $\hat{A}_{h'}$ is the estimated advantage value in GRPO.

It is noteworthy that after adopting the additional log-likelihood term, UFT's performance matches that of SFT-RFT for small models (cf. Figure 5). This suggests that UFT raises the ceiling of RFT by enabling the model to acquire new knowledge during post-training.

4 Theoretical Justification

In this section, we provide a theoretical justification for UFT. First, we show that the lower bound on RFT's sample complexity grows exponentially ($\Omega(B^H)$) as the tree height (reasoning length) increases. Second, we show that UFT may find the solution within a polynomial number of samples ($O(B H^5 \log B)$), an exponential improvement in the tree height $H$ in sample complexity. Next, we define the sub-optimality gap in reward, which is the difference between the rewards for correct and incorrect solutions.

Definition 4.1 (Sub-Optimality Gap). There is a sub-optimality gap $\Delta > 0$ between the reward of optimal and suboptimal nodes. Formally, for any leaf node $s \in \mathcal{S}_H$ with reward $R(s) < \max_{s' \in \mathcal{S}_H} R(s')$, we have

$$R(s) \le \max_{s' \in \mathcal{S}_H} R(s') - \Delta. \quad (4.1)$$

In this paper, there are only three possible outcomes for $R(s)$, i.e., no reward (incorrect format), format reward, and accuracy reward. Therefore, the sub-optimality gap is

$$\Delta = (\text{accuracy reward}) - (\text{format reward}) = 1.0 - 0.1 = 0.9. \quad (4.2)$$
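The three-valued outcome reward above can be written as a small sketch (the checker callables are hypothetical placeholders for the dataset-specific format and answer checks, not part of the paper):

```python
ACCURACY_REWARD = 1.0   # correct format and correct answer
FORMAT_REWARD = 0.1     # correct format, wrong answer
NO_REWARD = 0.0         # incorrect format

def reward(response, is_format_correct, is_answer_correct):
    """Outcome reward with only three possible values."""
    if not is_format_correct(response):
        return NO_REWARD
    return ACCURACY_REWARD if is_answer_correct(response) else FORMAT_REWARD

# The sub-optimality gap of Definition 4.1:
DELTA = ACCURACY_REWARD - FORMAT_REWARD   # 0.9
```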
Next, we give the lower bound on RFT's sample complexity for achieving a 50% pass@1 success rate.⁴

Theorem 4.2 (Lower Bound). For any integers $H \ge 1$, $B \ge 2$, and any RFT algorithm, there exists a problem with height $H$ and branching factor $B$ that satisfies the following: to achieve a 50% pass@1 success rate, the algorithm needs to explore at least

$$\frac{B^H}{4} \quad (4.3)$$

nodes in $\mathcal{S}_H$. Moreover, when there are multiple nodes in $\mathcal{S}_H$ representing the correct solutions, e.g., $K \ge 1$ of them, any algorithm needs to explore at least $\frac{B^H}{4K}$ nodes in $\mathcal{S}_H$.

The proof constructs a set of problems with different correct solutions, which cannot be distinguished before exploring sufficiently many nodes in $\mathcal{S}_H$. The details can be found in Appendix C. Furthermore, the traditional lower bounds in reinforcement learning (Jin et al., 2018; Domingues et al., 2021) are built on the stochastic transitions of the Markov decision process, but the search tree's transitions are deterministic, which requires a different construction. Theorem 4.2 implies that when the reward is sparse, such as when $K$ is a constant, learning the optimal policy takes a number of iterations exponential in the height of the tree. This also justifies why long reasoning is generally difficult (Chai et al., 2025; Chen et al., 2025). In the following, we
will show that UFT exponentially improves the sample complexity. The full algorithm can be found in Algorithm 2.

Theorem 4.3 (Informal). When $\beta$ is small enough, Algorithm 2 obtains a 50% pass@1 success rate when the algorithm explores

$$O\!\left( \frac{B H^5 (\log B)^2}{\Delta^2} \right) \quad (4.4)$$

nodes in $\mathcal{S}_H$.

⁴The probability of reaching the correct answer when sampling a single trajectory.

[Figure 7: An illustration of the accuracy on the test datasets (Countdown, MATH(3,4,5), Logic) of Qwen2.5-0.5B. Base is the base model without fine-tuning. R³ (Xi et al., 2024) trains the model with RFT and a uniform distribution over all hint lengths. SFT-RFT refers to training a supervised fine-tuned model with RFT, and UFT is our algorithm.]

The formal version is deferred to Appendix E. Note that the 50% pass@1 threshold in both Theorem 4.2 and Theorem 4.3 can be adjusted arbitrarily, and doing so only affects the sample complexity by a constant factor. From Theorem 4.3, we observe that the dependence on $H$ is reduced from $B^H$ to $H^5$, representing an exponential improvement enabled by the use of hints. Moreover, the $\Delta^2$ in the denominator implies that the difference between the accuracy reward and the format reward should be large for fast convergence, which is also supported by empirical studies (Shao et al., 2024; Pan et al., 2025; Zeng et al., 2025).

5 Experiments

In this section, we present the experimental results of UFT. We demonstrate several key properties of UFT: (i) When the model is small ($\le$1B) and SFT outperforms RFT, UFT's performance matches that of SFT. (ii) When the model is large ($\sim$3B) and RFT outperforms SFT due to better generalization, UFT's performance matches that of RFT (and sometimes even outperforms it, cf. Table 2).
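The exploration bottleneck of Theorem 4.2 underlies property (i): an uninformed search over the $B^H$ leaves rarely finds a correct one quickly. This intuition can be checked with a small Monte Carlo sketch (our illustration, not part of the paper's proof):

```python
import random

def explorations_until_success(B, H, K, rng):
    """Explore distinct leaves of a height-H, branching-B tree uniformly
    at random until one of the K hidden correct leaves is found."""
    n_leaves = B ** H
    correct = set(rng.sample(range(n_leaves), K))
    order = list(range(n_leaves))
    rng.shuffle(order)
    for t, leaf in enumerate(order, start=1):
        if leaf in correct:
            return t

rng = random.Random(0)
B, H, K = 3, 5, 1   # 3**5 = 243 leaves, one of them correct
avg = sum(explorations_until_success(B, H, K, rng) for _ in range(2000)) / 2000
# On average roughly (B**H + 1) / (K + 1) = 122 explorations are needed,
# i.e., a constant fraction of all B**H leaves, matching the B^H/(4K) scaling.
```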
[Figure 6: Qwen2.5-0.5B's cumulative average success rate for exploring the correct answer at each step when trained on Logic.]

In the experiments, we train Qwen2.5-0.5B, Qwen2.5-1.5B, and Qwen2.5-3B (Qwen et al., 2025), as well as Llama-3.2-1B and Llama-3.2-3B (Grattafiori et al., 2024), on Countdown (Wikipedia contributors, 2025; Pan et al., 2025), MATH(3,4,5) (only levels 3-5 included) (Hendrycks et al., 2021; Zeng et al., 2025), and the Knights and Knaves logic puzzle (Logic) (Xie et al., 2025).

5.1 The Memorization of UFT

As shown in Figure 7, when the model is small, the improvement from RFT is marginal, since the model rarely explores the correct answer. As shown in Figure 6, when training Qwen2.5-0.5B on Logic, RFT rarely explores the correct answer, while UFT finds it at every single timestep. Compared to R³, where hints are also applied, UFT outperforms it since UFT (i) gradually shifts the distribution toward a hint length of zero, and (ii) maximizes the

[Figure 8: An illustration of the accuracy on the test datasets (Countdown, MATH(3,4,5), Logic) of Qwen2.5-3B. Base refers to the base model without fine-tuning.]

log-likelihood on
hints to encode information about the solution in the gradients. The proximity between the performance of UFT and SFT-RFT also supports the conclusion that UFT helps the model memorize the solution when the model's initial capacity is not enough to solve it.

5.2 The Generalization of UFT

As shown in Figure 8, when the model is larger and its prior knowledge gained from pretraining is sufficient for reasoning, UFT generalizes as well as RFT. In contrast, SFT and SFT-RFT perform worse, since SFT leads to overfitting. These experiments show that UFT automatically adapts to model size and benefits from the advantages of both SFT and RFT.

5.3 UFT Helps LLMs Learn New Knowledge

In Gandhi et al. (2025), it was found that Llama-3.2-3B's improvement through RFT is marginal compared to that of Qwen2.5-3B. This is because Llama gains less reasoning-related knowledge, e.g., backtracking and subgoal setting, from pretraining. In Figure 9, we can see that UFT significantly improves the performance of Llama-3.2. In Countdown, even Llama-3.2-1B outperforms Llama-3.2-3B fine-tuned by RFT after the same number of steps (250 steps). This supports the claim that UFT introduces new knowledge to the model, whereas RFT only helps the model utilize its existing knowledge (Yue et al., 2025).

6 Conclusion and Limitations

This paper proposes a novel fine-tuning framework, UFT, which unifies SFT and RFT. Empirically, we show that UFT outperforms both SFT and RFT in general. Specifically, under UFT, small models tend to memorize while large models generalize. Theoretically, we prove that UFT achieves an exponential speed-up compared to RFT.
However, throughout the paper, we use only the human-annotated solutions in the dataset and GRPO as the reinforcement learning algorithm.

[Figure 9: The comparison of Llama-3.2-1B/3B's behavior on Countdown/MATH/Logic when applying RFT/UFT. In Countdown, the dotted line is the accuracy of Llama-3.2-3B after 250 steps of RFT reported in Gandhi et al. (2025).]

In the future, it would be interesting to explore the incorporation of advanced SFT and RFT techniques into UFT, for instance, using long chains of thought generated by large models (Muennighoff et al., 2025; Gandhi et al., 2025) for SFT, or choosing other reinforcement learning algorithms such as REINFORCE++ (Hu, 2025) and DAPO (Yu et al., 2025) as the reinforcement learning component of UFT.

7 Acknowledgement

The authors would like to thank Jacob Andreas, Chanwoo Park, and Kaiqing Zhang for their valuable discussions. The authors would also like to thank the support of the Siebel Scholarship and NSF Award CCF-2443068.

References

Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy
gradient methods: Optimality, approximation, and distribution shift. Journal of Machine Learning Research, 22(98):1–76, 2021.

Yekun Chai, Haoran Sun, Huang Fang, Shuohuan Wang, Yu Sun, and Hua Wu. MA-RLHF: Reinforcement learning from human feedback with macro actions. International Conference on Learning Representations (ICLR), 2025.

Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567, 2025.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. SFT memorizes, RL generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and Aixin Liu et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Dongsheng Ding, Kaiqing Zhang, Tamer Basar, and Mihailo Jovanovic. Natural policy gradient primal-dual method for constrained Markov decision processes. Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.

Omar Darwiche Domingues, Pierre Ménard, Emilie Kaufmann, and Michal Valko. Episodic reinforcement learning in finite MDPs: Minimax lower bounds revisited. 2021.

Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307, 2025.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and Arun Rao et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.

Jian Hu. REINFORCE++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I Jordan. Is Q-learning provably efficient? Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.

Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. International Conference on Machine Learning (ICML), 2002.

Pawel Ladosz, Lilian Weng, Minwoo Kim, and Hyondong Oh. Exploration in deep reinforcement learning: A survey. Information Fusion, 85:1–22, 2022.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. International Conference on Learning Representations (ICLR), 2024.

Mingyang Liu. On solving larger games: Designing new algorithms adaptable to deep reinforcement learning. Master's thesis, Massachusetts Institute of Technology, 2025.

Mingyang Liu, Asuman E. Ozdaglar, Tiancheng Yu, and Kaiqing Zhang. The power of regularization in solving extensive-form games. International Conference on Learning Representations (ICLR),
2023.

Mingyang Liu, Gabriele Farina, and Asuman Ozdaglar. A policy-gradient approach to solving imperfect-information games with iterate convergence. arXiv preprint arXiv:2408.00751, 2024.

Zichen Liu, Changyu Chen, Wenjun Li, Tianyu Pang, Chao Du, and Min Lin. There may not be aha moment in R1-zero-like training — a pilot study. https://oatllm.notion.site/oat-zero, 2025. Notion Blog.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. International Conference on Learning Representations (ICLR), 2017.

Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024.

Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans. On the global convergence rates of softmax policy gradient methods. International Conference on Machine Learning (ICML), 2020.

Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.

Jiayi Pan, Junjie Zhang, Xingyao Wang, Lifan Yuan, Hao Peng, and Alane Suhr. TinyZero. https://github.com/Jiayi-Pan/TinyZero, 2025. Accessed: 2025-01-24.

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, and Keming Lu et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2025.

Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for LLM reasoning. International Conference on Learning Representations (ICLR), 2025.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. HybridFlow: A flexible and efficient RLHF framework. arXiv preprint arXiv:2409.19256, 2024.

Taiwei Shi, Yiyang Wu, Linxin Song, Tianyi Zhou, and Jieyu Zhao. Efficient reinforcement finetuning via adaptive curriculum learning. arXiv preprint arXiv:2504.05520, 2025.

Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan, and Feng Zhang. FastCuRL: Curriculum reinforcement learning with progressive context extension for efficient training R1-like reasoning models. arXiv preprint arXiv:2503.17287, 2025.

Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023.

Liang Wen, Yunke Cai, Fenrui Xiao, Xin He, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, et al. Light-R1: Curriculum SFT, DPO and RL for long CoT from scratch and beyond. arXiv preprint arXiv:2503.10460, 2025.

Wikipedia contributors. Countdown (game show). https://en.wikipedia.org/wiki/Countdown_(game_show), 2025.

Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, Wei He, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, et al. Training large language models
for reasoning through reverse curriculum reinforcement learning. International Conference on Machine Learning (ICML), 2024.

Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. Logic-RL: Unleashing LLM reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768, 2025.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. arXiv preprint arXiv:2412.01981, 2024.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? arXiv preprint arXiv:2504.13837, 2025.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025.

Han Zhong, Zikang Shan, Guhao Feng, Wei Xiong, Xinle Cheng, Li Zhao, Di He, Jiang Bian, and Liwei Wang. DPO meets PPO: Reinforced token optimization for RLHF. arXiv preprint arXiv:2404.18922, 2024.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. LIMA: Less is more for alignment. Annual Conference on Neural Information Processing Systems (NeurIPS), 2023.
A Related Work

In this section, we introduce related work on SFT, RFT, and curriculum learning for reasoning.

Supervised Fine-Tuning (SFT) for Reasoning. Different SFT methods for enhancing reasoning capability usually differ in the source of the collected reasoning traces. Zeng et al. (2025) use traditional SFT, i.e., learning from the human-annotated problem solutions. In contrast, Gandhi et al. (2025); Muennighoff et al. (2025) utilize long chain-of-thought solutions generated by large models, such as Claude and DeepSeek-R1 (DeepSeek-AI et al., 2025). On the other hand, Yuan et al. (2023); Xie et al. (2025) utilize rejection sampling fine-tuning: the model generates multiple reasoning traces, and the one that leads to the correct answer is selected for further fine-tuning. In this paper, we use human annotations as the SFT data (traditional SFT), as it is sufficient for our purpose and keeps the focus on our main contribution (unifying SFT and RFT).

Reinforcement Fine-Tuning (RFT) for Reasoning. RFT for reasoning can be categorized into process supervision and outcome supervision. Process supervision assigns a reward to each step of a long reasoning trace (Lightman et al., 2024), evaluating whether each step is correct. The main drawback of process supervision is that step-by-step feedback data is costly to prepare. On the other hand, outcome supervision assigns a single reward to the entire
trace (DeepSeek-AI et al., 2025; Zeng et al., 2025; Yu et al., 2025), e.g., whether the trace yields the correct answer to a math problem. Furthermore, Wang et al. (2023); Yuan et al. (2024); Zhong et al. (2024); Luo et al. (2024); Setlur et al. (2025) learn a step-by-step reward model from a collection of reasoning traces with outcome rewards, which avoids the cost of preparing step-by-step data. In this paper, due to the efficiency and simplicity of outcome supervision, we focus on the comparison with RFT using outcome supervision.

Curriculum Learning for Reasoning. Existing curriculum reinforcement learning for reasoning mainly focuses on utilizing a collection of problems with varying difficulties (Wen et al., 2025; Shi et al., 2025; Song et al., 2025). These methods train the model on problems of gradually increasing difficulty, where the difficulty is determined by predefined criteria, such as the length of the successful reasoning trace (Song et al., 2025) or the success rate of baseline models (Shi et al., 2025; Wen et al., 2025). However, such methods fail when the problems in the dataset are homogeneous in difficulty. In contrast, Xi et al. (2024) propose a curriculum learning method that concatenates the problem with a slice of the solution (a hint), so that the difficulty is determined by the hint length. However, Xi et al. (2024) use a uniform distribution over all possible hint lengths, which misaligns with the distribution of interest (zero hint length). On the other hand, UFT designs a hint length scheduler that smoothly reduces the hint length to zero. Furthermore, UFT adds an additional log-likelihood term for the hint to the objective function, which helps the model acquire new knowledge more efficiently and raises the ceiling of reinforcement learning (cf. Figure 5).
B Experiment Details

In this section, we introduce the details of the experiments, including the pseudo-code of UFT (Appendix B.1), the hyperparameters used (Appendix B.2), and additional experiment results (Appendix B.3).

Algorithm 1: Unified Fine-Tuning
Hyperparameters: KL-penalty coefficient $\beta$, total number of steps $T$, number of steps with hint $T_{\mathrm{hint}}$, low/high probabilities $p_{\mathrm{low}}/p_{\mathrm{high}}$ for hint sampling, and hint length $L$
Input: reference policy parameter $\theta_{\mathrm{ref}}$
Initialization: $\theta^{(0)} \leftarrow \theta_{\mathrm{ref}}$
1: for $t = 0, 1, \dots, T-1$ do
2:   Sample a batch of problems $\mathcal{B}$
3:   $\mathcal{D} \leftarrow \{\}$
4:   for $(Q, S, A) \in \mathcal{B}$ do   // for each (question, solution, answer) pair
5:     if $t < T_{\mathrm{hint}}$ then
6:       $p^{(t)} \leftarrow p_{\mathrm{low}} + \frac{1}{2}\big(p_{\mathrm{high}} - p_{\mathrm{low}}\big)\Big(1 + \cos\frac{(t+1)\pi}{T_{\mathrm{hint}}}\Big)$  (B.1)   // cosine annealing
7:       Sample $l^{(t)} \sim \mathrm{Binomial}\big(\min\{L, \mathrm{len}(S)\}, p^{(t)}\big)$
8:     else
9:       $l^{(t)} \leftarrow 0$
10:    end
11:    $\mathcal{D} \leftarrow \mathcal{D} \cup \{Q + S[:l^{(t)}]\}$   // concatenate the question with the partial solution (hint) and add it to $\mathcal{D}$
12:  end
13:  Run the reinforcement learning algorithm on $\mathcal{D}$ with the objective function (3.3)
14: end

B.1 Algorithm

This section presents the pseudo-code of UFT in Algorithm 1. In lines 4-9, we sample the hint length for each (question, solution, answer) pair in the sampled data batch $\mathcal{B}$. In lines 11-13, we concatenate the question with the partial solution of length $l^{(t)}$ and feed the result to a reinforcement learning algorithm (such as GRPO) with the objective function (3.3).

B.2 Cost and Implementation Details

The project costs
roughly $10,000 GPU hours. The experiments are based on VERL (Sheng et al., 2024) and TinyZero (Pan et al., 2025). The hyperparameters for training on different datasets are listed in Table 1. The omitted hyperparameters follow the default values of VERL (Sheng et al., 2024).

| Data     | Training Batch Size        | 256 |
|          | Validation Batch Size      | 1312 |
|          | Mini-batch Size            | 64 |
|          | Hint Length $L$            | 5 |
| Training | $\beta$                    | 0.001 |
|          | $T$                        | 500 |
|          | $T_{\mathrm{hint}}$        | 300 |
|          | Number of Rollouts         | 4 |
|          | Context Window (Prompt)    | Countdown: 256; MATH(3,4,5): 1024; Logic: 1024 |
|          | Context Window (Response)  | 1024 |
|          | $p_{\mathrm{low}}$         | 0.05 |
|          | $p_{\mathrm{high}}$        | 0.95 |
|          | SFT Epochs                 | 5 |
| Reward   | Accuracy Reward            | 1.0 |
|          | Format Correctness Reward  | 0.1 |
|          | Incorrect Reward           | 0.0 |

Table 1: The hyperparameters for training on different datasets. The other parameters follow the default parameters of VERL (Sheng et al., 2024).

B.3 Additional Results

Figure 10 shows the responses of the model trained via different algorithms. For Qwen2.5-0.5B, UFT's response aligns with the solution better than RFT's. For Qwen2.5-3B, UFT generates a longer reasoning trace and exhibits skills such as verification (Gandhi et al., 2025), while SFT-RFT does not.

Table 2 shows the accuracy results across different datasets. For clarity, we report the average accuracy over models trained on three datasets: Countdown, MATH(3,4,5), and Logic. For smaller models such as Qwen2.5-0.5B, SFT-RFT achieves an accuracy of 7.28%, compared to only 3.25% for RFT. In contrast, UFT achieves 9.45% accuracy, outperforming both. For larger models such as Qwen2.5-3B, SFT-RFT achieves 17.34% accuracy, which is significantly lower than RFT's 32.15%. However, UFT still performs competitively, reaching 30.93% and closely matching RFT.

In summary, UFT combines the strengths of both SFT and RFT. When the model is small and memorization plays a key role, UFT matches or exceeds SFT's performance. When the model is large and generalization becomes more important, UFT benefits similarly to RFT, achieving comparable accuracy.
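The hint scheduler of Algorithm 1 (lines 5-10, with the Table 1 defaults $p_{\mathrm{low}}=0.05$, $p_{\mathrm{high}}=0.95$, $T_{\mathrm{hint}}=300$, $L=5$) can be sketched in Python. This is a minimal illustration under our own naming; the actual implementation is built on VERL/TinyZero:

```python
import math
import random

def hint_proportion(t, t_hint, p_low=0.05, p_high=0.95):
    """Cosine-annealed hint proportion p(t), cf. (B.1) in Algorithm 1."""
    if t >= t_hint:
        return 0.0   # lines 8-9: no hint after T_hint steps
    return p_low + 0.5 * (p_high - p_low) * (1 + math.cos((t + 1) / t_hint * math.pi))

def hinted_prompt(question, solution_steps, t, t_hint, hint_len=5, rng=random):
    """Lines 7 and 11: sample l ~ Binomial(min(hint_len, len(S)), p(t))
    and concatenate the question with the first l solution steps."""
    p = hint_proportion(t, t_hint)
    n = min(hint_len, len(solution_steps))
    l = sum(rng.random() < p for _ in range(n))
    return question + "".join(solution_steps[:l])
```

Early in training $p(t)$ is close to $p_{\mathrm{high}}$, so most of the solution is revealed; by step $T_{\mathrm{hint}}$ it has decayed to $p_{\mathrm{low}}$, and afterwards no hint is given at all, matching the evaluation distribution.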
C Proof of Theorem 4.2

Theorem 4.2 (Lower Bound). For any integers $H \ge 1$, $B \ge 2$, and any RFT algorithm, there exists a problem with height $H$ and branching factor $B$ that satisfies the following: to achieve a 50% pass@1 success rate, the algorithm needs to explore at least

$$\frac{B^H}{4} \quad (4.3)$$

nodes in $\mathcal{S}_H$. Moreover, when there are multiple nodes in $\mathcal{S}_H$ representing the correct solutions, e.g., $K \ge 1$ of them, any algorithm needs to explore at least $\frac{B^H}{4K}$ nodes in $\mathcal{S}_H$.

[Figure 10: Responses of Qwen2.5-0.5B/3B trained by different algorithms, on the Countdown questions "Use 39, 84, 29 to get 94" (solution, UFT response, RFT response) and "Use 95, 32, 45, 21 to get 87" (SFT-RFT response, UFT response).]

Proof. Proving the lower bound on exploration is equivalent to the following: find the maximum $T > 0$ such that any algorithm will fail to learn the optimal policy with probability at least 0.5 within $T$ explorations. Consider the $\binom{B^H}{K}$ possible trees, each associated with a distinct subset of $\mathcal{S}_H$ of size $K$, where that subset represents the correct solutions for that specific tree. At the beginning, we pick an instance from all those possible trees uniformly at
random. During each exploration, the algorithm requests the reward at a node in $\mathcal{S}_H$. Let $s^{(1)}, s^{(2)}, \ldots, s^{(T)}$ be the leaf nodes reached at timesteps $1, 2, \ldots, T$, which are random variables depending on the randomness of the algorithm. Let $\mathcal{S}_H^* := \{s \in \mathcal{S}_H : R(s) = \max_{s' \in \mathcal{S}_H} R(s')\}$ be the set of nodes representing correct solutions. Note that given the construction of the instances, $|\mathcal{S}_H^*| = K$. Then, the probability of reaching one of the correct solutions in $\mathcal{S}_H^*$ is

$$\Pr\Big(\{s^{(t)}\}_{t=1}^{T} \cap \mathcal{S}_H^* \neq \emptyset\Big) = \sum_{t=1}^{T} \Pr\Big(s^{(t)} \in \mathcal{S}_H^* \,\Big|\, \{s^{(s)}\}_{s=1}^{t-1} \cap \mathcal{S}_H^* = \emptyset\Big) \Pr\Big(\{s^{(s)}\}_{s=1}^{t-1} \cap \mathcal{S}_H^* = \emptyset\Big) \leq \sum_{t=1}^{T} \Pr\Big(s^{(t)} \in \mathcal{S}_H^* \,\Big|\, \{s^{(s)}\}_{s=1}^{t-1} \cap \mathcal{S}_H^* = \emptyset\Big).$$

Given that we pick $\mathcal{S}_H^*$ uniformly at random,

$$\Pr\Big(s^{(t)} \in \mathcal{S}_H^* \,\Big|\, \{s^{(s)}\}_{s=1}^{t-1} \cap \mathcal{S}_H^* = \emptyset\Big) = \frac{|\mathcal{S}_H^*|}{B^H - t + 1}.$$

Therefore,

$$\Pr\Big(\{s^{(t)}\}_{t=1}^{T} \cap \mathcal{S}_H^* \neq \emptyset\Big) \leq \sum_{t=1}^{T} \frac{|\mathcal{S}_H^*|}{B^H - t + 1}.$$

When $T \leq \frac{B^H}{4|\mathcal{S}_H^*|}$, we have

$$\Pr\Big(\{s^{(t)}\}_{t=1}^{T} \cap \mathcal{S}_H^* \neq \emptyset\Big) \leq \sum_{t=1}^{T} \frac{|\mathcal{S}_H^*|}{B^H - t + 1} \overset{(i)}{\leq} \sum_{t=1}^{T} \frac{2|\mathcal{S}_H^*|}{B^H} = \frac{2|\mathcal{S}_H^*|\,T}{B^H} \leq \frac{1}{2},$$

where $(i)$ uses the fact that $t \leq T \leq \frac{B^H}{4|\mathcal{S}_H^*|} \leq \frac{B^H}{2}$. Therefore, within $\frac{B^H}{4|\mathcal{S}_H^*|}$ explorations, the algorithm fails to find the correct answer with probability at least 0.5.

| Model | Algorithm | MATH(3,4,5) | AIME24 | AMC | Countdown | Logic | MATH500 | Minerva | Olympiad | GSM8k | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-0.5B | Base | 3.03 | 0.00 | 0.00 | 0.00 | 0.00 | 1.73 | 0.74 | 0.30 | 7.66 | 1.55 |
| | SFT | 4.92 | 0.00 | 1.61 | 11.20 | 1.87 | 2.13 | 2.08 | 1.33 | 13.07 | 4.46 |
| | RFT | 3.78 | 0.00 | 3.21 | 8.30 | 0.00 | 2.47 | 3.80 | 2.57 | 3.87 | 3.25 |
| | SFT-RFT | 8.69 | 0.00 | 3.61 | 17.45 | 7.07 | 2.07 | 4.41 | 2.12 | 16.45 | 7.28 |
| | R3 | 9.86 | 0.00 | 6.43 | 9.99 | 4.20 | 3.33 | 5.02 | 3.11 | 20.09 | 7.36 |
| | UFT | 13.18 | 0.00 | 6.83 | 17.15 | 4.87 | 5.40 | 5.76 | 2.77 | 24.59 | 9.45 |
| Qwen2.5-1.5B | Base | 24.51 | 3.33 | 4.82 | 0.20 | 2.20 | 18.27 | 4.41 | 5.48 | 60.96 | 14.29 |
| | SFT | 12.47 | 0.00 | 5.62 | 13.48 | 5.33 | 6.40 | 4.53 | 2.62 | 29.74 | 9.36 |
| | RFT | 24.77 | 2.22 | 9.24 | 27.86 | 3.00 | 10.53 | 6.86 | 6.47 | 45.69 | 16.08 |
| | SFT-RFT | 15.72 | 1.11 | 6.83 | 20.51 | 11.13 | 5.00 | 4.41 | 4.59 | 30.02 | 11.70 |
| | R3 | 28.12 | 2.22 | 13.65 | 23.57 | 11.47 | 14.93 | 7.48 | 9.43 | 49.79 | 18.65 |
| | UFT | 34.08 | 3.33 | 14.86 | 24.54 | 10.07 | 20.87 | 8.33 | 9.68 | 66.46 | 22.23 |
| Qwen2.5-3B | Base | 31.45 | 0.00 | 13.25 | 3.81 | 5.60 | 24.53 | 4.78 | 7.70 | 57.85 | 17.13 |
| | SFT | 24.32 | 0.00 | 10.04 | 15.07 | 10.20 | 16.80 | 5.27 | 5.19 | 45.54 | 15.25 |
| | RFT | 45.74 | 4.44 | 24.90 | 34.08 | 30.33 | 31.27 | 12.25 | 15.65 | 80.84 | 32.15 |
| | SFT-RFT | 26.50 | 1.11 | 9.64 | 17.61 | 19.60 | 14.07 | 5.76 | 6.77 | 48.22 | 17.34 |
| | R3 | 44.01 | 2.22 | 21.29 | 27.12 | 24.80 | 28.00 | 10.91 | 14.57 | 70.20 | 28.02 |
| | UFT | 47.04 | 3.33 | 29.32 | 31.38 | 26.07 | 29.73 | 12.99 | 14.17 | 74.63 | 30.93 |
| Llama-3.2-1B | Base | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.08 | 0.01 |
| | SFT | 1.07 | 0.00 | 0.80 | 13.41 | 3.67 | 0.00 | 0.74 | 0.25 | 1.87 | 2.49 |
| | RFT | 0.94 | 0.00 | 2.41 | 0.00 | 0.00 | 0.47 | 0.49 | 0.84 | 1.42 | 0.80 |
| | SFT-RFT | 0.42 | 0.00 | 0.00 | 18.68 | 8.33 | 0.00 | 1.23 | 0.20 | 0.48 | 3.29 |
| | R3 | 1.53 | 0.00 | 1.61 | 9.90 | 0.13 | 0.33 | 2.94 | 0.99 | 1.49 | 2.20 |
| | UFT | 1.17 | 0.00 | 0.00 | 17.87 | 7.40 | 0.07 | 2.82 | 0.74 | 1.14 | 3.52 |
| Llama-3.2-3B | Base | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | SFT | 2.54 | 0.00 | 0.40 | 14.68 | 6.13 | 0.00 | 1.72 | 0.54 | 7.08 | 3.85 |
| | RFT | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.05 | 0.00 | 0.01 |
| | SFT-RFT | 3.16 | 0.00 | 2.41 | 16.05 | 8.87 | 0.07 | 3.92 | 0.89 | 5.76 | 4.79 |
| | R3 | 2.93 | 0.00 | 3.21 | 17.55 | 9.93 | 0.87 | 3.06 | 1.04 | 5.16 | 5.03 |
| | UFT | 1.24 | 0.00 | 1.20 | 17.64 | 6.60 | 1.13 | 1.10 | 0.30 | 4.12 | 3.72 |

Table 2: Average performance of Qwen2.5-0.5/1.5/3B and Llama-3.2-1/3B across all three training datasets, Countdown, MATH(3,4,5), and Logic.

D Extended Theoretical Justifications

In this section, we introduce some additional notation in Appendix D.1 and then present the theoretically sound UFT in Appendix D.2.

D.1 Extended Preliminaries

Notation. For any vector $x \in \mathbb{R}^n$, let $x_i$ be its $i$-th element and $\|x\|_p$ be the $L_p$-norm, where $\|x\|$ denotes the $L_2$-norm by default. For any two vectors $x, y \in \mathbb{R}^n$, let $\langle x, y\rangle := \sum_{i=1}^{n} x_i \cdot y_i$ denote their inner product.

Softmax-Parameterized Policy. Algorithm 2 assumes the policy follows a softmax parameterization. Formally, the policy $\pi_{\theta}$ is controlled by $\theta \in \mathbb{R}^{|\mathcal{S}| \times B}$, such that for any $s \in \mathcal{S}$ and $a \in [B]$,

$$\pi_{\theta}(a \mid s) := \frac{\exp(\theta(s,a))}{\sum_{a'=1}^{B} \exp(\theta(s,a'))}. \quad \text{(D.1)}$$

The softmax-parameterized policy is also widely adopted in the literature (Mei et al., 2020; Agarwal et al., 2021; Ding et al., 2020) to sidestep the complexities of analyzing non-convex neural networks and to keep the focus on the learning algorithm itself.

D.2 Theoretically Sound UFT

The full algorithm is shown in Algorithm 2. In lines 2-3, we sample the hint length and a trajectory starting from the hint. In lines 6-10, we estimate Q-values by sampling an additional trajectory for each state-action pair, which can greatly reduce the variance of sampling. In lines 13-14, we compute the objective function and update the parameters by gradient ascent. In lines 16-17, we estimate the expected reward of each intermediate policy and return the best one. Note that Algorithm 2 differs slightly from the UFT shown in Algorithm 1. While Algorithm 1 leaves the choice of the reinforcement learning algorithm unspecified, Algorithm 2 explicitly defines the trajectory rolling mechanism and update rule for concrete theoretical analysis.
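The softmax parameterization (D.1) and the advantage-centering step that Algorithm 2 performs on each sampled path (lines 6-11) are both easy to state in code. The sketch below is an illustrative reimplementation, not the paper's code; the rollouts that produce the sampled Q-values are abstracted into a plain list of rewards:

```python
import math

def softmax_policy(theta, s, B):
    """pi_theta(. | s) under the softmax parameterization (D.1).

    theta maps (state, action) pairs to logits theta(s, a); missing
    entries default to 0. B is the branching factor (number of actions).
    """
    logits = [theta.get((s, a), 0.0) for a in range(B)]
    m = max(logits)  # shift for numerical stability (softmax is shift-invariant)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def group_advantages(q_hat, pi):
    """Centering step of lines 9-11 in Algorithm 2:
    A~(s, a) = Q~(s, a) - sum_a' pi(a' | s) * Q~(s, a')."""
    baseline = sum(p * q for p, q in zip(pi, q_hat))
    return [q - baseline for q in q_hat]

# With an all-zero logit table, the policy is uniform over B = 4 actions.
pi = softmax_policy({}, "root", 4)
# Suppose the extra rollouts for actions 0 and 2 reached a correct leaf (reward 1).
adv = group_advantages([1.0, 0.0, 1.0, 0.0], pi)
```

Because the advantages are centered by the policy-weighted mean, they average to zero under the current policy, which is what lets the baseline reduce variance without biasing the policy gradient.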
Further, Algorithm 2 assumes a softmax-parameterized policy, whereas Algorithm 1 imposes no constraints on the policy network architecture.

E Proof of Theorem 4.3

In this section, for notational simplicity, we use $\pi^{(t)}$ to denote $\pi_{\theta^{(t)}}$ for any $t \in \{0, 1, \cdots, T\}$. Moreover, for any $t \in [T]$, we define $\widetilde{A}^{(t-1)}(s,a) = \widetilde{Q}^{(t-1)}(s,a) = 0$ for those nodes $s$ off the sampled path $\{s_h^{(t)}\}_{h=l^{(t)}}^{H}$ at timestep $t$.

Theorem E.1 (Formal). Consider Algorithm 2. When $\beta \leq \frac{\Delta}{12(H+1)^2(\log B + 2\|\theta_{\mathrm{ref}}\|_\infty)}$, the pass@1 accuracy $\Pr_{\pi_{\theta^{(\tilde{t}^*)}}}(\text{pass@1})$ of policy $\pi_{\theta^{(\tilde{t}^*)}}$ satisfies

$$\Pr_{\pi_{\theta^{(\tilde{t}^*)}}}(\text{pass@1}) \geq 0.5, \quad \text{(E.1)}$$

when

$$T = \left(\frac{(H+1)^2\big(\log B + 2\|\theta_{\mathrm{ref}}\|_\infty + 7\big)}{\Delta/12}\right)^2 \quad \text{(E.2)}$$

and explores no more than $(BH+N)T$ leaf nodes in $\mathcal{S}_H$.

Proof. The analysis of the update rule consists of three steps: (i) use the concentration bound to obtain a high-probability bound on $\langle Q^{\pi^{(t-1)}}(s,\cdot),\, \pi^*(\cdot \mid s) - \pi^{(t-1)}(\cdot \mid s)\rangle$ (cf. Appendix E.1); (ii) convert the difference at each node into $V^* - V^{\pi^{(t-1)}}(s_{\mathrm{root}})$ via the regret decomposition lemma (cf. Appendix E.2); (iii) convert the bound on the expected reward into a bound on the success rate (cf. Appendix E.3).

Algorithm 2: Theoretically Sound Unified Fine-Tuning
Hyperparameters: Learning rate $\eta$, KL-penalty coefficient $\beta$, and total number of steps $T$
Input: Reference policy parameter $\theta_{\mathrm{ref}}$
Initialization: $\theta^{(0)} \leftarrow \theta_{\mathrm{ref}}$
1 for $t = 0, 1, \cdots, T-1$ do
2   Sample $l^{(t)} \sim \mathrm{Uniform}(0, 1, 2, \cdots, H-1, H)$ // In fact, any distribution with full support on $\{0, 1, 2, \cdots, H-1, H\}$ is fine. We choose the uniform distribution for
simplicity 3 Sample trajectory s(t) hH h=l(t)∼πθθθ(t), where s(t) l(t)=s∗ l(t) 4 forh=l(t),l(t)+1,· · ·H−1do 5 fora=1, 2,· · ·,Bdo // Group sampling 6 Sample trajectory s(t),a h′H h′=h+1∼πθθθ(t)starting from s(t),a h+1=T(s(t) h,a) 7 eQ(t) s(t) h,a ← R s(t),a H 8 end 9 fora=1, 2,· · ·,Bdo 10 eA(t) s(t) h,a ←eQ(t) s(t) h,a −∑B a=1πθθθ(t) a|s(t) h eQ(t) s(t) h,a 11 end 12 end //eA(t)(s,·)≡0for any soff the trajectory s(t) hH h=l(t) 13 J(t)←H−1 ∑ h=l(t)B ∑ a=1πθθθ(t) a|s(t) h eA(t) s(t) h,a −βH−1 ∑ h=l(t)KL πθθθ(t) · |s(t) h ∥πθθθref · |s(t) h +βl(t)−1 ∑ h=0logπθθθ(t)(a∗ h|s∗ h) 14 θθθ(t+1)←θθθ(t)+η∇πJ(t)(D.2) 15end 16Estimate eVπθθθ(t) (sroot) =1 N∑N n=1R es(t),n H by sampling trajectories es(t),n 0=srootand es(t),n hH h=0∼πθθθ(t), where N=72 log (14(T+1)) ∆2 17et∗=argmaxt∈{0,1,···,T}eVπθθθ(t) (sroot) Return: πθθθ(et∗) E.1 Concentration Bound For any height h∈{0}∪[H−1], state s∈ S h, and action a∈[B], we can define the Q-value of the state-action pair (s,a)∈ S × [B]when following policy πas Qπ(s,a):=Esh=s,(sh′)H h′=h∼π[R(sH)]. (E.3) 21 Then, for any s∈ S \ S Hand t∈[T], we have Eh eQ(t−1)(s,a)i =Pr s∈n s(t) hoH h=l(t) ·Qπ(t−1)(s,a), (E.4) where the expectation is taken over the probability of sampling trajectories in Algorithm 2. Next, we will introduce Lemma 5.3 in Liu et al. (2024). Proposition E.2. LetM,eM≥0be the constants such that f(t)(x)−f(t)(x′) ≤Mand ef(t)(x)−ef(t)(x′) ≤ eMfor any t∈[T]and x,x′∈ C, where Cis a convex set. If for any x∈ C, we have Eh ef(t)(x)|ef(1),ef(2),· · ·,ef(t−1)i =f(t)(x), and x(t)is deterministically influenced by ef(1),ef(2),· · ·,ef(t−1), then for any δ∈(0, 1)and x∈ C, we have Pr T ∑ t=1 f(t)(x)−f(t)(x(t)) ≤T ∑ t=1 ef(t)(x)−ef(t)(x(t)) + M+eMr 2Tlog1 δ! ≥1−δ. For any h<Hand s∈ S h, let f(t)(x) = Pr s∈n s(t−1) hoH h=l(t−1)D Qπ(t−1)(s,·),xE , where f(t):∆B→ [0, 1]since each element of Q(t−1)(s,·)is bounded by [0, 1]by definition. Therefore, Min Proposition E.2 is1. Similarly, let ef(t)(x) =D eQ(t−1)(s,·),xE and we have eM=1. 
Therefore, by (E.4), Proposition E.2, and Lemma E.3, for any δ∈(0, 1), with probability at least 1 −δ, we have T ∑ t=1Pr s∈n s(t) hoH h=l(t)D Qπ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤T ∑ t=1D eQ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E +2r 2Tlog1 δ (i)=T ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E +2r 2Tlog1 δ. (i)is because D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E =D eQ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E +B ∑ a=1π(t−1)(a|s)eQ(t−1)(s,a)B ∑ a=1 π∗(a|s)−π(t−1)(a|s) =D eQ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E . By the update rule of Algorithm 2, we have the following lemma. Lemma E.3. Consider Algorithm 2. For any node s∈ S \ S H, we have T ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤1 η+βT KL π∗(· |s)∥πθθθref(· |s) +2ηT. 22 The proof is postponed to Appendix E.4. Lemma E.3 gives us an upper bound on the accumulated difference between our policy π(t−1)and the optimal policy π∗. Therefore, T ∑ t=1Pr s∈n s(t) hoH h=l(t)D Qπ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤T ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E +2r 2Tlog1 δ ≤1 η+βT KL π∗(· |s)∥πθθθref(· |s) +2ηT+2r 2Tlog1 δ. E.2 Difference Decomposition Letµπ(s)be the probability of reaching state sfrom the root by following policy π. Hence, µπ(sroot) =1. For any s∈ S \ S Hand action a∈[B],µπ(T(s,a))can be recursively defined | https://arxiv.org/abs/2505.16984v1 |
as µπ(T(s,a))=µπ(s)·π(s,a). (E.5) In the following, we will introduce Lemma E.4, which is a special case of the regret decomposition lemma (Lemma 5.1) in Liu et al. (2023). Specifically, it is the regret decomposition lemma for a two-player zero-sum extensive-form game without chance nodes5, and the second player’s action sets at all nodes are of size 1. Lemma E.4. For any sequence of policies π(1),π(2),· · ·,π(T)and policy π, we have T ∑ t=1 Vπ(sroot)−Vπ(t)(sroot) =∑ s∈S\S Hµπ(s)T ∑ t=1D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E . Lemma E.4 can also be viewed as the performance difference lemma in reinforcement learning (Kakade and Langford, 2002) for a tree-shape Markov decision process. For completeness, we also provide the proof at the end of this section. By letting π(t)=π(t−1)for any t∈[T]andπ=π∗, we have T ∑ t=1 V∗−Vπ(t−1)(sroot) =∑ s∈S\S Hµπ∗(s)T ∑ t=1D Qπ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E (i)=∑ s∈{s∗ 0,s∗ 1,···,s∗ H−1}µπ∗(s)T ∑ t=1D Qπ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E =∑ s∈{s∗ 0,s∗ 1,···,s∗ H−1}T ∑ t=1µπ∗(s) Pr s∈n s(t) hoH h=l(t)Pr s∈n s(t) hoH h=l(t) ·D Qπ(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E . 5Chance nodes represent the randomness of the game, such as rolling a dice. 23 (i)uses the fact that π∗is deterministic such that µπ∗(s)>0only when s∈ s∗ 0,s∗ 1,· · ·,s∗ H . Since s(t) l(t)is sampled from s∗ 0,s∗ 1,· · ·,s∗ H uniformly, for any s∈ s∗ 0,s∗ 1,· · ·,s∗ H , we have Pr s∈n s(t) hoH h=l(t) ≥Pr s=s(t) l(t) =1 H+1. Therefore,µπ∗(s) Pr s∈n s(t) hoH h=l(t)≤H+1 and we have T ∑ t=1 V∗−Vπ(t−1)(sroot) ≤H−1 ∑ h=0µπ∗(s∗ h) Pr s∗ h∈n s(t) hoH h=l(t) 1 η+βT KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) +2ηT+2r 2Tlog1 δ! ≤(H+1)H−1 ∑ h=0 1 η+βT KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) +2ηT+2r 2Tlog1 δ! . Next, we can bound KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) by the following lemma. Lemma E.5. For any h∈{0, 1,· · ·,H−1}, we have KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) ≤logB+2 θθθref ∞. The proof is postponed to Appendix E.4. 
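Lemma E.5 admits a quick numerical sanity check: when the optimal policy is deterministic, the divergence reduces to $\log(1/\pi_{\theta_{\mathrm{ref}}}(a^* \mid s))$, which the bound $\log B + 2\|\theta_{\mathrm{ref}}\|_\infty$ dominates for any reference logits. A small illustrative check with arbitrary made-up logits:

```python
import math

def kl_onehot_vs_softmax(theta_ref, a_star):
    """KL(pi* || pi_ref) when pi* is one-hot at a_star and pi_ref is the
    softmax of the reference logits theta_ref. Only the a_star term of the
    KL sum survives, giving log(1 / pi_ref(a_star))."""
    z = sum(math.exp(t) for t in theta_ref)
    p_star = math.exp(theta_ref[a_star]) / z
    return math.log(1.0 / p_star)

theta_ref = [0.3, -0.7, 0.1, 0.5]  # arbitrary reference logits (B = 4)
bound = math.log(len(theta_ref)) + 2 * max(abs(t) for t in theta_ref)
kl = kl_onehot_vs_softmax(theta_ref, a_star=1)  # a_star = worst (lowest-logit) action
```

Here `kl` is about 2.23 while `bound` is about 2.79, consistent with the lemma even when the optimal action has the smallest reference logit.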
Therefore, by taking η=1√ T, we have T ∑ t=1 V∗−Vπ(t−1)(sroot) ≤(H+1)2 logB+2 θθθref ∞√ T+2√ T+2r 2Tlog1 δ! +βT(H+1)H−1 ∑ h=0KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) . Because V∗−Vπ(t−1)(sroot)≥0for any t∈[T], according to pigeon hole principle, there must exist t∗∈{0, 1, . . . , T}such that V∗−Vπ(t∗)(sroot) ≤(H+1)2 logB+2 θθθref ∞ +2+2q 2 log1 δ √ T +β(H+1)H−1 ∑ h=0KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) . 24 For any ϵ>β(H+1)∑H−1 h=0KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) , it takes (H+1)2 logB+2 θθθref ∞+2+2q 2 log1 δ ϵ−β(H+1)∑H−1 h=0KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) 2 iterations to satisfy V∗−Vπ(t∗)(sroot)≤ϵ. Recall that ∆>0is the sub-optimality gap. By picking ϵ=∆ 6,δ=1 8, and β≤∆ 12(H+1)2(logB+2∥θθθref∥∞), to getϵaccuracy with probability 1 −δ, we need T= (H+1)2 logB+2 θθθref ∞+7 ∆/12 2 iterations, which implies T≤ O H4(logB)2 ∆2 . Since O(B·H)leaf nodes are explored at each iteration, the number of leaf nodes explored during training is O(B·H·T)≤ O BH5(logB)2 ∆2 . E.3 Compute Probability To find t∗, we need to estimate Vπθθθ(t) for all t∈{0, 1,· · ·,T}by sampling trajectories. By sampling a trajectory from πθθθ(t), | https://arxiv.org/abs/2505.16984v1 |
we can get a random variable from Bernoulli Prcond π(t)(pass @ 1 ) representing whether the trajectory reaches the correct solution. Then, by Hoeffding’s inequality, by sampling N trajectories, we have Pr eVπθθθ(t) (sroot)−Vπθθθ(t) (sroot) ≤∆ 12 ≤2 exp −N∆2 72 (i)=1 7(T+1). (E.6) (i)is by definition of Nin Algorithm 2. By union bound, for any t∈{0, 1,· · ·,T}, eVπθθθ(t) (sroot)−Vπθθθ(t) (sroot) ≤ ∆ 12holds with probability at least 1 −T+1 7(T+1)=6 7. Therefore, Vπθθθ(et∗) (sroot)≥eVπθθθ(et∗) (sroot)−∆ 12≥eVπθθθ(t∗) (sroot)−∆ 12 ≥Vπθθθ(t∗) (sroot)−∆ 6≥V∗−ϵ−∆ 6=V∗−∆ 3. Recall that Prπ(et∗)(pass @ 1 )is the pass @ 1 accuracy of policy π(et∗). In the following, we will use Prcond as a shorthand of Pr · |Vπ(et∗)(sroot)≥V∗−∆ 3 . Prcond π(et∗)(pass @ 1 )=Prcond s0=sroot,(sh)H h=0∼π(et∗) R(sH) = max s′ H∈SHR(s′ H)! =Prcond s0=sroot,(sh)H h=0∼π(et∗)(R(sH) =V∗). 25 Furthermore, V∗−∆ 3≤Vπ(et∗)(sroot) =Es0=sroot,(sh)H h=0∼π(et∗)[R(sH)] ≤Prcond s0=sroot,(sh)H h=0∼π(et∗)(R(sH) =V∗)V∗ + 1−Prcond s0=sroot,(sh)H h=0∼π(et∗)(R(sH) =V∗) (V∗−∆). By combining all pieces together, we have Prcond π(et∗)(pass @ 1 )V∗+ 1−Prcond π(et∗)(pass @ 1 ) (V∗−∆) ≥V∗−∆ 3, which implies that Prcond π(et∗)(pass @ 1 )≥2 3. Finally, Prπ(et∗)(pass @ 1 ) ≥Prcond π(t∗)(pass @ 1 )Pr Vπ(t∗)(sroot)≥V∗−ϵ Pr Vπ(et∗)(sroot)≥Vπ(t∗)(sroot)−∆ 6 ≥2 3(1−δ)6 7=1 2. E.4 Omitted Proofs Lemma E.3. Consider Algorithm 2. For any node s∈ S \ S H, we have T ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤1 η+βT KL π∗(· |s)∥πθθθref(· |s) +2ηT. Proof. We will introduce the following one-step analysis of the update rule first. Lemma E.6. For any node s∈ S \ S Hand t∈[T], we have ηD eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) +ηβKL π∗(· |s)∥πθθθref(· |s) . The proof is presented later in this section. Therefore, ηD eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) +ηβKL π∗(· |s)∥πθθθref(· |s) . 
26 By adding ηD eA(t−1)(s,·),π(t)(· |s)−π(t−1)(· |s)E on both sides, we have ηD eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E (i) ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) +ηD eA(t−1)(s,·),π(t)(· |s)−π(t−1)(· |s)E +ηβKL π∗(· |s)∥πθθθref(· |s) . By Hölder’s inequality, we have D eA(t−1)(s,·),π(t)(· |s)−π(t−1)(· |s)E ≤ eA(t−1)(s,·) ∞· π(t)(· |s)−π(t−1)(· |s) 1 ≤2η eA(t−1)(s,·) 2 ∞+1 8η π(t)(· |s)−π(t−1)(· |s) 2 1 (i) ≤2η+1 4ηKL π(t)(· |s)∥π(t−1)(· |s) . (i)uses eA(t−1)(s,·) ∞≤1 and Pinsker’s inequality. Therefore, ηD eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) +2η2+ηβKL π∗(· |s)∥πθθθref(· |s) . By telescoping, we have ηT ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤KL π∗(· |s)∥π(0)(· |s) −KL π∗(· |s)∥π(T)(· |s) +2η2T+ηβKL π∗(· |s)∥πθθθref(· |s) T (i) ≤KL π∗(· |s)∥π(0)(· |s) +2η2T+ηβKL π∗(· |s)∥πθθθref(· |s) T. (i)uses the non-negativity of KL-divergence. By dividing ηon both sides, we have T ∑ t=1D eA(t−1)(s,·),π∗(· |s)−π(t−1)(· |s)E ≤1 ηKL π∗(· |s)∥π(0)(· |s) +2ηT+βKL π∗(· |s)∥πθθθref(· |s) T (i)=1 ηKL π∗(· |s)∥πθθθref(· |s) +2ηT+βKL π∗(· |s)∥πθθθref(· |s) T. (i) is because π(0)(· |s) =πθθθref(· |s)by the initialization of Algorithm 2. Lemma E.4. For any sequence of policies π(1),π(2),· · ·,π(T)and policy π, we have T ∑ t=1 Vπ(sroot)−Vπ(t)(sroot) =∑ s∈S\S Hµπ(s)T ∑ t=1D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E . 27 Proof. The | https://arxiv.org/abs/2505.16984v1 |
lemma can be proved by induction. When H=1, Lemma E.4 holds since Qπ(t)(sroot,a) = R(T(sroot,a))=Qπ(sroot,a)for any action a∈[B]and t∈[T]. Therefore, ∑ s∈S\S HT ∑ t=1µπ(s)D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E =T ∑ t=1µπ(sroot) ⟨Qπ(sroot,·),π(· |sroot)⟩−D Qπ(t))(sroot,·),π(t)(· |sroot)E =T ∑ t=1 Vπ(sroot)−Vπ(t)(sroot) . For any two nodes s,s′, we write s⊑s′ifsis an ancestor of s′in the search tree. Consider when Lemma E.4 holds for any search tree of height H≤H0. Then, for H=H0+1, we have ∑ s∈S\S HT ∑ t=1µπ(s)D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E =T ∑ t=1µπ(sroot)D Qπ(t)(sroot,·),π(· |sroot)−π(t)(· |sroot)E +B ∑ a=1∑ s∈S\S H: T(sroot,a)⊑sT ∑ t=1µπ(s)D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E . Then, according to the induction hypothesis, for any a∈[B], since the subtree rooted at T(sroot,a)is a tree of height H0, we have ∑ s∈S\S H: T(sroot,a)⊑sT ∑ t=1µπ(s)D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E =π(a|sroot)T ∑ t=1 Vπ(T(sroot,a))−Vπ(t)(T(sroot,a)) . Moreover, by definition, we have Qπ(t)(sroot,a) =Vπ(t)(T(sroot,a)). Therefore, ∑ s∈S\S HT ∑ t=1µπ(s)D Qπ(t)(s,·),π(· |s)−π(t)(· |s)E =T ∑ t=1µπ(sroot)D Qπ(t)(sroot,·),π(· |sroot)−π(t)(· |sroot)E +B ∑ a=1π(a|sroot)T ∑ t=1 Vπ(T(sroot,a))−Vπ(t)(T(sroot,a)) =T ∑ t=1B ∑ a=1 π(a|sroot)−π(t)(a|sroot) Vπ(t)(T(sroot,a)) +Vπ(sroot)T−B ∑ a=1π(a|sroot)T ∑ t=1Vπ(t)(T(sroot,a)) =T ∑ t=1 Vπ(sroot)−Vπ(t)(sroot) . 28 Therefore, Lemma E.4 also holds when H=H0+1 and thus we can conclude the proof. Lemma E.6. For any node s∈ S \ S Hand t∈[T], we have ηD eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) +ηβKL π∗(· |s)∥πθθθref(· |s) . Proof. Lethbe the height of s. There are three possibilities on ∇π(· |s)J(t−1): (I)eA(t−1)(s,·) +βlogπ(t−1)(· |s)− βlogπθθθref(· |s) +β1; (II) A one-hot vector with only index a∗ hbeβ π(t−1)(· |s); (III) 0. Then, we will show that (D.2) is equivalent to the following in different cases. Lemma E.7. 
For any t∈{1, 2,· · ·,T},h∈{0, 1,· · ·,H−1}, and node s∈ S h, (D.2) is equivalent to the following, π(t)(· |s) = argmin π(· |s)∈∆BD −eA(t−1)(s,·),π(· |s)E +βKL π(· |s)∥πθθθref(· |s) +1 ηKL π(· |s)∥π(t−1)(· |s) (I) π(t)(· |s) = argmin π(· |s)∈∆BD −∇π(· |s)J(t−1),π(· |s)E +1 ηKL π(· |s)∥π(t−1)(· |s) , (II, III) where (I), (II), (III) stand for the cases when ∇π(· |s)J(t−1)= eA(t−1)(s,·) +βlogπ(t−1)(· |s)−βlogπθθθref(· |s) +β1(I) A one-hot vector with only index a∗ hbeβ π(t−1)(· |s)(II) 0. (III) Then, we will introduce a special case of Lemma 3.0.3 from Liu (2025). Lemma E.8. For any node s, vector g∈RB,η>0,β0≥0, policy x(0)∈∆B, and reference policy xref∈∆B, let x(1)=argmin x∈∆B ⟨g,x⟩+β0KL x∥xref +1 ηKL x∥x(0) . Then, for any x(2)∈∆B, we have ηβ0KL x(1)∥xref −ηβ0KL x(2)∥xref +ηD g,x(1)−x(2)E (E.7) ≤KL x(2)∥x(0) −(1+ηβ0)KL x(2)∥x(1) −KL x(1)∥x(0) . Consider (I) first. For any node s∈ S \ S Hand t∈[T], by taking x(2)=π∗(· |s),x(1)=π(t)(· |s),x(0)= π(t−1)(· |s),xref=πθθθref(· |s),g=−eA(t−1)(s,·)andβ0=β, we have ηβKL π(t)(· |s)∥πθθθref(· |s) −ηβKL π∗(· |s)∥πθθθref(· |s) +ηD eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −(1+ηβ)KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) . 29 Further, by the non-negativity of KL-divergence, we have ηD eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) +ηβKL π∗(· |s)∥πθθθref(· |s) . Consider (II). For any node s∈ S \ S Hand t∈[T], by taking x(2)=π∗(· |s),x(1)=π(t)(· |s),x(0)= π(t−1)(· |s),xref=πθθθref(· |s),g=−∇π(· | https://arxiv.org/abs/2505.16984v1 |
|s)J(t−1)andβ0=0 in Lemma E.8, we have ηD ∇π(· |s)J(t−1),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) . Moreover, D ∇π(· |s)J(t−1),π∗(· |s)−π(t)(· |s)E =βπ∗(a∗ h|s∗ h)−π(t)(a∗ h|s∗ h) π(t−1)(a∗ h|s∗ h) (i) ≥0 (ii)=D eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E . (i)uses the fact that π∗(a∗ h|s∗ h) =1 and (ii) uses eA(t−1)(s,·) =0by definition. Therefore, D eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E ≤KL π∗(· |s)∥π(t−1)(· |s) −KL π∗(· |s)∥π(t)(· |s) −KL π(t)(· |s)∥π(t−1)(· |s) . (E.8) For (III), which is soff the sampled trajectory at step t−1, by definition we have eA(t−1)(s,·) =0. Then, D ∇π(· |s)J(t−1),π∗(· |s)−π(t)(· |s)E =0=D eA(t−1)(s,·),π∗(· |s)−π(t)(· |s)E , and (E.8) also holds. Lemma E.5. For any h∈{0, 1,· · ·,H−1}, we have KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) ≤logB+2 θθθref ∞. Proof. For any h∈{0}∪[H−1], since π∗is deterministic, let a∗ hbe the action such that π∗(a∗ h|s∗ h) =1. Then, KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) =B ∑ a=1π∗(a|s∗ h)logπ∗(a|s∗ h) πθθθref(a|s∗ h)=log1 πθθθref(a∗ h|s∗ h). By definition, we have πθθθref(a∗ h|s∗ h) =exp θref s∗ h,a∗ h ∑B a=1exp θref s∗ h,a≥exp − θθθref ∞ Bexp θθθref ∞=exp −2 θθθref ∞ B. Therefore, KL π∗(· |s∗ h)∥πθθθref(· |s∗ h) ≤log B·exp 2 θθθref ∞ =logB+2 θθθref ∞. 30 Lemma E.7. For any t∈{1, 2,· · ·,T},h∈{0, 1,· · ·,H−1}, and node s∈ S h, (D.2) is equivalent to the following, π(t)(· |s) = argmin π(· |s)∈∆BD −eA(t−1)(s,·),π(· |s)E +βKL π(· |s)∥πθθθref(· |s) +1 ηKL π(· |s)∥π(t−1)(· |s) (I) π(t)(· |s) = argmin π(· |s)∈∆BD −∇π(· |s)J(t−1),π(· |s)E +1 ηKL π(· |s)∥π(t−1)(· |s) , (II, III) where (I), (II), (III) stand for the cases when ∇π(· |s)J(t−1)= eA(t−1)(s,·) +βlogπ(t−1)(· |s)−βlogπθθθref(· |s) +β1(I) A one-hot vector with only index a∗ hbeβ π(t−1)(· |s)(II) 0. (III) Proof. The Lagrangian ofD −eA(t−1)(s,·),π(· |s)E +βKL x∥xref +1 ηKL π(· |s)∥π(t−1)(· |s) is L π(t)(· |s) :=D −eA(t−1)(s,·),π(t)(· |s)E +βKL π(t)(· |s)∥πθθθref(· |s) +1 ηKL π(t)(· |s)∥π(t−1)(· |s) +λ B ∑ a=1π(t)(a|s)−1! . 
For any action a∈[B], by setting∂L(π(t)(· |s)) ∂π(t)(a|s)=0, we have −eA(t−1)(s,·) +βlog π(t)(a|s) πθθθref(a|s)! +β+1 ηlog π(t)(a|s) π(t−1)(a|s)! +1 η+λ=0, which implies that π(t)(a|s) =exp −ηβ−1−ηλ+ηeA(t−1)(s,·)+ηβlog πθθθref(a|s) +log(π(t−1)(a|s)) 1+ηβ! . By further setting∂L(π(t)(· |s)) ∂λ=0, we have B ∑ a=1π(t)(a|s) =1. Therefore, by combining all pieces together, we have π(t)(a|s)(i)=exp ηeA(t−1)(s,·) +ηβlog πθθθref(a|s) +log π(t−1)(a|s) 1+ηβ /Z ∝exp ηeA(t−1)(s,·) +ηβlog πθθθref(a|s) +log π(t−1)(a|s) 1+ηβ ∝expη 1+ηβeA(t−1)(s,·) +ηβ 1+ηβθref(s,a) +1 1+ηβθ(t−1)(s,a) . In(i),Z=∑B a=1exp ηeA(t−1)(s,·)+ηβlog πθθθref(a|s) +log(π(t−1)(a|s)) 1+ηβ! . For (II), (III), the proof can be concluded by setting β=0and changing eA(t−1)(s,·)to∇π(· |s)J(t−1). 31 | https://arxiv.org/abs/2505.16984v1 |
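The closed-form update just derived, $\pi^{(t)}(a \mid s) \propto \exp\big(\tfrac{\eta}{1+\eta\beta}\widetilde{A}^{(t-1)}(s,a) + \tfrac{\eta\beta}{1+\eta\beta}\theta_{\mathrm{ref}}(s,a) + \tfrac{1}{1+\eta\beta}\theta^{(t-1)}(s,a)\big)$, can be implemented directly on a single state's logit vector. A minimal sketch (illustrative, not the paper's code):

```python
import math

def kl_regularized_update(adv, theta_prev, theta_ref, eta, beta):
    """One KL-regularized mirror-descent step in closed form (case (I)):
    the new logits mix the advantage estimate, the reference logits,
    and the previous logits, all scaled by 1 / (1 + eta * beta)."""
    c = 1.0 + eta * beta
    logits = [
        (eta * a + eta * beta * t_ref + t_prev) / c
        for a, t_ref, t_prev in zip(adv, theta_ref, theta_prev)
    ]
    # Normalize back to a distribution (softmax with a stability shift).
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Fixed-point check: with zero advantages and theta_prev == theta_ref,
# the update leaves the policy unchanged.
pi = kl_regularized_update([0.0, 0.0], [1.0, -1.0], [1.0, -1.0], eta=0.1, beta=5.0)
```

With nonzero advantages, the same routine shifts probability toward actions whose sampled returns beat the policy-weighted baseline, while the $\beta$ term keeps the iterate anchored to the reference policy.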
arXiv:2505.16986v1 [cs.CL] 22 May 2025

T1: A Tool-Oriented Conversational Dataset for Multi-Turn Agentic Planning

Amartya Chakraborty∗, Paresh Dashore∗, Nadia Bathaee∗, Anmol Jain∗, Anirban Das, Shi-Xiong Zhang, Sambit Sahu, Milind Naphade, Genta Indra Winata∗
Capital One
{amartya.chakraborty, paresh.dashore, nadia.bathaee}@capitalone.com
{anmol.jain, genta.winata}@capitalone.com

Abstract

Large Language Models (LLMs) have demonstrated impressive capabilities as intelligent agents capable of solving complex problems. However, effective planning in scenarios involving dependencies between API or tool calls, particularly in multi-turn conversations, remains a significant challenge. To address this, we introduce T1, a tool-augmented, multi-domain, multi-turn conversational dataset specifically designed to capture and manage inter-tool dependencies across diverse domains. T1 enables rigorous evaluation of agents' ability to coordinate tool use across nine distinct domains (four single-domain and five multi-domain) with the help of an integrated caching mechanism for both short- and long-term memory, while supporting dynamic replanning, such as deciding whether to recompute or reuse cached results. Beyond facilitating research on tool use and planning, T1 also serves as a benchmark for evaluating the performance of open-source language models. We present results powered by T1-AGENT, highlighting their ability to plan and reason in complex, tool-dependent scenarios.

1 Introduction

Leveraging external tools with Large Language Models (LLMs) to solve diverse conversational tasks has emerged as a promising direction in the development of intelligent agents [15]. Despite recent advances, task-oriented LLM-based dialogue systems perform poorly over long conversational contexts [3], and there remains a lack of comprehensive resources for training and evaluating multi-turn, multi-domain conversational agents that emphasize complex user needs.
Existing datasets primarily focus on single-turn conversations for planning tasks, such as executing APIs or code [12, 20], and do not capture realistic multi-turn scenarios that require reasoning over long contexts, coordination across multiple tools, and adherence to complex constraints. Tool calling in realistic scenarios often involves interdependent tools, where the correctness and efficiency of task completion depend heavily on the order and context in which tools are invoked. An effective agent must therefore understand when, which, and in what sequence to call tools in order to successfully complete complex tasks. To address this gap, we introduce T1, a new dataset and evaluation framework for assessing agent performance in multi-turn dialogues, with a particular focus on tool usage and reasoning about inter-tool dependencies. The dataset spans multiple domains and features complex, goal-oriented interactions between users and a travel assistant. Alongside the dataset, we propose T1-AGENT, an agent designed to interpret nuanced user intents and generate executable code using a predefined set of tools.

∗The authors contributed equally.

Preprint.

Figure 1: Illustrative example from the T1 dataset. This example showcases a multi-domain scenario involving both flights and hotels, where the user is planning a trip and attempting to book relevant services. The dialogue is constructed by retrieving entities from a knowledge base, and tool calls are executed using a predefined toolbox, simulating realistic, tool-augmented agent behavior.

T1 is specifically designed to evaluate the ability of LLM-based agents to plan tool use effectively and leverage a caching mechanism to efficiently reuse previously retrieved information. To support this, we incorporate dedicated tools
for accessing and managing the cache. Our contributions can be summarized as follows:

• We introduce T1, a comprehensive multi-turn dataset consisting of 13.5k dialogues designed to evaluate tool-using, LLM-based agents across nine key domains, comprising four single-domain and five multi-domain settings. The dataset covers a wide range of interaction scenarios, including single-domain, mixed-domain, and fully multi-domain conversations. It incorporates 14 distinct tools, enabling realistic and fine-grained assessment of agent capabilities in complex, tool-driven dialogue tasks.

• To enhance the complexity and realism of the evaluation, the dataset includes cross-domain tasks and interdependent tool calls, requiring agents to reason about tool selection and execution order within context. This evaluation framework assesses the ability of LLM-based agents to think critically, reason effectively, and make context-aware decisions.

• We evaluate our dataset using an LLM-based agent, T1-AGENT, a code-generation system built on open-source language models and equipped with a caching mechanism for improved performance. This architecture enables scalable evaluation and provides a robust, efficient framework for tool-using agents.

• We will publicly release our code and dataset to facilitate future research.

2 T1 Dataset

T1 is a dataset specifically designed to evaluate LLM-based agents on tool usage and complex planning tasks over multi-turn conversational context. This dataset simulates multi-turn conversations spanning four single-domain and five multi-domain settings: flights, restaurants, hotels, attractions, flights-hotels, hotels-restaurants, hotels-attractions, flights-hotels-attractions, and flights-hotels-restaurants. Planning tasks are formulated as code, where function calls to external tools are used to accomplish specific goals.

Figure 2: T1 generates data by populating delexicalized entities with corresponding entries from the knowledge base.
2.1 Tasks and Notations We define a dialogue Das an alternating sequence of assistant and user turns: D={A1, U1, A2, U2, . . . , A n}, (1) where each Airepresents an assistant turn and Uia user turn. The dialogue always starts with an assistant turn and proceeds in a strictly alternating order. We provide a set of tools T, where each tool t∈ T encapsulates logic to perform a specific function. These tools can be categorized as follows: •Domain-specific tools : Designed to handle operations tied to a particular application domain. •Interdependent tools : Used to identify or reason about dependencies across domains. •Generic tools : Domain-independent or auxiliary utilities applicable across tasks. Dataset Deployed Human Annotated Execution Result Multi-turn Context Multi-Domain Tools Tool Planning Evaluation Planning Tool Planning APIBank [4] ✓ ✓ ✓ ✓ APIBench [8] ✓ GAIA [6] ✓ ✓ GTA [18] ✓ ✓ ✓ ✓ m&m’s [5] ✓ ✓ ✓ ✓ ToolBench [11] ✓ ✓ Toolformer [12] ✓ ✓ ✓ TravelPlanner [20] ✓ ✓ ✓ ✓ T1 ✓ ✓ ✓ ✓ ✓ Table 1: Comparison of datasets for the LLM-based agent systems. 2.2 Dataset Construction We construct our dataset by manually collecting data from Wikipedia to gather entities and metadata. For this task, we define four domains: flights, hotels, restaurants, and attractions. Additionally, we compile a list of 128 airports and 321 cities within the United States, along with up to 15 neighborhoods for each city. Some information—such as airline names, hotel names, and | https://arxiv.org/abs/2505.16986v1 |
hotel star ratings—is synthesized to avoid inaccuracies and to prevent the LLMs from relying on their internal knowledge of real-world named entities.

2.2.1 Ontology

We have a total of five ontologies: one for each of the four defined domains and one for cities. For each ontology, we curated a list of relevant key attributes, such as the airline or number of layovers for a flight, the cuisine of a restaurant, or the number of stars and customer rating for a hotel. For each of these attributes, we defined the possible values and then used the ontology to generate synthetic data for flights, hotels, and restaurants, which would subsequently be used by the tools we defined.

| | Flights | Hotels | Restaurants | Attractions | 2 Domains | 3 Domains | Common | Total |
|---|---|---|---|---|---|---|---|---|
| # Attributes | 35 | 43 | 21 | 7 | N/A | N/A | N/A | 106 |
| # Tools | 2 | 2 | 2 | 2 | 1 | N/A | 5 | 14 |
| # Dialogues | 1.5k | 1.5k | 1.5k | 1.5k | 4.5k | 3k | N/A | 13.5k |
| Avg. # turns | 8.2 | 8.0 | 9.0 | 6.0 | 11.3 | 9.7 | N/A | N/A |

Table 2: Dataset statistics detailing the number of attributes, tools, dialogues, and the average number of turns, categorized by each individual domain, multi-domain, and common categories. "N/A" indicates that the metric is not applicable in the given context.

Flights. The ontology includes the airline, flight class (economy, business, and first), the number of layovers (ranging from 0 to 2 stops), the duration of a layover (ranging from 1 to 6 hours), and a list of the possible airports to depart from and arrive at. For the full table, see Table 15.

Hotels. The ontology includes the number of stars (ranging from 1 to 5), the customer rating of the hotel (ranging from 1.0 to 5.0), the cost of the hotel, as well as whether or not the hotel includes a number of amenities, such as a gym or pool. For the full table, see Table 16.

Restaurants.
The ontology includes the type of cuisine, the customer rating of the restaurant (ranging from 1.0 to 5.0), the price per person, and whether or not the restaurant serves particular dietary options such as vegetarian, vegan, or halal. For the full list of dietary restrictions taken into consideration, see Table 17.

Attractions. The ontology includes the type of attraction, which is one of the following: touristy, culinary, historical, scenic, social, art, cultural, guided, and sporting.

Cities. The ontology includes a list of cities in the United States (US) that was collected through Wikipedia. For each city, we then extracted up to 15 neighborhoods as well as the approximate geographical coordinates of each neighborhood.

2.2.2 Knowledge Bases

Attractions. From the list of 321 cities, we collect up to 15 attractions for each of 85 cities using Llama-3.3 70B Instruct, and human annotators conduct quality assurance to ensure data correctness. We also collect the city neighborhood for each attraction as well as its geographical coordinates. In total, 728 attractions were collected from the 85 cities for this dataset.

Flights. For
flights, there are a total of 128 airports that are used to generate synthetic flight data. Each generated flight has an airline randomly selected from the ontology, and its departure and arrival airports are randomly selected from the list of airports in the ontology. Additionally, the departure time is randomly generated; the arrival time, however, is computed from the geographical distance between the departure and arrival airports, assuming an average flight speed of 450 miles per hour. In total, 480,410 synthetic flights were generated as part of this dataset.

Hotels. Hotels are generated for all 321 cities in the ontology. For a particular city, a neighborhood is assigned, and synthetic latitude and longitude coordinates are generated for each hotel, as the coordinates are used for distance computations. Each hotel is also given a star rating and a synthetic customer rating that is correlated with the amenities offered by the establishment as well as the price per night. 47,589 hotels were generated as part of this dataset.

Restaurants. Restaurants are generated for all 321 cities in the ontology. Just like for hotels, a restaurant in a particular city is assigned a neighborhood, and synthetic latitude and longitude coordinates are generated. Each restaurant is also given a synthetically generated user rating, a cuisine provided by the ontology, the average cost per person, and flags for whether particular dietary options are supported. 17,975 restaurants were generated as part of this dataset.

2.2.3 Data Annotation and Quality Assurance

To ensure high-quality and natural data, we employ five human annotators, with each data sample reviewed by both an annotator and a quality assurance (QA) reviewer. The annotators were selected to represent a diverse set of backgrounds and perspectives, while maintaining a high technical bar.
All annotators have at least a Master’s degree in Computer Science and demonstrated proficiency in Python, enabling them to handle complex annotation tasks requiring logical reasoning and scripting. The QA specialist also has a strong background in programming. Annotators are assigned a category of templates and are responsible for writing the appropriate code using the tools defined for this project. Afterward, the QA reviewer evaluates the annotated code and provides feedback, which is used to make necessary corrections and improvements. 2.2.4 Dialogue Generation We create a total of nine data categories, as discussed in Section 4.1. The data construction follows a three-step process: first, we generate templates with placeholder values; second, we annotate the templates with code using the provided tools; and third, we programmatically fill in the placeholder values for each template. Generating Dialogue Templates. For each of the 9 dialogue categories, we generate 60 dialogue templates using Llama-3.3 70B Instruct. These templates are then reviewed and refined by human annotators to ensure accurate dialogue flow, as well as high coherence and fluency. The model is prompted to generate synthetic dialogues for both single and multi-domain. Each template consists of a conversation between the assistant and user. To learn more about the prompt used, go to | https://arxiv.org/abs/2505.16986v1 |
Appendix I and J. Additionally, the templates contain placeholders for attributes such as a city or neighborhood name, the cuisine of a restaurant, the rating of a hotel, or the type of an attraction. As the dataset concerns the travel domain, some placeholder values are tied to a particular city, such as the neighborhood (<CITY_x_NEIGHBORHOOD_x>), airport name (<CITY_x_AIRPORT_x>), hotel name (<CITY_x_HOTEL_NAME_x>), and restaurant name (<CITY_x_RESTAURANT_NAME_x>).

Template Lexicalization. Within each dialogue category, 25 dialogues are generated from each template by filling the placeholder values with actual values present in the ontology. In addition to the dialogues, the ground-truth code that is part of the annotation also has its placeholder values replaced with actual values. The dataset is split into three partitions: training, validation, and test. To limit data contamination across these partitions, each city used to fill the placeholders is assigned to a single partition and used only there. For example, if the city of Boston is assigned to the training partition, it will never appear in any validation or test dialogue. Additionally, for fields such as hotel check-in and check-out dates or departure and return flight dates, care is taken to ensure that all dates and times follow chronological order, so there is no instance where a hotel check-in date falls after the check-out date.

Generated Dialogue and Ground Truth Validation. Once the placeholders are filled for both the dialogues and the ground-truth code samples, each block of code is executed to identify any potential annotation errors, such as improper use of a placeholder or a syntax error. This validation helps the team correct annotations and ensures that the resulting code is runnable and correct.
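As a concrete illustration, the lexicalization step described above (replacing `<CITY_x_...>` placeholders with ontology values) can be sketched as follows. The ontology fragment and entity names below are invented for the example and are not the dataset's actual values:

```python
import re

# Illustrative ontology fragment; the real ontology is far larger and the
# entity names here are invented for this sketch.
ONTOLOGY = {
    "CITY_1": {
        "name": "Boston",
        "NEIGHBORHOOD_1": "Back Bay",
        "AIRPORT_1": "Logan International",
        "HOTEL_NAME_1": "Harborview Inn",
        "RESTAURANT_NAME_1": "North End Trattoria",
    },
}

# Matches <CITY_x> (bare city) and <CITY_x_FIELD_y> (city-scoped attribute).
PLACEHOLDER = re.compile(r"<(CITY_\d+)(?:_([A-Z_]+\d+))?>")

def lexicalize(template: str, ontology: dict = ONTOLOGY) -> str:
    """Fill <CITY_x> and <CITY_x_FIELD_y> placeholders from the ontology."""
    def fill(m: re.Match) -> str:
        city, field = m.group(1), m.group(2)
        entry = ontology[city]
        return entry["name"] if field is None else entry[field]
    return PLACEHOLDER.sub(fill, template)

dialogue = "Find me a hotel near <CITY_1_NEIGHBORHOOD_1> in <CITY_1>, maybe <CITY_1_HOTEL_NAME_1>."
print(lexicalize(dialogue))
# Find me a hotel near Back Bay in Boston, maybe Harborview Inn.
```

Because the cities assigned to each partition are disjoint, running the same substitution with partition-specific ontologies would preserve the train/validation/test separation described above.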
3 T1-Agent

We build an LLM-powered T1-Agent to evaluate and simulate our T1 agentic dataset and measure its performance across three tasks: information seeking, parameter extraction, and tool calling.

3.1 Information Seeking

Each tool has a mandatory set of parameters that must be provided before the tool can run successfully. Information seeking is the task of gathering this mandatory set of parameters for the respective tools. We want to evaluate the agent's ability to understand both the intent of the user's query and which parameters to ask the user about as a follow-up. Figure 1 shows an example of how the agent uses information seeking. When the user inquires about a flight from New York to San Jose and a hotel in San Jose, the agent understands that the user would like to search for flights but still needs to provide the departure date of the flight and the check-in date of the hotel. Thus, the agent infers that, in a follow-up turn, it needs to ask the user for these details.

3.2 Parameter Extraction

After the agent understands which tools to call and which parameters to collect, it works to extract those parameters from the user dialogue. In Figure 1, when
the user mentions that they want a flight from New York to San Jose and a stay of 3 nights, the agent is able to extract the starting and ending airport cities for the flight as well as the number of nights to stay at the hotel. However, neither search_flights nor search_hotels can be called yet, since the departure date of the flight and the check-in and check-out dates of the hotel have not been provided.

3.3 Tool Calling

Once the agent understands the necessary tool(s) to call and has been provided the necessary parameters for each tool, it makes the calls. In Figure 1, once the user provides the departure date for the flight to San Jose, the agent calls the search_flights tool with the departure and arrival cities as well as the departure date. Next, before the agent can call the search_hotels tool, it must first compute the check-out date for the hotel. The check-in date provided is May 9, and the user has stated that the hotel stay lasts 3 nights. Hence, the adjust_date tool is called to compute the check-out date of the hotel. Once this is calculated, the agent calls the search_hotels tool with the check-in and check-out dates as well as the city of the hotel. Additionally, both the flights and hotels returned in the results are saved to the cache for possible further use.

3.4 Data Caching

We introduce a data caching mechanism that enables T1-Agent to reuse the outputs of earlier tool calls when handling subsequent user requests. This reduces redundant computation and improves response efficiency in multi-turn interactions. After each user turn, any search results retrieved by a tool are cached for potential reuse in later turns. The save_to_cache tool is used to store these results. As illustrated in Figure 1, after the user provides the departure date for a flight to San Jose, the agent calls both search_flights and search_hotels, and then stores the results using the save_to_cache tool.
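Concretely, the Figure 1 flow above might correspond to agent-generated code along these lines. The tool stubs, dates, and cache keys below are illustrative assumptions for this sketch, not the benchmark's actual tool implementations:

```python
from datetime import date, timedelta

# --- Illustrative stand-ins for the benchmark's tools (assumed signatures). ---
def search_flights(start_city, end_city, departure_date):
    # The real tool queries the synthetic flight database.
    return [{"airline": "DemoAir", "start": start_city,
             "end": end_city, "departure_date": departure_date}]

def search_hotels(city, check_in, check_out):
    # The real tool queries the synthetic hotel database.
    return [{"hotel_name": "Sample Hotel", "city": city,
             "check_in": check_in, "check_out": check_out}]

def adjust_date(start, days):
    """Shift an ISO date string by `days` days (used to derive the check-out date)."""
    return (date.fromisoformat(start) + timedelta(days=days)).isoformat()

CACHE = {}
def save_to_cache(key, value):
    CACHE[key] = value

# --- A plan the agent might emit for the Figure 1 example (year assumed). ---
flights = search_flights("New York", "San Jose", "2025-05-09")
check_out = adjust_date("2025-05-09", days=3)   # 3-night stay -> "2025-05-12"
hotels = search_hotels("San Jose", "2025-05-09", check_out)
save_to_cache("flights_nyc_sjc", flights)
save_to_cache("hotels_sjc", hotels)
```

The key point is the dependency chain: adjust_date must run before search_hotels, and both search results are cached so later turns can filter them instead of repeating the searches.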
Later, when the user requests flights from a specific airline, the agent retrieves the cached flight results using the get_results_from_cache tool and filters them with the filter_flights tool based on the user's airline preference. This caching approach avoids unnecessary API calls by reusing existing search results and applying filtering when appropriate.

Example: Refining a Flight Search. Suppose a previous user query fetched flights from NYC to Boston, and the result was cached under the key "flights_nyc_bos". Later, the user asks to see only flights priced under $500. The LLM, using the cache summary, generates the following plan:

    flights = get_results_from_cache(key="flights_nyc_bos")
    affordable_flights = filter_flights(prior_result=flights, budget=500)

This illustrates how the LLM composes new logic by combining previously retrieved results with downstream tools, without repeating expensive API operations. Our approach enables more efficient, coherent, and stateful plan generation in realistic, multi-turn assistant conversations.

4 Experimental Setup

4.1 Datasets

Each category contains 60 dialogue templates and a pool of 54 cities used to populate placeholder values. We split both the templates and cities into 25% for training,
9% for validation, and 66% for testing, resulting in 15 training templates, 5 validation templates, and 40 test templates per category. Similarly, 13 cities are allocated for training, 4 for validation, and 37 for testing. Dialogues are randomly sampled within each partition. After partitioning, a script fills all placeholders in each template using the corresponding cities from the assigned split, ensuring no data leakage or overlap of templates and entities between the training and test sets. Each template is instantiated into 25 unique dialogues by substituting the placeholders with entities defined in the ontology, yielding a total of 1,500 fully generated dialogues.

4.2 Domain Adaptation using SFT

We perform simple instruction tuning with the training set to show that performance on such complex conversational tool calling can be improved over zero- or few-shot prompting. We train a Llama 3.1 8B Instruct model for one epoch on the training dataset, which is structured as a list of (prompt, completion) pairs. The SFT then follows standard next-token prediction with cross-entropy loss. We use LoRA [2] on 8 A100 SXM GPUs instead of full fine-tuning. For reproducibility, we use the widely adopted Hugging Face TRL library [17] and include the exact command to replicate the training in Section P.

4.3 Inference Procedure

During inference, for each user turn in a dialogue, the T1-Agent generates executable Python code that fulfills the user request at that point in the conversation. Before generating any code, the agent checks an execution cache to determine whether a similar request has been previously resolved. If a cached result is available, the agent writes code that fetches and reuses the cached object to prevent redundant computation and tool invocation. To do this, the agent constructs a prompt that includes the conversation history along with the current user turn.
Instead of including the full execution cache, which can be large and token-intensive, the prompt incorporates a summary of the cached results. These summaries are generated using a deterministic, rule-based function that transforms each cached result into a concise representation. This approach significantly reduces the token load in the prompt, making it feasible to include relevant past results without exceeding model input limits. This design enables the agent to consider past outcomes and reuse relevant information during code generation. Even when the user's current query differs slightly from earlier queries, the agent can fetch a prior result from the cache and use it as a starting point for the current turn. By introducing summarization at the planning stage, we shift caching responsibilities from the tool-execution layer to the agent's decision-making process. This allows the agent to selectively reuse and adapt cached outputs, leading to improved latency and broader generalization. The generated code is executed in a sandboxed environment, and the cache is updated with the new results after each user turn. Our dataset also includes the corresponding ground-truth code and post-execution cache, which are used for performance evaluation.

5 Results and Analysis
In this section, we present the performance of various LLMs on both single-domain and multi-domain tasks. Single-domain tasks involve conversations focused exclusively on one domain, such as flights, hotels, restaurants, or attractions. In contrast, multi-domain tasks involve interactions spanning multiple domains, such as a user requesting both flight bookings and nearby hotel recommendations within the same dialogue.

5.1 Overall Results

Table 3 presents the overall performance of Llama 3.1 8B Instruct in a few-shot in-context learning setting. The model performs significantly worse on multi-domain tasks than on single-domain ones. Additionally, much of the generated code, particularly in multi-domain scenarios, is not executable. Despite the inclusion of few-shot examples, the model struggles with accurate tool invocation. Parameter matching is even more challenging, as it requires the model not only to identify the correct tool but also to extract the appropriate parameter values from the user's utterance.

5.2 Analysis

The Impact of Domain Adaptation on Performance. Figure 3 presents the tool call F1 and parameter matching F1 scores. Overall, the SFT models outperform the base model and even surpass the performance of the 70B model by a substantial margin across all domains.

Figure 3: Left: Tool Call F1 performance. Right: Parameter Matching F1 performance.

Columns: Tool Call (TC) and Parameter Matching (PM) report Acc., Prec., Rec., F1; Code Exec. reports Acc.; Information Seeking reports SacreBLEU and BERTScore; Cache Summary reports EM. Multi-domain rows: F: Flights, H: Hotels, R: Restaurants, A: Attractions.

| Domain | TC Acc. | TC Prec. | TC Rec. | TC F1 | PM Acc. | PM Prec. | PM Rec. | PM F1 | Exec. Acc. | SacreBLEU | BERTScore | EM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flights | 25.12 | 30.90 | 57.34 | 40.16 | 15.57 | 17.34 | 60.30 | 26.94 | 26.24 | 14.41 | 82.36 | 36.63 |
| Hotels | 42.86 | 49.89 | 75.25 | 60.00 | 17.57 | 23.59 | 40.80 | 29.89 | 48.13 | 47.01 | 80.38 | 33.70 |
| Restaurants | 40.40 | 52.97 | 63.00 | 57.55 | 31.47 | 36.07 | 71.12 | 47.87 | 56.99 | 28.91 | 75.68 | 35.49 |
| Attractions | 33.03 | 48.49 | 50.90 | 49.66 | 14.11 | 25.11 | 24.36 | 24.73 | 40.40 | N/A | N/A | 43.87 |
| F-H | 33.09 | 40.89 | 63.44 | 49.73 | 16.11 | 19.28 | 49.48 | 27.75 | 38.43 | 26.54 | 82.00 | 21.85 |
| H-R | 51.06 | 59.21 | 78.77 | 67.61 | 26.62 | 33.52 | 56.37 | 42.05 | 51.82 | 28.60 | 84.07 | 9.95 |
| H-A | 38.75 | 45.46 | 72.39 | 55.85 | 16.11 | 20.02 | 45.12 | 27.74 | 48.88 | 26.03 | 81.77 | 32.75 |
| F-H-A | 27.76 | 36.24 | 54.24 | 43.45 | 16.21 | 21.18 | 40.88 | 27.90 | 28.76 | 14.15 | 78.58 | 24.37 |
| F-H-R | 29.04 | 34.65 | 64.19 | 45.00 | 18.53 | 21.77 | 55.44 | 31.26 | 36.79 | 33.28 | 82.84 | 27.47 |

Table 3: Overall results using Llama 3.1 8B Instruct with few-shot in-context learning.

Model Performance. Our qualitative evaluation shows that the Llama 3.3 70B Instruct model consistently delivers the strongest performance across tasks. In contrast, the Llama 3.1 8B Instruct model, without any fine-tuning, struggles significantly: it often fails to use the available cache effectively and generates incorrect code.

Impact of Fine-Tuning. While the 70B and the non-fine-tuned 8B models were prompted to generate answers with reasoning, the fine-tuned variant, despite not being instructed to provide reasoning, achieved performance comparable to the 70B Instruct model. This indicates the strong potential of task-specific fine-tuning for enhancing model performance.

Few-Shot Evaluation. As shown in Figure 4, we evaluate model performance on the flights domain under 0-shot, 5-shot, and 13-shot settings. Performance is notably poor in the 0-shot setting, with improvements observed in the 5-shot and 13-shot configurations, though the gains plateau beyond 5 shots. Qualitative analysis suggests that without sufficient context, models continue to
struggle with generalization in complex, multi-domain scenarios.

Figure 4: Few-shot performance on the Flights domain.

Need for Complex Evaluation. Even the state-of-the-art Llama 3.3 70B Instruct model, which performs well on standard code generation tasks, continues to struggle in complex, multi-turn scenarios involving advanced planning. The T1 dataset is designed to fill this gap by serving as a benchmark for evaluating model performance in such challenging settings.

6 Related Work

6.1 Large Language Model Agents

LLM-based agents have emerged as foundational components in AI systems, capable of performing complex, multi-step tasks through reasoning, memory integration, and tool use. These agents often combine a pre-trained LLM with structured modules such as long-term memory, tool-calling capabilities, and self-reflective feedback loops. Frameworks such as AutoGPT [21] and AgentGPT allow LLMs to autonomously decompose user goals into subgoals and execute them sequentially using external APIs. More structured systems like HuggingGPT [13], CrewAI, and AutoGen [19] facilitate collaboration between multiple LLM agents, each specializing in roles such as planning, execution, or critique.

Despite significant progress, planning within task-oriented dialogue systems, particularly over long horizons, remains a fundamental challenge. Previous paradigms such as plan-observe-execute (e.g., ReAct [22], ADaPT [10]) enable the model to interleave tool calls with reasoning steps. However, most frameworks focus on linear execution paths where each step invokes a single tool. These approaches often lack the ability to manage inter-tool dependencies, reuse intermediate results, or revise plans based on partial tool failures. Furthermore, many of these systems perform single-turn planning, i.e., the user submits a request with all the information in the first turn. Multi-turn conversational planning is a nascent field.
For example, in multi-turn workflows like conversational trip planning, agents need to collect information across multiple steps, then plan and coordinate across tools like flight search, visa information, and calendar APIs, something most current systems struggle to handle.

6.2 Tool-Based Agents and Benchmarks

Tool usage extends the scope of LLM capabilities beyond language modeling [16, 12] to real-world actionability, enabling interactions with APIs, web tools, and external software. To evaluate this ability, several benchmarks have been proposed, including API-Bank [4], Tau-Bench [23], GAIA [6], ALFWorld [14], GTA [18], TravelPlanner [20], and ToolBench [11]. Although these benchmarks each emphasize different strengths, such as breadth of API coverage, realism, or reasoning complexity, they often treat tool use as a series of isolated atomic actions without requiring coordinated planning between multiple tools.

In contrast, our work introduces T1, a tool-driven benchmark designed to evaluate LLM agents in multi-turn, multi-domain dialogue settings with inter-tool dependencies. T1 features an integrated caching mechanism that supports both short- and long-term memory of tool-call results, allowing agents to make intelligent decisions about whether to replan or reuse cached outputs. Unlike prior benchmarks, it tests an agent's ability to perform dynamic replanning, handle branching workflows, and compose tools in a realistic, dialogue-driven environment. Thus, T1 not only challenges existing tool-use agents but also provides a diagnostic sandbox for evaluating the reasoning capabilities
of open-source LLMs under realistic constraints.

7 Conclusion

We introduce T1, a comprehensive dataset for evaluating planning, reasoning, and tool usage in LLM-based agents through complex, multi-turn dialogues. By introducing inter-tool dependencies, dynamic replanning, and caching, it supports rigorous assessment across single- and multi-domain settings. Experiments with T1-Agent highlight both the strengths and limitations of open-source LLMs: while the Llama 3.3 70B Instruct model performs best, the non-fine-tuned Llama 3.1 8B struggles with caching and code generation. A fine-tuned 8B variant matches 70B performance, emphasizing the value of task-specific tuning. Despite improvements with few-shot learning, models still face generalization challenges in multi-domain scenarios.

References

[1] M. Abdin, S. Agarwal, A. Awadallah, V. Balachandran, H. Behl, L. Chen, G. de Rosa, S. Gunasekar, M. Javaheripi, N. Joshi, et al. Phi-4-reasoning technical report. arXiv preprint arXiv:2504.21318, 2025.
[2] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen, et al. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
[3] P. Laban, H. Hayashi, Y. Zhou, and J. Neville. LLMs get lost in multi-turn conversation. arXiv preprint arXiv:2505.06120, 2025.
[4] M. Li, Y. Zhao, B. Yu, F. Song, H. Li, H. Yu, Z. Li, F. Huang, and Y. Li. API-Bank: A comprehensive benchmark for tool-augmented LLMs. arXiv preprint arXiv:2304.08244, 2023.
[5] Z. Ma, W. Huang, J. Zhang, T. Gupta, and R. Krishna. m&m's: A benchmark to evaluate tool-use for multi-step multi-modal tasks. In European Conference on Computer Vision, pages 18–34. Springer, 2024.
[6] G. Mialon, C. Fourrier, T. Wolf, Y. LeCun, and T. Scialom. GAIA: A benchmark for general AI assistants. In The Twelfth International Conference on Learning Representations, 2023.
[7] N. Muennighoff, Z. Yang, W. Shi, X. L. Li, L. Fei-Fei, H. Hajishirzi, L. Zettlemoyer, P. Liang, E.
Candès, and T. Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
[8] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez. Gorilla: Large language model connected with massive APIs. Advances in Neural Information Processing Systems, 37:126544–126565, 2024.
[9] M. Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, 2018.
[10] A. Prasad, A. Koller, M. Hartmann, P. Clark, A. Sabharwal, M. Bansal, and T. Khot. ADaPT: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772, 2023.
[11] Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian, et al. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789, 2023.
[12] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, E. Hambro, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36:68539–68551, 2023.
[13] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. Advances
in Neural Information Processing Systems, 36:38154–38180, 2023.
[14] M. Shridhar, X. Yuan, M.-A. Côté, Y. Bisk, A. Trischler, and M. Hausknecht. ALFWorld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020.
[15] Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, B. Cao, and L. Sun. ToolAlpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301, 2023.
[16] C. Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024.
[17] L. von Werra, Y. Belkada, L. Tunstall, E. Beeching, T. Thrush, N. Lambert, S. Huang, K. Rasul, and Q. Gallouédec. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl, 2020.
[18] J. Wang, M. Zerun, Y. Li, S. Zhang, C. Chen, K. Chen, and X. Le. GTA: A benchmark for general tool agents. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. URL https://openreview.net/forum?id=akEt8QAa6V.
[19] Q. Wu, G. Bansal, J. Zhang, Y. Wu, B. Li, E. Zhu, L. Jiang, X. Zhang, S. Zhang, J. Liu, et al. AutoGen: Enabling next-gen LLM applications via multi-agent conversation. arXiv preprint arXiv:2308.08155, 2023.
[20] J. Xie, K. Zhang, J. Chen, T. Zhu, R. Lou, Y. Tian, Y. Xiao, and Y. Su. TravelPlanner: A benchmark for real-world planning with language agents. arXiv preprint arXiv:2402.01622, 2024.
[21] H. Yang, S. Yue, and Y. He. Auto-GPT for online decision making: Benchmarks and additional opinions. arXiv preprint arXiv:2306.02224, 2023.
[22] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[23] S. Yao, N. Shinn, P. Razavi, and K. Narasimhan. tau-bench: A benchmark for tool-agent-user interaction in real-world domains. arXiv preprint arXiv:2406.12045, 2024.
[24] T.
Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675, 2019.

A Limitations

In this work, we focus on constructing and introducing a new dataset as a benchmark for evaluating agentic workflows in multi-turn conversational dialogue settings, with an emphasis on tool calling for planning. Our evaluations are limited to open-source models; proprietary models are not included due to resource constraints. We use Llama 3.1 8B Instruct, Llama 3.3 70B Instruct, S1.1 32B, and Phi-4-reasoning-plus. We will release the T1 dataset publicly and hope it will encourage future research that includes evaluations on proprietary models as well.

B Additional Results

B.1 Detailed Results for Llama 3.1 8B Instruct with SFT and Llama 3.3 70B Instruct

Table 4 shows the results for Llama 3.1 8B Instruct with domain adaptation, and Table 5 shows the results for Llama 3.3 70B Instruct.

Columns: Tool Call (TC) and Parameter Matching (PM) report Acc., Prec., Rec., F1; Code Exec. Rate reports Acc.; Information Seeking reports SacreBLEU and BERTScore; Cache Summary reports EM. Multi-domain rows: F: Flights, H: Hotels, R: Restaurants, A: Attractions.

| Domain | TC Acc. | TC Prec. | TC Rec. | TC F1 | PM Acc. | PM Prec. | PM Rec. | PM F1 | Exec. Acc. | SacreBLEU | BERTScore | EM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flights | 83.51 | 89.98 | 92.08 | 91.02 | 58.02 | 66.38 | 82.18 | 73.44 | 85.26 | 39.19 | 86.44 | 62.07 |
| Hotels | 78.18 | 84.87 | 90.84 | 87.75 | 60.40 | 67.29 | 85.53 | 75.32 | 72.13 | 48.30 | 80.91 | 71.35 |
| Restaurants | 72.76 | 80.40 | 88.44 | 84.23 | 57.49 | 66.69 | 80.64 | 73.00 | 95.16 | 95.14 | 98.63 | 65.32 |
| Attractions | 90.50 | 97.87 | 92.33 | 95.02 | 75.54 | 91.34 | 81.37 | 86.07 | 98.15 | N/A | N/A | 83.00 |
| F-H | 80.27 | 91.31 | 86.91 | 89.05 | 57.76 | 74.51 | 71.99 | 73.23 | 89.25 | 24.04 | 80.63 | 57.01 |
| H-R | 82.02 | 91.03 | 89.23 | 90.12 | 68.44 | 77.95 | 84.86 | 81.26 | 83.67 | 29.12 | 87.58 | 64.42 |
| H-A | 62.30 | 72.67 | 81.35 | 76.77 | 59.22 | 68.73 | 81.05 | 74.39 | 71.41 | 27.87 | 82.80 | 65.70 |
| F-H-A | 77.19 | 87.50 | 86.76 | 87.13 | 59.63 | 74.16 | 75.27 | 74.71 | 81.15 | 22.65 | 83.45 | 52.76 |
| F-H-R | 71.53 | 86.05 | 80.91 | 83.40 | 54.39 | 71.99 | 68.99 | 70.45 | 82.47 | 46.32 | 84.40 | 53.88 |

Table 4: Overall results using Llama 3.1 8B Instruct after SFT.

| Domain | TC Acc. | TC Prec. | TC Rec. | TC F1 | PM Acc. | PM Prec. | PM Rec. | PM F1 | Exec. Acc. | SacreBLEU | BERTScore | EM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flights | 58.42 | 78.26 | 69.75 | 73.76 | 32.11 | 40.51 | 60.76 | 48.61 | 83.32 | 20.25 | 82.90 | 53.80 |
| Hotels | 92.42 | 96.03 | 96.10 | 96.07 | 75.33 | 80.85 | 91.69 | 85.93 | 98.55 | 46.98 | 80.27 | 75.00 |
| Restaurants | 94.23 | 96.63 | 97.43 | 97.03 | 85.51 | 93.62 | 90.80 | 92.19 | 99.54 | 37.90 | 79.95 | 90.78 |
| Attractions | 79.14 | 92.16 | 84.86 | 88.36 | 71.17 | 87.04 | 79.60 | 83.16 | 63.70 | N/A | N/A | 78.59 |
| F-H | 43.48 | 53.73 | 69.52 | 60.61 | 22.28 | 25.94 | 61.24 | 36.44 | 95.02 | 27.39 | 83.42 | 37.15 |
| H-R | 60.50 | 70.91 | 80.46 | 78.29 | 50.94 | 60.08 | 76.99 | 67.50 | 99.60 | 22.97 | 84.11 | 38.43 |
| H-A | 64.91 | 68.93 | 91.75 | 78.72 | 59.24 | 66.59 | 84.29 | 74.40 | 97.01 | 30.02 | 82.71 | 57.03 |
| F-H-A | 60.46 | 70.77 | 80.59 | 75.36 | 44.27 | 51.19 | 76.58 | 61.37 | 94.81 | 19.16 | 82.27 | 44.29 |
| F-H-R | 52.97 | 62.22 | 78.07 | 69.25 | 42.93 | 50.66 | 73.76 | 60.07 | 91.30 | 40.45 | 85.62 | 45.37 |

Table 5: Overall results using Llama 3.3 70B Instruct.

B.2 Detailed Results for Reasoning Models

Additionally, we experiment with medium-sized reasoning models to see how they perform. We work with S1.1 32B [7] and Phi-4-reasoning-plus 14B [1].
The decoding configuration for both models uses a maximum token limit of 4,000, a temperature of 0.1, and a maximum token length of 7,000. We host the models using vLLM on a node with eight A100 40GB GPUs. Between these two models, S1.1 mostly outperforms Phi-4-reasoning-plus on the Tool Call and Parameter Matching metrics.

C Performance Comparison Between Models

We observe that the S1.1 model performs reasonably well, especially compared to the Llama 3.3 70B model, which we observe to produce the best results. The Phi-4-reasoning-plus model did well in some domains, but overall it does not perform as well as the S1.1 and Llama 3.3 70B models. Figure 5 shows the performance comparison between models.

Columns: Tool Call (TC) and Parameter Matching (PM) report Acc., Prec., Rec., F1; Code Exec. Rate reports Acc.; Information Seeking reports SacreBLEU and BERTScore; Cache Summary reports EM. Multi-domain rows: F: Flights, H: Hotels, R: Restaurants, A: Attractions.

| Domain | TC Acc. | TC Prec. | TC Rec. | TC F1 | PM Acc. | PM Prec. | PM Rec. | PM F1 | Exec. Acc. | SacreBLEU | BERTScore | EM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flights | 77.59 | 87.87 | 86.90 | 87.38 | 49.53 | 58.04 | 77.15 | 66.25 | 57.20 | 29.65 | 83.12 | 57.46 |
| Hotels | 89.33 | 95.33 | 93.41 | 94.36 | 65.27 | 70.52 | 89.77 | 78.99 | 63.95 | 18.07 | 79.56 | 57.23 |
| Restaurants | 87.77 | 92.65 | 94.35 | 93.49 | 71.59 | 77.69 | 90.11 | 83.44 | 69.82 | 7.16 | 69.91 | 69.22 |
| Attractions | 65.76 | 80.16 | 78.54 | 79.34 | 44.92 | 53.04 | 74.59 | 62.00 | 72.57 | N/A | N/A | 63.87 |
| F-H | 69.87 | 78.42 | 86.51 | 82.27 | 42.38 | 50.39 | 72.71 | 59.53 | 66.89 | 9.72 | 78.43 | 47.49 |
| H-R | 59.10 | 65.62 | 85.60 | 74.29 | 51.91 | 58.54 | 82.10 | 68.35 | 49.72 | 19.69 | 85.40 | 32.55 |
| H-A | 63.46 | 67.81 | 90.83 | 77.65 | 65.22 | 72.53 | 86.60 | 78.95 | 44.39 | 21.41 | 81.95 | 56.64 |
| F-H-A | 68.41 | 75.27 | 88.25 | 81.24 | 50.21 | 61.04 | 73.88 | 66.85 | 60.86 | 27.00 | 82.48 | 50.42 |
| F-H-R | 67.22 | 73.80 | 88.30 | 80.40 | 58.97 | 71.66 | 76.91 | 74.19 | 61.47 | 10.08 | 78.32 | 53.74 |

Table 6: Overall results using S1.1 32B.

| Domain | TC Acc. | TC Prec. | TC Rec. | TC F1 | PM Acc. | PM Prec. | PM Rec. | PM F1 | Exec. Acc. | SacreBLEU | BERTScore | EM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flights | 36.66 | 67.30 | 44.61 | 53.66 | 15.74 | 56.10 | 17.95 | 27.20 | 72.76 | 13.91 | 81.60 | 39.73 |
| Hotels | 83.96 | 91.74 | 90.82 | 91.28 | 53.22 | 63.09 | 77.30 | 69.47 | 93.05 | 46.95 | 80.47 | 46.52 |
| Restaurants | 64.13 | 83.60 | 73.36 | 78.14 | 47.10 | 61.89 | 66.34 | 64.04 | 93.49 | 28.27 | 69.74 | 50.53 |
| Attractions | 35.78 | 61.67 | 46.02 | 52.71 | 22.90 | 66.92 | 25.82 | 37.27 | 66.25 | N/A | N/A | 50.79 |
| F-H | 57.79 | 88.12 | 62.67 | 73.25 | 29.91 | 59.20 | 37.68 | 46.05 | 83.37 | 27.96 | 83.02 | 32.53 |
| H-R | 54.07 | 70.71 | 69.67 | 70.18 | 35.42 | 50.91 | 53.79 | 52.31 | 96.04 | 23.19 | 84.83 | 21.72 |
| H-A | 65.81 | 78.69 | 80.08 | 79.38 | 48.58 | 61.78 | 69.45 | 65.39 | 96.04 | 27.92 | 82.97 | 53.52 |
| F-H-A | 37.83 | 62.18 | 49.14 | 54.90 | 32.73 | 64.59 | 39.89 | 49.32 | 84.49 | 15.42 | 81.00 | 32.33 |
| F-H-R | 46.79 | 72.75 | 56.74 | 63.75 | 33.99 | 59.56 | 44.18 | 50.73 | 91.86 | 36.30 | 84.63 | 37.65 |

Table 7: Overall results using Phi-4-reasoning-plus.

Figure 5: Above: Tool Call F1 performance. Below: Parameter Matching F1 performance.

D Error Analysis

Looking closer at the errors found during inference, we first break them down into four categories:

• Validation Error: errors caused by improper arguments being passed to the defined tools.
• Variable Not Defined: the generated code uses an undefined variable.
• Index Out Of Range: the generated code indexes out of range, usually on a Python list.
• Other: all other errors present within the generated code.

| Domain | # Turns | # Validation Error | # Variable Not Defined | # Index Out Of Range | # Other |
|---|---|---|---|---|---|
| Flights | 8,200 | 737 | 830 | 0 | 468 |
| Hotels | 8,000 | 417 | 218 | 0 | 820 |
| Restaurants | 7,900 | 513 | 14 | 1 | 183 |
| Attractions | 5,975 | 507 | 1 | 0 | 243 |
| F-H | 11,700 | 834 | 1087 | 0 | 408 |
| H-R | 10,950 | 1532 | 18 | 1 | 522 |
| H-A | 11,000 | 1024 | 167 | 0 | 308 |
| F-H-A | 9,775 | 1163 | 990 | 0 | 575 |
| F-H-R | 8,575 | 812 | 761 | 3 | 495 |
| Total | - | 7539 | 4086 | 5 | 4022 |

Table 8: Overall error analysis results using Llama 3.1 8B Instruct. (Multi-domain rows: F: Flights, H: Hotels, R: Restaurants, A: Attractions.)

| Domain | # Turns | # Validation Error | # Variable Not Defined | # Index Out Of Range | # Other |
|---|---|---|---|---|---|
| Flights | 8,200 | 408 | 176 | 42 | 26 |
| Hotels | 8,000 | 1,099 | 16 | 0 | 0 |
| Restaurants | 7,900 | 209 | 5 | 0 | 2 |
| Attractions | 5,975 | 18 | 9 | 0 | 8 |
| F-H | 11,700 | 454 | 149 | 0 | 22 |
| H-R | 10,950 | 692 | 185 | 0 | 17 |
| H-A | 11,000 | 1248 | 315 | 0 | 1 |
| F-H-A | 9,775 | 789 | 124 | 9 | 27 |
| F-H-R | 8,575 | 541 | 109 | 0 | 101 |
| Total | - | 5458 | 1088 | 51 | 204 |

Table 9: Overall error analysis results using Llama 3.1 8B Instruct after SFT.

| Domain | # Turns | # Validation Error | # Variable Not Defined | # Index Out Of Range | # Other |
|---|---|---|---|---|---|
| Flights | 8,200 | 518 | 2 | 30 | 0 |
| Hotels | 8,000 | 41 | 0 | 0 | 0 |
| Restaurants | 7,900 | 29 | 0 | 0 | 0 |
| Attractions | 5,975 | 4 | 2 | 0 | 1 |
| F-H | 11,700 | 158 | 28 | 0 | 0 |
| H-R | 10,950 | 14 | 2 | 0 | 1 |
| H-A | 11,000 | 48 | 33 | 0 | 48 |
| F-H-A | 9,775 | 148 | 6 | 7 | 3 |
| F-H-R | 8,575 | 330 | 5 | 1 | 7 |
| Total | - | 1290 | 78 | 38 | 60 |

Table 10: Overall error analysis results using Llama 3.3 70B Instruct.

Overall, we notice that validation errors are consistently present in the results of all the models we evaluated.

D.1 Llama 3.3 70B Instruct

The Llama 3.3 70B Instruct model produces the fewest overall errors among the five models evaluated, with a total of 1,466. This supports the idea that the larger parameter count helps the model better understand the task, pass the proper parameters to the tools, and generate high-quality code.

| Domain | # Turns | # Validation Error | # Variable Not Defined | # Index Out Of Range | # Other |
|---|---|---|---|---|---|
| Flights | 8,200 | 559 | 16 | 3 | 40 |
| Hotels | 8,000 | 92 | 1 | 39 | 27 |
| Restaurants | 7,900 | 20 | 7 | 0 | 28 |
| Attractions | 5,975 | 11 | 27 | 0 | 76 |
| F-H | 11,700 | 24 | 34 | 114 | 63 |
| H-R | 10,950 | 54 | 895 | 102 | 34 |
| H-A | 11,000 | 3 | 1071 | 43 | 23 |
| F-H-A | 9,775 | 89 | 579 | 25 | 49 |
| F-H-R | 8,575 | 85 | 241 | 7 | 28 |
| Total | - | 937 | 2871 | 333 | 368 |

Table 11: Overall error analysis results using S1.1.

| Domain | # Turns | # Validation Error | # Variable Not Defined | # Index Out Of Range | # Other |
|---|---|---|---|---|---|
| Flights | 8,200 | 103 | 7 | 1 | 1 |
| Hotels | 8,000 | 277 | 0 | 0 | 1 |
| Restaurants | 7,900 | 259 | 0 | 0 | 19 |
| Attractions | 5,975 | 0 | 0 | 0 | 0 |
| F-H | 11,700 | 212 | 2 | 0 | 7 |
| H-R | 10,950 | 212 | 0 | 0 | 1 |
| H-A | 11,000 | 162 | 0 | 0 | 6 |
| F-H-A | 9,775 | 120 | 3 | 0 | 5 |
| F-H-R | 8,575 | 153 | 4 | 0 | 32 |
| Total | - | 1498 | 16 | 1 | 72 |

Table 12: Overall error analysis results using Phi-4-reasoning-plus.
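As a minimal sketch, execution errors from generated code could be bucketed into these four categories by inspecting the raised exception. `ToolValidationError` is a hypothetical exception name standing in for whatever the tool layer actually raises on improper arguments; NameError and IndexError are the standard Python exceptions behind the other two categories:

```python
class ToolValidationError(Exception):
    """Hypothetical: raised by the tool layer when a tool gets improper arguments."""

def categorize(exc: Exception) -> str:
    """Map an exception from executed agent code to an error-analysis category."""
    if isinstance(exc, ToolValidationError):
        return "Validation Error"
    if isinstance(exc, NameError):
        return "Variable Not Defined"
    if isinstance(exc, IndexError):
        return "Index Out Of Range"
    return "Other"

def run_and_categorize(code: str) -> str:
    """Execute a snippet of generated code; return its error category, or 'OK'."""
    try:
        exec(code, {"ToolValidationError": ToolValidationError})
    except Exception as exc:
        return categorize(exc)
    return "OK"

print(run_and_categorize("flights[0]"))   # NameError -> "Variable Not Defined"
print(run_and_categorize("x = [1][5]"))   # IndexError -> "Index Out Of Range"
```

Any exception outside the first three buckets (syntax errors, type errors, and so on) falls into the catch-all "Other" category.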
D.2 Llama 3.1 8B Instruct

The Llama 3.1 8B Instruct model has the highest number of total errors across all four tracked categories. The only category in which it performs well is Index Out Of Range, with only 5 instances.

D.3 Llama 3.1 8B Instruct SFT

The Llama 3.1 8B Instruct SFT model drastically reduces the number of Validation, Variable Not Defined, and Other errors compared to the original model. This indicates that the model learns to call the tools better and writes higher-quality code than the original model.

D.4 S1.1

Overall, the S1.1 model performs best on Validation errors, with a total of 937. However, it also has the highest number of
Index Out Of Range errors and the second-highest number of Variable Not Defined errors.

D.5 Phi-4-reasoning-plus

Phi-4-reasoning-plus has 1,498 validation errors, the third most among the five models. The model performs reasonably well in the other error categories and has the fewest Index Out Of Range and Variable Not Defined errors, with 1 and 16 respectively.

E Model and Prompt Configuration

We use the Llama 3.1 8B Instruct and Llama 3.3 70B Instruct models, and fine-tune Llama 3.1 8B Instruct to evaluate the effectiveness of domain adaptation on the T1 dataset. We evaluate all the models in a few-shot setting, where several example turns are included in the prompt to guide the model's behavior. The full prompts used for inference are provided in Section M and Section N. The decoding configuration for both models uses a maximum token limit of 4,000, top-k sampling with k = 10, and a temperature of 0.1.

F Evaluation Protocol

We evaluate the model's performance at every user turn by comparing the model's output with the ground truth across the following facets.

Tool Call. We evaluate the correctness of each tool call in the generated code against the ground-truth code by comparing the number of times each tool is called, using four metrics: accuracy, precision, recall, and F1.

Parameter Matching. For each tool call in the ground truth, we identify the corresponding tool call in the generated code with the same name and the highest parameter overlap. We then calculate accuracy, precision, recall, and F1 for the identified match. Parameter values are standardized to enable robust comparison; for example, lists are treated as sets to make them insensitive to ordering. Certain tool calls such as save_to_cache, get_results_from_cache, and seek_information are excluded from this evaluation, as they depend on external artifacts (e.g., intermediate dataframes) and variable keys or identifiers that cannot be reliably matched.
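The tool-call comparison described in Section F can be sketched as follows. This is a hypothetical realization (the function name tool_call_prf and the list-of-tool-names input format are our assumptions; the paper does not publish its scoring code), micro-averaging precision, recall, and F1 over per-tool call counts.

```python
from collections import Counter

def tool_call_prf(gold_calls, pred_calls):
    """Micro-averaged precision/recall/F1 over per-tool call counts.

    Hypothetical sketch of the tool-call facet of the evaluation protocol.
    Inputs are lists of tool names extracted from the ground-truth and
    generated code, one entry per call.
    """
    gold, pred = Counter(gold_calls), Counter(pred_calls)
    # A predicted call counts as a true positive only up to the number of
    # times that tool is called in the ground truth.
    tp = sum(min(gold[t], pred[t]) for t in gold | pred)
    precision = tp / sum(pred.values()) if pred else 0.0
    recall = tp / sum(gold.values()) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For instance, if the ground truth calls search_hotels and filter_hotels once each but the generated code only calls search_hotels, this yields precision 1.0 and recall 0.5.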
Similarly, certain parameter names (e.g., prior_results) are also excluded from parameter matching.

Code Execution Success Rate. We calculate the percentage of instances in which the model generates code when expected and the code is executable without any errors in a sandbox environment.

Handling Information Seeking. If both the model output and the ground truth include seek_information, we evaluate the similarity of the strings inside the function using SacreBLEU [9] and BERTScore F1 [24], capturing both sub-word overlap and semantic similarity of the requested information.

Cache Summary. To assess how well the model serves complete requests, we compare the execution cache summary results of the generated code and the ground truth using exact match (EM) to determine whether the model-generated solution is functionally equivalent to the ground truth.

G More Information on the T1 Dataset

Table 13 shows the attributes for each domain. Table 14 shows the tools used for each domain.

Domain | Attributes
Flight | airline, flight_id, start_airport, start_airport_latitude, start_airport_longitude, start_airport_code, end_airport, end_airport_latitude, end_airport_longitude, end_airport_code, economy_class_option_present, business_class_option_present, first_class_option_present, distance_miles, duration_minutes, departure_time, arrival_time, number_of_layovers, first_layover_airport
Hotel | hotel_name, city, state, neighborhood, latitude, longitude, rating, stars, max_occupancy, gym_present, pool_present, price_per_night, num_rooms_available, breakfast_included, smoking_allowed, air_conditioning_present, heating_present, free_wifi_included, airport_shuttle_present,
is_pet_friendly, has_spa_services, has_room_service, has_beach_access, has_fitness_class, has_laundry_service, has_valet_parking, has_balcony, has_rooftop_bar, has_inroom_Kitchen, has_kids_club, has_meeting_rooms, has_electric_vehicle_charging, has_hot_tub, has_sauna, has_free_parking, is_wheelchair_accessible, has_skiing_lodging, has_ocean_view_rooms_present, has_city_view_rooms_present, start_date_available, end_date_available
Attraction | city, state, name, type, latitude, longitude, neighborhood
Restaurant | restaurant_name, city, state, neighborhood, latitude, longitude, rating, price, price_per_person, has_nut_allergy_options, has_dairy_allergy_options, has_shell_fish_allergy_options, has_tomato_allergy_options, has_nightshade_allergy_options, has_gluten_free_options, has_vegetarian_options, has_vegan_options, has_kosher_options, has_halal_options, cuisine

Table 13: Domains and Attributes.

Domain | Tools
Flights | search_flights, filter_flights
Hotels | search_hotels, filter_hotels
Restaurants | search_restaurants, filter_restaurants
Attractions | search_attractions, filter_attractions
Multi-domain | search_nearest
Common | save_to_cache, get_results_from_cache, sort_results, seek_information, adjust_date

Table 14: List of Tools.

H Sample Conversation from T1 Dataset

Attribute | Type | Description
airline | string | Airline of the flight
flight_classes | string | Classes for the flight (economy, business, first)
num_layovers | integer | Number of layovers for the flight, between 0 and 2
layover_duration_amount | integer | Duration of a layover flight, between 1 and 6 hours
airports | list | Airport information for each flight, including the city, state, airport code, and airport name

Table 15: Flight Ontology Attributes.

Box 1. Sample conversation for attractions
assistant: Hello! Are you looking for something to do in your free time?
user: Yeah, I am thinking of visiting some scenic attractions in San Antonio.
assistant: San Antonio has a lot of great Scenic spots.
Have you considered Downtown San Antonio?
user: Actually, I haven't. What's there?
assistant: It's a great area with a lot of Scenic attractions. I can give you some recommendations.
user: Okay, that sounds good.

Attribute | Type | Description
city | string | City that the hotel is in
state | string | State that the hotel is in
neighborhood | string | Neighborhood within the city that the hotel is in
stars | integer | Star rating of the hotel between 1 and 5
rating | float | Customer rating of the hotel between 1.0 and 5.0, incremented by 0.1
price_per_night | integer | Price per night of the hotel, ranging between 20 and 2000 dollars
max_occupancy | integer | Maximum occupancy per room, ranging between 1 and 7
gym_present | boolean | Whether or not the hotel has a gym
pool_present | boolean | Whether or not the hotel has a pool
breakfast_included | boolean | Whether or not the hotel has breakfast included
smoking_allowed | boolean | Whether or not the hotel allows smoking
air_conditioning_present | boolean | Whether or not the hotel has air conditioning
heating_present | boolean | Whether or not the hotel has heating
free_wifi_included | boolean | Whether or not the hotel includes free WiFi
airport_shuttle_present | boolean | Whether or not the hotel has an airport shuttle
is_pet_friendly | boolean | Whether or not the hotel allows pets
has_spa_services | boolean | Whether or not the hotel has spa services
has_room_service | boolean | Whether or not the hotel has room service
has_beach_access | boolean | Whether or not the hotel has access to a beach
has_business_center | boolean | Whether or not the hotel has a business center
has_fitness_classes | boolean | Whether or not the hotel has fitness classes
has_laundry_service | boolean | Whether or not the hotel has laundry services
has_valet_parking | boolean | Whether or not the hotel has valet parking
has_balcony | boolean | Whether or not the hotel has a balcony
has_rooftop_bar | boolean | Whether or not the hotel has a rooftop bar
has_inroom_kitchen | boolean | Whether or not the hotel has an in-room kitchen
has_kids_club | boolean | Whether or not the hotel has a kids club
has_meeting_rooms | boolean | Whether or not the hotel has meeting rooms
has_electric_vehicle_charging | boolean | Whether or not the hotel has electric vehicle charging
has_hot_tub | boolean | Whether or not the hotel has a hot tub
has_sauna | boolean | Whether or not the hotel has a sauna
has_free_parking | boolean | Whether or not the hotel has free parking
is_wheelchair_accessible | boolean | Whether or not the hotel is wheelchair accessible
has_skiing_lodging | boolean | Whether or not the hotel has skiing and lodging
ocean_view_rooms_present | boolean | Whether or not the hotel has rooms with a view of the ocean
city_view_rooms_present | boolean | Whether or not the hotel has rooms with views of the city

Table 16: Hotel Ontology Attributes.

Attribute | Type | Description
city | string | City that the restaurant is in
state | string | State that the restaurant is in
neighborhood | string | Neighborhood within the city that the restaurant is in
rating | float | Customer rating of the restaurant between 1.0 and 5.0, incremented by 0.1
price_per_person | integer | Average price per person at the restaurant
cuisine | string | Cuisine of the restaurant
has_nut_allergy_options | boolean | Whether or not the restaurant has any options for individuals with an allergy to nuts
has_dairy_allergy_options | boolean | Whether or not the restaurant has any options for individuals with an allergy to dairy products
has_shell_fish_allergy_options | boolean | Whether or not the restaurant has any options for individuals with an allergy to shellfish
has_tomato_allergy_options | boolean | Whether or not the restaurant has options for individuals with an allergy to tomatoes
has_nightshade_allergy_options | boolean | Whether or not the restaurant has options for individuals with an allergy to nightshades
has_gluten_free_options | boolean | Whether or not the restaurant has gluten-free options
has_vegetarian_options | boolean | Whether or not the restaurant has vegetarian options
has_vegan_options | boolean | Whether or not the restaurant has vegan options
has_halal_options | boolean | Whether or not the restaurant has halal options
has_kosher_options | boolean | Whether or not the restaurant has kosher options

Table 17: Restaurant Ontology Attributes.

Attribute | Type | Description
city | string | City that the attraction is in
state | string | State that the attraction is in
neighborhood | string | Neighborhood within the city that the attraction is in
type | string | Type of the attraction

Table 18: Attraction Ontology Attributes.

Box 2. Sample conversation for flights
assistant: Welcome! What can I help you with today?
user: I need to book a flight from Colorado Springs to San Antonio for my anniversary, and I'm looking for something special.
assistant: Happy anniversary! What date are you planning to travel?
user: We're thinking of May 21, 2025.
assistant: That's great! What kind of flight experience are you looking for? Luxury, perhaps?
user: Yes, something luxurious. And we'd like to depart from COS.
assistant: I'll look into that. What's your budget for the flight?
user: We're looking to spend around 1100.

Box 3. Sample conversation for hotels
assistant: Hello, what brings you here today?
user: I'm looking for a hotel in St. Louis for my anniversary, can you help me?
assistant: Congratulations on your anniversary! What are your check-in and check-out dates?
user: We're planning to check-in on May 18, 2025 and check-out
on May 29, 2025.
assistant: That's great! How many rooms do you need and what's your budget per night?
user: We need 3 rooms and our budget is around 550 per night.
assistant: I see. What kind of amenities are you looking for in a hotel?
user: We'd like a hotel with a gym, pool, and air conditioning.
assistant: Got it. Would you prefer a hotel in a specific neighborhood in St. Louis?
user: Yes, we'd like to stay in Downtown St. Louis.

Box 4. Sample conversation for restaurants
assistant: Welcome, what brings you here today?
user: I'm gluten-free and I'm looking for a restaurant that can accommodate my dietary needs.
assistant: Don't worry, I'm here to help. What kind of cuisine are you interested in?
user: I'm really craving Livonian.
assistant: There are some great Livonian places that offer gluten-free options. Are you looking for something in a specific neighborhood?
user: Yeah, I'd love something in Las Vegas, somewhere in Summerlin.
assistant: I can check what's available in Summerlin. What's your budget for this meal?
user: I'm looking to spend around 120 per person.
assistant: That helps. Would you like me to recommend a few places that fit your criteria?
user: Please do.

Box 5. Sample conversation for flights and hotels
assistant: Hello, how can I assist you today?
user: I'm looking for a flight from Atlanta to Memphis and a hotel in Memphis.
assistant: What are your travel dates?
user: My departure date is May 18, 2025 and I'd like to return on June 1, 2025.
assistant: What class of flight are you interested in?
user: I'd prefer business.
assistant: And for your hotel, what star rating are you looking for?
user: I'd like a 2 star hotel in Downtown Memphis.

Box 6. Sample conversation for hotel and restaurant
assistant: Hello! How can I assist you today?
user: I'm looking for a hotel in San Antonio and a restaurant that serves Turkish.
assistant: That sounds like a great plan!
Can you tell me a bit more about your hotel preferences, such as the number of rooms and check-in date?
user: I need 3 rooms for 5 people, checking in on May 22, 2025 and checking out on May 27, 2025.
assistant: I'd be happy to help you with that. What's your budget per night for the hotel?
user: I'm looking to spend around 50 per night.
assistant: That's helpful to know. For the restaurant, are you looking for something with a specific rating or price range?
user: Yes, I'd like a restaurant with a rating of at least 3.5 and a price range of around 85.
assistant: I'll keep that in mind. Would you like the hotel to have any specific amenities, such as a gym or pool?
user: Yes, a pool would be great. And can you recommend a restaurant with Turkish cuisine near the hotel?

Box 7. Sample conversation for hotel and
attraction
assistant: Hi there! What brings you here today?
user: I'm planning a last-minute trip to Portland for 2 people.
assistant: Last-minute trips can be exciting! What's your budget for the hotel?
user: We're looking to spend around 55 per night.
assistant: Okay, that helps. What are your check-in and check-out dates?
user: We're checking in on June 28, 2025 and checking out on July 10, 2025.
assistant: Are you interested in any specific attractions in Portland?
user: Yes, we want to visit Pittock Mansion, which is a Social attraction.
assistant: That sounds like a great choice! I can help you find a hotel near Pittock Mansion. What kind of amenities are you looking for in the hotel?
user: We need a hotel with a gym and wifi.

Box 8. Sample conversation for flight-hotel-attraction
assistant: Hello, I'd be happy to help you plan your trip. What's on your mind?
user: I'm thinking of going to Las Vegas for my birthday and I need a business flight from Tucson.
assistant: Happy birthday in advance. When are you thinking of traveling?
user: We're looking at May 19, 2025 to May 27, 2025.
assistant: That's a great time. How many people are traveling?
user: Just 2 of us.
assistant: Lovely. For your stay, what kind of hotel are you envisioning?
user: A 2 star hotel with a spa and room service would be perfect. And we'd love to stay near Fountains of Bellagio with a beautiful view.
assistant: That sounds like a great choice! I can help you find a hotel near Pittock Mansion. What kind of amenities are you looking for in the hotel?

Box 9. Sample conversation for flight-hotel-restaurant
assistant: Welcome to our travel assistant. How can I help you today?
user: I'm looking to travel from Tucson to St. Louis for a business trip.
assistant: What are your preferred travel dates?
user: I'd like to fly out on May 18, 2025 and come back on June 1, 2025.
assistant: What class would you like to fly in?
user: I prefer first.
assistant: Great, I can look into that.
Where would you like to stay in St. Louis?
user: I'd like a hotel in Downtown St. Louis with a 3 star rating.
assistant: And what kind of restaurant are you looking for?
user: I'd like a restaurant in Downtown St. Louis with a rating of at least 3.5.

I System Prompt Design for Single-Domain Template Generation

This section details the unified system prompt and its domain-specific customizations used for generating diverse, realistic multi-turn conversations for four domains: Flights, Hotels, Attractions, and Restaurants.

I.1 Unified Prompt Instructions (Common Across Domains)

The following design principles apply across all domains to ensure consistency, diversity, and realism in the generated templates:

• Conversation Format:
  – Each template contains 6–8 dialogue turns (3–4 exchanges).
  – Each template starts with an assistant turn and ends with a user turn.
  – Turns alternate between assistant: and user: .
• Placeholder Handling:
  – Use numbered placeholders like <CITY_1>, <PRICE_2>.
  – Numbering resets
at the start of each new template.
• Assistant Behavior Diversity:
  – Vary personality: formal/informal, concise/verbose.
  – Ask both open-ended and specific questions.
  – Do not assume user intent or provide options in the first message.
  – Correct misunderstandings if needed.
• User Behavior Diversity:
  – Include verbose, terse, indecisive, and overly specific users.
  – Some users provide minimal context; others give excess information.
  – Include questions that revisit or skip steps.
• Flow Diversity:
  – Avoid fixed dialogue patterns (e.g., greet → date → options).
  – Allow nonlinear flow: backtracking, parallel requests, etc.
• Language Diversity:
  – Vary greetings, transitions, sentence structure, and phrasings.
  – No repeated phrasing across templates.
• Output Requirements:
  – Output format is strict JSON with keys like "template_1" through "template_30".
  – Each value is a newline-separated conversation string.
  – No preambles or postambles allowed.

I.2 Domain-Specific Customization

Domain | Placeholders Used | Required Scenarios
Flight | <CITY_x>, <CLASS_x>, <DEPARTURE_DATE_x>, <DEPARTURE_TIME_x>, <ARRIVAL_DATE_x>, <ARRIVAL_TIME_x>, <AIRLINE_x>, <PRICE_x>, <AIRPORT_x>, <NUM_TRAVELERS_x> | Round trip (with 2 departure dates), one-way, multi-city, strict arrival time, flexible dates, large group bookings, solo/business/family travel, weekend getaways, budget constraints, special occasions (e.g., birthdays)
Hotel | <CHECK_IN_DATE_x>, <CHECK_OUT_DATE_x>, <CITY_x>, <CITY_x_NEIGHBORHOOD_y>, <STAR_x>, <RATING_x>, <PRICE_x>, <NUM_ROOMS_x>, <NUM_TRAVELERS_x> | Multi-city trips, multiple hotels in one city, specific neighborhoods, special occasions, last-minute reservations, business vs. leisure, extended stays, view/accessibility/pet-friendly requests, family and group accommodations
Attraction | <ATTRACTION_TYPE_x>, <CITY_x>, <STATE_x>, <CITY_x_NEIGHBORHOOD_y> | Multi-type attraction queries, across cities/states, neighborhood-specific exploration, curiosity about state-level offerings.
Assistant must ask open-ended questions and only mention placeholders if the user provides them.
Restaurant | <RESTAURANT_RATING_x>, <RESTAURANT_PRICE_x>, <RESTAURANT_CUISINE_x>, <CITY_x>, <CITY_x_NEIGHBORHOOD_y> | Multi-city dining, cuisine preferences, dietary restrictions (e.g., nut-free, vegan, halal), budget vs. premium, special occasion dining, rating-focused or casual vs. upscale preferences

Table 19: Domain-specific customization for template generation. Each domain builds on the shared prompt instructions with unique placeholders and required conversation scenarios.

J System Prompt Design for Multi-Domain Template Generation

This section outlines the system prompt instructions used for generating multi-turn conversations that span multiple domains. These prompts are tailored to create realistic, nonlinear interactions involving combinations of Flights, Hotels, Attractions, and Restaurants.

J.1 Unified Prompt Instructions (Shared Across Multi-Domain Prompts)

• Conversation Format:
  – Each template includes at least 8–10 dialogue turns (4–5 exchanges).
  – Each template begins with an assistant turn and ends with a user turn.
  – Turns alternate between assistant: and user: .
• Placeholder Handling:
  – Use only predefined placeholders (e.g., <CITY_1>, <DEPARTURE_DATE_1>).
  – Numbering restarts from 1 in every template.
  – Assistant must never mention a placeholder until the user provides it.
• Assistant Behavior:
  – Vary personality (formal, informal), verbosity, and strategies (direct vs. open-ended questions).
  – Must not assume user intent or combine domain-specific questions unless the user initiates.
  – Must correct misunderstandings and adapt to user behavior.
• User Behavior:
  – Include users who are indecisive, impatient, verbose, overly specific, or vague.
  – Users may change their mind, skip steps, or backtrack in conversation.
• Conversation Flow:
  – Avoid rigid order (e.g., flight → hotel → attraction).
  – Encourage nonlinear scenarios: jumping between domains, multiple questions at once.
• Language and Style:
  – Vary greetings, transitions, sentence structures, and terminology across templates.
  – Avoid repeated phrasings.
• Output Requirements:
  – Format is strict JSON: "template_1" through "template_30".
  – Each value is a newline-separated string of alternating assistant/user
dialogue.
  – No preamble, postamble, or markdown allowed.

J.2 Multi-Domain Scenario Requirements

Domains | Placeholders Used | Required Scenarios
Flight + Hotel | <CITY_x>, <CLASS_x>, <DEPARTURE_DATE_x>, <CHECK_IN_DATE_x>, <CHECK_OUT_DATE_x>, <NUM_TRAVELERS_x> | Trip planning involving both air travel and accommodation. Includes round-trip and one-way flights, hotel stays across one or more cities, budget and luxury travelers, group vs. solo travel, flexible dates, specific preferences like non-stop flights or hotels with amenities (e.g., pool, gym, pet-friendly).
Flight + Hotel + Attraction | All flight, hotel, and attraction placeholders (e.g., <CITY_x>, <CHECK_IN_DATE_x>, <ATTRACTION_TYPE_x>) | User planning a full trip involving flights, hotel stays, and sightseeing. Must include: round-trip and one-way flights, multi-city stays, hotels near attractions, flexible schedules, user-provided attraction types or specific attraction names (only when linking to hotel proximity), and special constraints (e.g., group travel, business trips, anniversaries). Assistant must never use placeholder names before the user introduces them. Includes amenity discussions and attraction-type grammar handling (e.g., cultural vs. <ATTRACTION_TYPE_1> attraction).
Flight + Hotel + Restaurant | All flight, hotel, and restaurant placeholders (e.g., <RESTAURANT_CUISINE_x>, <RESTAURANT_PRICE_x>) | End-to-end trip planning including flights, accommodation, and dining. Required scenarios include: dietary restrictions, cuisine-first or city-first planning, restaurant proximity to hotel, multiple hotels or restaurants in one city, round-trip and one-way flights, and amenities like pet-friendly or spa. Must feature diverse language, nonlinear flows, and varied user personas. Includes explicit handling of dietary needs (e.g., halal, nut-free, vegetarian), and placeholder logic as per the prompt.
Hotel + Attraction | <CITY_x>, <CHECK_IN_DATE_x>, <ATTRACTION_TYPE_x>, <CITY_x_NEIGHBORHOOD_y> | User looking for accommodations and nearby attractions. Includes: specific neighborhoods, family-friendly vs. solo travel, themed vacations, extended stays, proximity to cultural or outdoor attractions, and cases where attraction type drives hotel location.
Hotel + Restaurant | <RESTAURANT_CUISINE_x>, <HOTEL_PRICE_x>, <CITY_x_NEIGHBORHOOD_y> | Combines hotel booking with dining preferences. Includes cuisine-specific searches, budget dining vs. fine dining, dietary needs (gluten-free, halal), special events (anniversaries), restaurant proximity to hotel, and group size considerations. Templates vary in planning flow: some start with the hotel, others with restaurants.

Table 20: Multi-domain combinations and scenario requirements. Each configuration builds on the unified instructions with additional placeholder logic and domain-specific complexity.

K Example System Prompt: Flight Domain

Your task is to generate diverse conversation templates for a flight finder chatbot. Each template should capture realistic, UNIQUE conversations between a user and an assistant about flight searches.

Placeholders to Use:
- Departure city: <CITY_x>
- Destination arrival city: <CITY_x>
- Flight class: <CLASS_x>
- Departure date: <DEPARTURE_DATE_x>
- Departure time: <DEPARTURE_TIME_x>
- Destination arrival date: <ARRIVAL_DATE_x>
- Destination arrival time: <ARRIVAL_TIME_x>
- Airline name: <AIRLINE_x>
- Price: <PRICE_x>
- Airport's name: <AIRPORT_x>
- Number of travelers: <NUM_TRAVELERS_x>

Key Requirements:
1. Placeholder Numbering:
- Use integers for "x" (e.g., <CITY_1>, <CITY_2>).
- Reset numbering to 1 for each new template.
2. Conversation Format & Structure:
- 6-8 turns minimum per template (3-4 exchanges).
- Format as alternating assistant: and user: lines.
- Each template must start with an assistant turn and end
with a user turn.
- Vary conversation lengths: some should be shorter, some longer.
3. Assistant Behavior Diversity:
- Use distinctly different assistant personalities (formal, casual, verbose, concise).
- Vary how information is requested (direct questions vs. open-ended).
- Include templates where the assistant corrects misunderstandings.
- NOTE: Assistant's first turn must be generic and not assume user intent (e.g., "How can I help you?").
- NOTE: The assistant must not say it has found options for the user.
4. User Behavior Diversity:
- Create dramatically different user types (impatient, polite, verbose, terse).
- Include users who provide minimal information (requiring follow-ups).
- Include users who provide too much information.
- Include complex, specific requests.
- Include indecisive users who change their minds.
5. Conversation Flow Diversity:
- Avoid standard "greeting -> dates -> preferences -> options" structure.
- Include nonlinear conversations (user circles back to earlier topics).
- Include users asking multiple questions at once.
- Include conversations skipping obvious steps.
- Vary order in which info is elicited.
6. Required Scenarios:
- Multi-city trips
- Round trip
- One-way trip
- Flexible travel dates
- Users with strong preferences (airline, class, airport, etc.)
- Non-stop flights only
- Short layovers are okay
- Rigid arrival time
- Special occasions (e.g., anniversaries)
- Large group bookings
- Last-minute reservations
- Business travel
- Family vacations
- Solo travelers
- Budget constraints
- Weekend getaways
7. Round Trip Requirement:
- Roundtrip flights must include 2 departure dates.
8. Multiple Travelers:
- Always include number of travelers via placeholder in multi-person scenarios.
9. Linguistic Diversity:
- Avoid repeating same greetings across templates.
- Vary language used for dates, preferences, and constraints.
- Use different phrasings for similar ideas.
- Unique transitions between topics.
- Varied sentence structures and response styles.

Important: Each template must be fundamentally different in structure, flow, language, and scenario.

L Example System Prompt: Flight-Hotel-Restaurant Domain

Your task is to generate diverse conversation templates for a travel assistant chatbot. Each template should be a realistic and UNIQUE multi-turn interaction between a user and an assistant that includes:
- Flight search
- Hotel booking
- Restaurant recommendations or reservations

Placeholders to Use:
- Flight: <CITY_x>, <CLASS_x>, <DEPARTURE_DATE_x>, <DEPARTURE_TIME_x>, <ARRIVAL_DATE_x>, <ARRIVAL_TIME_x>, <AIRLINE_x>, <PRICE_x>, <AIRPORT_x>, <NUM_TRAVELERS_x>
- Hotel: <HOTEL_x>, <CHECKIN_DATE_x>, <CHECKOUT_DATE_x>, <ROOM_TYPE_x>, <NUM_GUESTS_x>, <PRICE_x>
- Restaurant: <RESTAURANT_x>, <CUISINE_x>, <RESERVATION_TIME_x>, <RESERVATION_DATE_x>, <PRICE_RANGE_x>, <LOCATION_x>, <NUM_PEOPLE_x>

Key Requirements:
1. Conversation must include all 3 components: flight, hotel, and restaurant.
2. Placeholder Numbering:
- Use integers for "x" (e.g., <CITY_1>, <CITY_2>).
- Reset numbering to 1 for each new template.
3. Conversation Format:
- At least 8 turns (4 full exchanges).
- Alternating assistant: and user: lines.
- Assistant always starts the conversation and user always ends it.
- Vary length and structure across templates.
4. Assistant Behavior Diversity:
- Include different tones: friendly, formal, efficient, humorous, inquisitive, etc.
- Vary assistant strategies (confirming info early vs. late, etc.).
- Assistant cannot confirm bookings or give specific results.
- Must clarify ambiguous or missing information.
5. User Behavior Diversity:
- Mix of confident and indecisive users.
- Include users who change minds mid-conversation.
- Include users who ask multiple things at once.
- Include low-information and high-information users.
6. Conversation Flow Diversity:
- Vary the sequence (flight -> hotel -> restaurant, or restaurant -> hotel -> flight, etc.).
- Include users jumping between topics or circling back.
- Include unexpected questions or constraints from users.
7. Required Scenarios:
- Honeymoon trip
- Last-minute business travel
- Budget backpacking vacation
- Solo food tourism
- Family summer vacation
- Couple anniversary trip
- Conference trip with team
- Weekend getaway
- Luxury experience
- Specific date events (concerts, sports games, etc.)
8. Linguistic Diversity:
- Avoid repeating phrasing and structures across templates.
- Vary tone and style significantly.
- Mix concise and elaborate dialogues.

Important: Every template must feel distinct from all others in tone, language, structure, and scenario.

M Example System Prompt: Plan Generation Prompt

You are an expert AI travel planner and your responsibility is to generate Python code using APIs or Tools.

N Example User Prompt: Plan Generation Prompt

Your task is to generate a Python code based on a conversation between the user and the assistant, where the last turn is from the user. The code typically involves calling one or more tools (functions) to help the user in planning their travel request. In the Python code, you need to use the following tools:

# TOOL CONFIG
<TOOL_CONFIG>

# INSTRUCTIONS
- Track content: Maintain the conversation state across turns and use all known information from earlier in the conversation.
- As soon as the mandatory parameters (non-optional parameters) are all provided (refer to TOOL CONFIG to find mandatory parameters for each tool), generate the appropriate plan using Python code.
- Do NOT modify entity values under any circumstances.
Use them exactly as they appear in the conversation while populating attributes in the function during code generation. For example, if the city is "new york" (lowercase), do not convert it to "New York" or "NYC".
- Do not fill optional parameters unless they are explicitly provided in the conversation.
- When generating seek_information, only mention mandatory parameters (non-optional parameters) that are missing. Never ask for optional parameters using seek_information. Refer to TOOL CONFIG to figure out what the mandatory parameters (non-optional parameters) are and check CONVERSATION to know what parameters have been provided by the user. For example, "seek_information('<explain what mandatory parameters (non-optional parameters) are missing and must be gathered by the assistant>')"
- Only generate the code for the domain which the customer has mentioned in the conversation. For example, if the user mentioned only attractions, don't generate the code with a restaurant search. Only if the user mentioned searching for a restaurant anywhere in the conversation should you search for restaurants.
- If a tool result from a previous turn is still valid and relevant, use get_results_from_cache(key="<cache_key>") to retrieve it. Use the cache summary to determine the most appropriate key to select from. If you have many keys in the cache for the same domain, use the one which would be most relevant.
-
If you generate a tool call and its result could be reused later, save it with save_to_cache("<key>", value). Ensure the cache key is unique and avoid naming collisions with previously stored cache key names.
- If a result has already been stored in the cache for a conversation and no new result needs to be generated, do not regenerate the code. Instead, return the code as "print("No planning needed")".
# OUTPUT FORMAT
- You need to generate the reasoning and the Python code. The reasoning should clearly explain the process, the steps, and the rationale behind the Python plan that is going to be generated. The reasoning should be within the <REASONING> </REASONING> tags and the Python code should be within the <CODE> </CODE> tags. Note: while generating the Python code, never include any markdown tags.
# EXAMPLES
<FEW_SHOT_EXAMPLES>
# CONVERSATION
<CONVERSATION>
# CACHE
<CACHE_FOR_CONVERSATION>
Given the provided conversation and cache summary, generate a Python code for the last user turn.

O Template: Plan Generation for Fine-Tuning

You are an expert AI travel planner and your responsibility is to generate Python code using APIs or Tools. Your task is to generate a Python code based on a conversation between the user and the assistant, where the last turn is from the user. The code typically involves calling one or more tools (functions) to help the user in planning their travel request. In the Python code, you need to use the following tools:
# TOOL CONFIG
<TOOL_CONFIG>
# INSTRUCTIONS
- Track content: Maintain the conversation state across turns and use all known information from earlier in the conversation.
- As soon as the mandatory parameters (non-optional parameters) are all provided (refer to TOOL CONFIG to find mandatory parameters for each tool), generate the appropriate plan using Python code.
- Do NOT modify entity values under any circumstances. Use them exactly as they appear in the conversation while populating attributes in the function during code generation.
For example, if the city is "new york" (lowercase), do not convert it to "New York" or "NYC".
- Do not fill optional parameters unless they are explicitly provided in the conversation.
- When generating seek_information, only mention mandatory parameters (non-optional parameters) that are missing. Never ask for optional parameters using seek_information. Refer to TOOL CONFIG to figure out what the mandatory parameters (non-optional parameters) are and check CONVERSATION to know what parameters have been provided by the user. For example, "seek_information('<explain what mandatory parameters (non-optional parameters) are missing and must be gathered by the assistant>')"
- Only generate the code for the domain which the customer has mentioned in the conversation. For example, if the user mentioned only attractions, don't generate the code with a restaurant search. Search for restaurants only if the user mentioned searching for restaurants anywhere in the conversation.
- If a tool result from a previous turn is still valid and relevant, use get_results_from_cache(key="<cache_key>") to retrieve it. Use the cache summary to determine the most appropriate key to select from. If you have many keys in the cache for the same domain.
Use the one which would be most relevant.
- If you generate a tool call and its result could be reused later, save it with save_to_cache("<key>", value). Ensure the cache key is unique and avoid naming collisions with previously stored cache key names.
- If a result has already been stored in the cache for a conversation and no new result needs to be generated, do not regenerate the code. Instead, return the code as "print("No planning needed")".
# OUTPUT FORMAT
- You need to generate only the Python code. The Python code should be within the <CODE> </CODE> tags. Note: while generating the Python code, never include any markdown tags.
# CONVERSATION
<CONVERSATION>
# CACHE
<CACHE_FOR_CONVERSATION>
Given the provided conversation and cache summary, generate a Python code for the last user turn.

P Script to replicate SFT

>> git clone https://github.com/huggingface/trl.git
>> cd trl
>> git checkout v0.17.0  # We use the TRL version 0.17.0
>> pip install -e .
>> accelerate launch --config_file trl/examples/accelerate_configs/multi_gpu.yaml \
     trl/scripts/sft.py \
     --model_name_or_path <YOUR PATH TO A Meta-Llama-3.1-8B-Instruct LOCALLY> \
     --dataset_name <TRAIN SET OF THE TOOL CALLING IN HUGGINGFACE DATASET FORMAT> \
     --learning_rate 2.0e-5 \
     --num_train_epochs 1 \
     --packing \
     --per_device_train_batch_size 1 \
     --gradient_accumulation_steps 8 \
     --gradient_checkpointing \
     --logging_steps 25 \
     --eval_strategy steps \
     --eval_steps 100 \
     --use_peft \
     --lora_r 32 \
     --lora_alpha 32 \
     --lora_target_modules q_proj k_proj v_proj o_proj gate_proj up_proj down_proj \
     --output_dir <SOME OUTPUT DIRECTORY> \
     --report_to tensorboard \
     --torch_dtype bfloat16
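The cache helpers referenced in the prompts above (save_to_cache, get_results_from_cache) are not specified further in the templates; a minimal in-memory sketch, assuming a plain dictionary store and string keys (both assumptions, not the actual system), might look like:

```python
# Minimal in-memory sketch of the cache helpers referenced in the
# prompt templates above. The dict-backed store and the KeyError on
# collisions are assumptions for exposition, not the actual system.
_CACHE = {}

def save_to_cache(key, value):
    """Store a tool result under a unique key, refusing collisions."""
    if key in _CACHE:
        raise KeyError(f"cache key collision: {key!r}")
    _CACHE[key] = value

def get_results_from_cache(key):
    """Retrieve a previously stored tool result by its key."""
    return _CACHE[key]

# Usage: cache a (hypothetical) flight-search result, then reuse it later.
save_to_cache("flights_nyc_to_paris", [{"flight": "AF007"}])
print(get_results_from_cache("flights_nyc_to_paris"))  # -> [{'flight': 'AF007'}]
```

A real implementation would also need per-conversation scoping and the cache summary the prompts mention, which this sketch omits.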
MASLab: A Unified and Comprehensive Codebase for LLM-based Multi-Agent Systems

Rui Ye1, Keduan Huang1, Qimin Wu1, Yuzhu Cai1, Tian Jin1, Xianghe Pang1, Xiangrui Liu1, Jiaqi Su1, Chen Qian1, Bohan Tang3, Kaiqu Liang4, Jiaao Chen5, Yue Hu6, Zhenfei Yin3,7, Rongye Shi8, Bo An9, Yang Gao10, Wenjun Wu8, Lei Bai2,†, Siheng Chen1,†

1Shanghai Jiao Tong University 2Shanghai AI Laboratory 3University of Oxford 4Princeton University 5Meta 6University of Michigan 7The University of Sydney 8Beihang University 9Nanyang Technological University 10Nanjing University

MASLab: https://github.com/MASWorks/MASLab

Abstract

LLM-based multi-agent systems (MAS) have demonstrated significant potential in enhancing single LLMs to address complex and diverse tasks in practical applications. Despite considerable advancements, the field lacks a unified codebase that consolidates existing methods, resulting in redundant re-implementation efforts, unfair comparisons, and high entry barriers for researchers. To address these challenges, we introduce MASLab, a unified, comprehensive, and research-friendly codebase for LLM-based MAS. (1) MASLab integrates over 20 established methods across multiple domains, each rigorously validated by comparing step-by-step outputs with its official implementation. (2) MASLab provides a unified environment with various benchmarks for fair comparisons among methods, ensuring consistent inputs and standardized evaluation protocols. (3) MASLab implements methods within a shared streamlined structure, lowering the barriers for understanding and extension. Building on MASLab, we conduct extensive experiments covering 10+ benchmarks and 8 models, offering researchers a clear and comprehensive view of the current landscape of MAS methods. MASLab will continue to evolve, tracking the latest developments in the field, and invites contributions from the broader open-source community.
Figure 1: MASLab: A unified, comprehensive, and research-friendly codebase for LLM-based MAS. We support fairly comparing over 20 methods, whose correctness is manually verified.

Preprint. Under review. arXiv:2505.16988v1 [cs.CL] 22 May 2025

1 Introduction

Large language models (LLMs) [1, 2, 3, 4, 5] have achieved remarkable success and are being increasingly applied across various domains [6, 7, 8, 9]. However, despite their continuous advancements, a single LLM inherently faces limitations such as unreliable and random generation [10, 11], hallucinations [12, 13], and difficulty handling complex, multi-step tasks [14, 15]. These limitations hinder their ability to effectively tackle the full spectrum of real-world applications on their own. The limitations of single LLMs have driven emerging research towards the development of LLM-based multi-agent systems (MAS) [16, 17, 18, 19], where multiple agents, each with distinct roles, contexts, and tools, collaborate to address complex tasks more effectively. This paradigm has shown great promise across a range of applications, including code generation [16, 20], mathematical problem-solving [21, 22], academic research [23, 24], and data synthesis [25, 26].
Over the past year, this field has seen rapid development, evolving from early MAS approaches that rely on manually designed, fixed systems [ 16,27,20,28,29,21] to more dynamic systems where the roles and behaviors of the agents are adaptable [ 18,19,30,31,32]. This ongoing evolution is steering the field towards greater automation and generalization, with the potential to create more intelligent systems. Despite the rapid progress in LLM-based MAS, the field lacks a unified codebase that consolidates the various methods and | https://arxiv.org/abs/2505.16988v1 |
algorithms. This gap results in several critical issues that hinder the field's long-term advancement: (1) Redundant effort. Without shared, accessible resources, researchers expend significant time reimplementing existing works, diverting effort from innovative contributions. (2) Unfair comparison. Varied implementation designs of individual codebases, such as differing dataset preprocessing and evaluation protocols, complicate fair and reliable comparisons across methods. (3) High entry barriers. Newcomers face difficulties navigating through disparate repositories, with no clear starting points. Addressing these challenges is crucial to accelerate research and promote cohesive progress in the field. However, unifying numerous methods—which originally employ distinct codebase styles, architectures, and dependencies—into one codebase poses significant challenges. This requires not only substantial effort for re-implementation and verification, but also a comprehensive understanding of all methods to enable unification. To bridge this gap, we present MASLab, the first unified codebase for LLM-based MAS, integrating over 20 established methods (e.g., the most cited or those accepted by recent top-tier venues) with a coherent high-level structure and standardized evaluations; see overview in Table 1. (1) MASLab consolidates diverse research spanning multiple domains—including general tasks [29], coding [16], and mathematics [21]—covering representative advancements from March 2023 through March 2025. Each method integrated into MASLab has been rigorously verified by comparing step-by-step outputs with its official implementation, greatly reducing redundant reimplementation efforts for future researchers. (2) MASLab supports unified evaluations across a wide array of benchmarks, ensuring consistent inputs and standardized evaluation protocols. This facilitates reliable and fair comparisons, emphasizing core methodological differences rather than implementation disparities.
(3) All methods are implemented within a streamlined, high-level structure, where each is encapsulated as a core inference function that processes a query and delivers the MAS response. This transparent structure explicitly highlights key methodological components, significantly lowering entry barriers and enabling researchers to easily understand, extend, and innovate upon existing approaches. Based on MASLab, we conduct comprehensive experiments to benchmark the implemented methods, offering the research community a clear understanding of the current landscape of LLM-based MAS. Our evaluation covers 10+ benchmarks spanning diverse domains—including general question answering, mathematics, coding, science, and medicine—using 8 LLM backbones including Llama-3.3-70B-Instruct, Qwen-2.5-7/14/32/72B-Instruct, and GPT-4o-mini/4.1-mini/4.1 models. Our analysis examines the impact of varying evaluation protocols adopted by prior studies, the scaling behavior with respect to method configuration and model size, and failure cases. Notably, we demonstrate that discrepancies in evaluation protocols can lead to substantial variation in performance rankings, underscoring the importance of a unified codebase for fair and reproducible comparisons.

Table 1: Descriptions of the 24 methods that MASLab currently supports. We show several critical perspectives of MAS methods. (1) Role: whether agents' roles in the method are fixed or dynamic. (2) Topo.: whether the topology in the method is fixed or dynamic. (3) Tool: whether the method includes tool usage. (4) Optim.: whether the method is optimizable. (5) Generalization: whether the method can generalize to handle diverse tasks.

No. | Methodology | Venue | Role | Topo. | Tool | Optim. | Generalization
Single-Agent Baselines
1 | Vanilla LLM | - | Fixed | Fixed | No | No | Yes
2 | CoT [33] | NeurIPS 2022 | Fixed | Fixed | No | No | Yes
Multi-Agent Systems for General Tasks
3 | CAMEL [17] | NeurIPS 2023 | Fixed | Fixed | No | No | Yes
4 | AutoGen [34] | ICLR-W 2024 | Fixed | Fixed | Yes | No | Yes
5 | Self-Consistency [35] | ICLR 2024 | Fixed | Fixed | No | No | Yes
6 | AgentVerse [29] | ICLR 2024 | Dynamic | Fixed | No | No | Yes
7 | LLM Debate [27] | ICML 2024 | Fixed | Fixed | No | No | Pre-defined Roles
8 | GPTSwarm [32] | ICML 2024 | Fixed | Dynamic | Yes | Yes | Validation-Required
9 | DyLAN [31] | COLM 2024 | Fixed | Dynamic | No | No | Pre-defined Roles
10 | MAD [28] | EMNLP 2024 | Fixed | Fixed | No | No | Pre-defined Roles
11 | Self-Refine [36] | NeurIPS 2024 | Fixed | Fixed | No | No | Yes
12 | MacNet [37] | ICLR 2025 | Fixed | Fixed | No | No | Pre-defined Roles
13 | ADAS [18] | ICLR 2025 | Fixed | Fixed | Yes | Yes | Validation-Required
14 | AFlow [30] | ICLR 2025 | Fixed | Fixed | Yes | Yes | Validation-Required
15 | MAV [38] | ICLR-W 2025 | Fixed | Fixed | No | No | Yes
16 | MAS-GPT [19] | ICML 2025 | Dynamic | Dynamic | Yes | Yes | Yes
Multi-Agent Systems for Coding Tasks
17 | MetaGPT [20] | ICLR 2024 | Fixed | Fixed | Yes | No | Coding-Specific
18 | ChatDev [16] | ACL 2024 | Fixed | Fixed | Yes | No | Coding-Specific
19 | MapCoder [39] | ACL 2024 | Fixed | Fixed | Yes | No | Coding-Specific
20 | EvoMAC [40] | ICLR 2025 | Dynamic | Dynamic | Yes | No | Coding-Specific
Multi-Agent Systems for Mathematical Tasks
21 | MACM [21] | NeurIPS 2024 | Fixed | Fixed | No | No | Math-Specific
Multi-Agent Systems for Scientific Tasks
22 | MedAgents [41] | ACL-F 2024 | Fixed | Fixed | No | No | Medicine-Specific
Multi-Agent Systems for Tool-Required Tasks
23 | OWL-Roleplaying [42] | GitHub 2025 | Fixed | Fixed | Yes | No | Yes (with Proper Tools)
24 | ReAct-MASLab [43] | ICLR 2023 | Fixed | Fixed | Yes | No | Yes (with Proper Tools)

2 Related Work

LLM-based MAS. LLM-based multi-agent systems (MAS) extend the capabilities of LLMs by enabling collaborative interactions among multiple agents. CAMEL [17] and AutoGen [34] primarily focus on two-agent (user–assistant) role-playing, while MetaGPT [20] and ChatDev [16] assign multiple specialized roles (e.g., coder, reviewer) for a fixed software development pipeline. Debate-style systems [27, 28, 44] employ multiple agents to propose and criticize solutions.
AgentVerse [29] and DyLAN [31] allow iterative adjustment of team configurations during task execution. While these fixed-role architectures demonstrate the potential of MAS, they rely heavily on manually defined roles and workflows, limiting generalizability across tasks. To address this, recent works explore automatic workflow generation [19, 18, 45, 46, 47]. GPTSwarm [32] models agents as an optimizable graph of LLM operations refined via validation feedback. Similarly, ADAS [18] and AFlow [30] leverage a strong meta-agent to iteratively design agentic workflows. MAS-GPT [19] trains an LLM that generates an executable MAS based on each user query. However, these methods are implemented in isolated codebases, leading to redundant efforts, inconsistent evaluations, and steep entry barriers. MASLab resolves these issues by providing a unified and comprehensive codebase that supports all of the above methods within an extensible framework.

LLM-agent codebase. In parallel with algorithmic advances, several open-source frameworks have emerged to facilitate the development of LLM-based agents. CAMEL [17] and AutoGen [34] introduce conversational agent frameworks based on role-playing. LangChain [48], LangGraph [49], and OpenAgents [50] provide low-code environments for constructing LLM-driven applications and workflows. However, none
of these frameworks are designed specifically for research purposes: they lack implementations of representative multi-agent methods from the existing literature and offer limited support for systematic evaluation. In contrast, our MASLab offers the first all-in-one research-friendly codebase that integrates the community's collective progress in LLM-based MAS.

Figure 2: Overview of the MASLab codebase. MASLab incorporates and unifies the whole pipeline from data pre-processing to evaluation, ensuring that inputs to all methods are aligned, non-algorithmic configurations are standardized, and the evaluation protocols are consistent and accurate. All 20+ methods are represented by a similar streamlined Python class structure.

3 MASLab

MASLab is a unified, comprehensive, research-oriented codebase for LLM-based multi-agent systems (MAS). It consolidates over 20 published MAS methods with consistent basic inference configurations and unified evaluation protocols, enabling fast and fair algorithmic comparisons. All methods are verified by comparing their intermediate outputs with the official implementations.

3.1 Inference of MAS

In order to unify and streamline the diverse MAS codebases in the field, MASLab focuses on four key aspects during inference that ensure consistency and fairness across different methods: representation of MAS, inputs, configurations, and accessible resources.
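The streamlined representation sketched in Figure 2 — each method as a Python class whose core is an inference function mapping a query to a MAS response — might look like the following illustrative sketch. The class names, helper methods, and stubbed LLM call are assumptions for exposition, not MASLab's actual API:

```python
# Illustrative sketch of a streamlined MAS representation in the spirit
# of Figure 2. Class names, helpers, and the stub LLM are assumptions,
# not MASLab's actual API.

class MAS:
    """Base class: shared LLM access and crude token accounting."""

    def __init__(self, llm):
        self.llm = llm
        self.tokens_used = 0

    def request(self, prompt):
        # Track a rough whitespace token count before delegating to the LLM.
        self.tokens_used += len(prompt.split())
        return self.llm(prompt)

    def inference(self, sample):
        raise NotImplementedError


class DebateExample(MAS):
    """Toy method: several agents answer, then we aggregate by majority."""

    def __init__(self, llm, n_agents=3):
        super().__init__(llm)
        self.n_agents = n_agents

    def inference(self, sample):
        solutions = [self.request(f"Agent {i}: solve: {sample}")
                     for i in range(self.n_agents)]
        # Aggregate the candidate solutions by majority vote.
        return max(set(solutions), key=solutions.count)


# Usage with a deterministic stub standing in for a real LLM backend.
stub_llm = lambda prompt: "42"
mas = DebateExample(stub_llm)
print(mas.inference("What is 6 * 7?"))  # -> 42
```

A researcher can then understand a method by inspecting `inference` alone, which is the readability property the paper attributes to this structure.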
These aspects are designed to eliminate the disparities that have traditionally hindered cross-method comparisons and replication of results.

Streamlined representation of MAS. Each MAS method within MASLab is abstracted into a Python class, all of which inherit from a common base class. This base class provides shared functionality across methods, such as making LLM requests and tracking LLM token consumption. The core of each method is the inference function, which takes a data sample (e.g., a mathematical problem) as input and outputs the solution generated by the MAS. By standardizing the representation in this manner, the structure of each MAS approach is simplified, allowing researchers to gain a clear understanding of the key steps involved by merely inspecting the inference function. In many cases, the inference process is further modularized, with specific components encapsulated as additional functions to highlight the different stages of task-solving, such as team recruitment and code execution. This design ensures that the complexity inherent in different MAS methods is handled in a consistent, easily interpretable manner, while preserving the unique features of each individual approach. For optimization-based methods [18, 32, 30], another core optimization function processes a validation set to produce an optimized MAS. See re-implementation notes in Section D.

Consistent inputs. MASLab standardizes input preprocessing for all MAS methods, ensuring fair comparisons by eliminating discrepancies. For instance, prior implementations of MapCoder [39], Reflexion [51], and EvoMAC [40] use different preprocessing on the MBPP dataset, making performance differences hard to interpret. MASLab's unified preprocessing pipeline ensures that all methods operate on identical data, relieving researchers of the need to manually prepare datasets.

Shared resources. MASLab unifies the underlying resources required by MAS methods, including LLMs and external tools.
It supports both externally hosted APIs and locally deployed models, covering a wide
range of widely used LLMs. The integrated toolkit provides common utilities such as code execution (secured via sandboxing [52]), web search, and image analysis—capabilities frequently required across MAS designs. These shared components eliminate redundant engineering effort and facilitate reproducibility. Moreover, MASLab is designed to be extensible and compatible with ongoing open-source developments (e.g., MCP [53]), ensuring long-term adaptability.

Unified configurations. MASLab standardizes non-algorithmic configurations across all methods to ensure fair and consistent comparisons. This includes aligning LLM settings (e.g., maximum token limits) and tool parameters (e.g., timeout durations for code execution). Such uniformity eliminates confounding factors introduced by implementation-level differences, allowing performance comparisons to reflect true methodological distinctions.

3.2 Evaluation of MAS

Accurate, automated, and scalable evaluation protocols are essential across all AI fields. However, existing MAS works often adopt inconsistent evaluation procedures, introducing confounding variables that hinder fair comparison. For example, certain methods may be tailored to specific evaluation heuristics (e.g., rule-based string matching) which can be gamed by emphasizing format-specific prompts for agents, thereby inflating performance without reflecting true intelligence gains. These issues underscore the need for standardized, robust evaluation protocols that reflect genuine task-solving capabilities rather than formatting tricks.

Evaluating responses with ground-truth answers. To address this, MASLab adopts a unified evaluation framework centered on LLM-based evaluation methods grounded in ground-truth answers, designed to assess semantic correctness rather than superficial formatting.
We support two primary variants: (1) a two-step pipeline using general-purpose LLMs, which first extracts a final answer from the MAS-generated output based on the query, and then compares it against the ground truth to determine correctness; (2) a direct scoring approach using task-specific evaluators (e.g., xVerify [54]), which are fine-tuned to assess correctness across various domains. In addition, MASLab includes three commonly used rule-based evaluation strategies from the MAS literature.

Figure 3: Evaluation (5 different protocols) of methods using Llama-3.3-70B-Instruct as the backend on MATH. The rankings of methods could be significantly different under different evaluation protocols, emphasizing the need for accurate and unified evaluation protocols.

Surprisingly, our empirical results on the MATH [55] benchmark (Figure 3) show that the choice of evaluation protocol significantly affects both absolute scores and method rankings. For instance, under the LLM-based two-step evaluation, MAV [38] ranks 1st, but drops to 10th under DyLAN's rule-based scheme [31]. Conversely, DyLAN itself rises from 5th to 3rd. Similarly, AgentVerse's accuracy drops from 79.0 to 25.6 when switching from the LLM-based two-step evaluation to the Hendrycks-style rule-based metric [55]. Manual inspection (Table 4) confirms the higher reliability of LLM-based evaluations, with both the two-step and xVerify approaches achieving over 98% agreement with human judgments, whereas the best-performing rule-based method reaches only 65%. Considering the performance–cost trade-off, MASLab defaults to using xVerify, while remaining open to improvements as evaluation methodologies evolve.

Evaluating coding tasks with test cases. For coding tasks, where ground-truth labels are often unavailable, MASLab similarly promotes LLM-assisted evaluation.
Since tools like xVerify are inapplicable in this setting, we employ a two-step approach: (1) an LLM extracts executable code from the
MAS output given the original query, and (2) the extracted code is executed against the provided set of test cases to determine correctness. This process ensures that evaluation focuses on functional validity and abstracts away from inconsistencies in format or verbosity. All executions are sandboxed [52] to guarantee safety and consistency.

4 Empirical Study

Table 2: Results of general methods on diverse domains (mathematics, science, knowledge, medicine, coding). Avg-V and Avg-R denote averaged accuracy value (↑) and rank (↓) across benchmarks. Best and second-best numbers are highlighted.

Method | MATH | GSM-H | AQUA | AIME | SciBe | GPQA | MMLUP | MedMC | HEval | MBPP | Avg-V | Avg-R
Llama-3.3-70B-Instruct
Single [4] | 72.8 | 52.8 | 76.0 | 23.3 | 25.5 | 48.0 | 66.8 | 70.4 | 85.4 | 67.9 | 58.9 | 8.1±2.5
CoT [33] | 74.4 | 57.0 | 76.8 | 26.7 | 24.7 | 53.0 | 69.8 | 73.8 | 85.4 | 69.7 | 61.1 | 5.5±2.8
SC [35] | 76.2 | 53.4 | 80.3 | 30.0 | 27.9 | 52.5 | 70.6 | 72.6 | 82.3 | 69.7 | 61.6 | 4.8±2.8
AutoGen [34] | 72.8 | 53.0 | 79.5 | 20.0 | 21.9 | 41.9 | 66.0 | 69.2 | 51.2 | 62.9 | 53.8 | 9.9±2.5
Debate [27] | 78.4 | 53.6 | 80.3 | 30.0 | 27.9 | 51.0 | 73.6 | 75.0 | 84.8 | 69.7 | 62.4 | 3.5±1.9
MAD [28] | 76.2 | 52.6 | 78.3 | 33.3 | 23.7 | 50.0 | 69.8 | 71.0 | 75.0 | 56.9 | 58.7 | 8.1±3.1
AgentVerse [29] | 78.6 | 51.2 | 79.5 | 23.3 | 25.7 | 51.0 | 70.4 | 72.6 | 87.8 | 71.3 | 61.2 | 4.4±3.2
DyLAN [31] | 77.6 | 53.6 | 78.3 | 33.3 | 26.7 | 54.0 | 70.4 | 72.6 | 82.9 | 70.1 | 62.0 | 4.7±2.8
MacNet [37] | 75.2 | 56.6 | 77.2 | 26.7 | 23.5 | 51.0 | 64.0 | 69.8 | 86.6 | 67.1 | 59.8 | 7.2±3.0
MAV [38] | 79.4 | 35.6 | 65.8 | 30.0 | 24.3 | 45.0 | 61.0 | 68.6 | 78.0 | 69.1 | 55.7 | 9.3±3.0
AFlow-Math [30] | 82.2 | 59.8 | 77.2 | 26.7 | 25.3 | 48.0 | 68.6 | 69.2 | 84.2 | 69.7 | 61.1 | 5.9±3.0
MAS-GPT [19] | 79.8 | 67.0 | 80.7 | 33.3 | 26.9 | 48.5 | 71.2 | 72.0 | 86.6 | 70.3 | 63.6 | 3.3±2.6
Qwen-2.5-72B-Instruct
Single [56] | 82.4 | 63.2 | 79.5 | 20.0 | 28.1 | 45.0 | 69.8 | 67.6 | 88.4 | 76.5 | 62.1 | 6.2±2.4
CoT [33] | 83.0 | 64.2 | 80.3 | 16.7 | 26.3 | 47.0 | 71.2 | 67.8 | 93.9 | 75.8 | 62.6 | 5.1±2.1
SC [35] | 86.0 | 63.2 | 83.5 | 20.0 | 28.3 | 49.0 | 73.0 | 69.0 | 90.2 | 75.8 | 63.8 | 2.8±1.8
AutoGen [34] | 81.4 | 63.2 | 78.3 | 13.3 | 26.5 | 44.4 | 70.2 | 68.4 | 75.6 | 54.9 | 57.6 | 8.8±2.6
Debate [27] | 85.4 | 62.0 | 80.3 | 20.0 | 26.9 | 49.0 | 74.0 | 71.0 | 90.2 | 77.6 | 63.6 | 3.2±2.2
MAD [28] | 83.8 | 61.6 | 80.3 | 20.0 | 26.9 | 47.0 | 65.6 | 67.4 | 72.0 | 66.3 | 59.1 | 7.6±3.4
AgentVerse [29] | 82.8 | 57.6 | 79.5 | 13.3 | 25.9 | 46.0 | 72.0 | 71.2 | 86.6 | 77.6 | 61.2 | 6.4±3.1
DyLAN [31] | 84.2 | 62.4 | 82.3 | 20.0 | 24.7 | 43.4 | 71.2 | 70.0 | 79.9 | 76.8 | 61.5 | 6.1±3.2
MacNet [37] | 82.2 | 63.0 | 80.3 | 10.0 | 25.1 | 42.4 | 65.4 | 63.8 | 87.2 | 75.5 | 59.5 | 8.5±3.3
MAV [38] | 82.2 | 20.4 | 48.0 | 0.0 | 14.5 | 46.5 | 61.2 | 65.4 | 76.2 | 74.0 | 48.8 | 9.9±3.2
AFlow-Math [30] | 84.8 | 68.4 | 78.7 | 23.3 | 28.3 | 47.5 | 69.2 | 66.2 | 87.8 | 75.5 | 63.0 | 5.7±3.5
MAS-GPT [19] | 87.0 | 65.4 | 78.3 | 20.0 | 28.1 | 49.0 | 72.6 | 66.2 | 89.0 | 78.0 | 63.4 | 4.0±3.2

Figure 4: Trade-off between performance and cost. For fair comparisons, we only plot methods that do not involve tool usage. Methods above the fitted line are more cost-effective.

Experimental setups. Our experiments cover Llama (Llama-3.3-70B-Instruct [4]), Qwen (Qwen-2.5-7/14/32/72B-Instruct [56]), and GPT (GPT-4o-mini/4.1-mini/4.1 [57, 58, 59]) LLMs. We set the default max token limit to 2048 with a temperature of 0.5. Our datasets cover domains including
mathematics (MATH [55], GSM-Hard [60], AQUA-RAT [61], AIME-2024), science (SciBench [62], GPQA [63]), knowledge (MMLU-Pro [64]), medicine (MedMCQA [65]), coding (HumanEval [66], MBPP [67]), and AI-assistant (GAIA [68]).

4.1 Current Landscape of MAS Methods

Comparisons of general MAS on diverse domains. We show the current landscape of MAS methods (those for general tasks) by comparing them on diverse domains, including mathematics, science, knowledge, medicine, and coding, in Table 2. From the table, we observe that (1) no method dominates all domains, suggesting that there is ample room for future methods that generalize well across more domains. (2) Using different backend models may result in different landscapes. For example, both AgentVerse [29] and DyLAN [31] achieve better performance than Single using Llama-3.3-70B-Instruct but worse using Qwen-2.5-72B-Instruct. One hypothesis is that Llama-3.3-70B-Instruct has better collaboration capability than Qwen-2.5-72B-Instruct, as we see that the gap between the best-performing MAS and Single reduces from 4.7% to 1.7%. This suggests an interesting future direction of exploring the most suitable LLM for MAS or training more appropriate ones. (3) Generally, MAS-GPT [19] and LLM-Debate [27] are the two methods that achieve the best performance across domains and LLMs, mainly due to their dataset-agnostic designs.

Figure 5: Examining coding-specific methods (MapCoder [39] and EvoMAC [40]). Using GPT-4o-mini, EvoMAC performs best; with Llama-3.3-70B-Instruct, MapCoder leads.

Figure 6: Optimization-based methods (GPTSwarm [32] and AFlow [30]) on the MATH dataset.

Figure 7: Examining MAS as AI assistants on GAIA [68]. (1) Equipping agents with tools (OWL-Roleplaying and MASLab-ReAct) significantly improves MAS performance. (2) The performance gains are more significant using stronger LLMs. (3) Our MASLab-ReAct performs the best.
In addition to performance comparisons, we compare performance-cost trade-offs among methods in Figure 4 and Figure 11. From the figures, we see that methods with better performance generally incur higher token consumption; methods above the fitted line are more cost-effective.

Examining coding-specific methods. We compare two coding-specific methods, MapCoder [39] and EvoMAC [40], on HumanEval and MBPP, using GPT-4o-mini and Llama-3.3-70B-Instruct as backends. Results in Figure 5 show that performance is closely tied to the underlying LLM. Specifically, EvoMAC [40] consistently outperforms others when paired with GPT-4o-mini, whereas MapCoder [39] achieves the best results with Llama-3.3-70B-Instruct, especially on MBPP. This discrepancy may be attributed to backend-specific prompt optimization: e.g., EvoMAC [40] was primarily developed and tuned on GPT-4o-mini in its original work.

Examining optimization-based methods. We compare two optimization-based methods, AFlow [30] and GPTSwarm [32], on the MATH [55] dataset. Following AFlow's [30] original setup, we apply Claude-3.5-Sonnet [2] as the optimizer and GPT-4o-mini [57] as the executor. The evaluation protocol during testing matches that used in the optimization process (AFlow's rule-based evaluation). Figure 6 reports the required cost for optimization and the achieved performance. We see that AFlow [30] incurs the highest optimization cost while also achieving the best performance, whereas GPTSwarm [32] experiences a performance drop after optimization in
this setup. This discrepancy likely stems from AFlow's LLM-based optimization being more effective than GPTSwarm's numerical approach, suggesting that the optimization strategy should be carefully considered to ensure effectiveness.

Figure 8: Examining compute-scaling properties on GPQA-Diamond. MASLab offers a platform for readily examining and choosing methods. Here, we see Self-Consistency and AgentVerse achieve a better cost-performance trade-off.

Figure 9: Examining size-scaling properties on GPQA-Diamond and MMLU-Pro. LLM-Debate performs the best overall. Some methods (e.g., AgentVerse) require the model to attain sufficient capability before MAS can be effective.

Examining MAS as AI assistants. While our earlier experiments primarily focus on standard LLM benchmarks—where improvements from MAS may sometimes appear marginal—this is due to the current lack of benchmarks specifically tailored to MAS. Nevertheless, such evaluations help establish a broad understanding of MAS performance across diverse scenarios. Here, we evaluate MAS on a more suitable benchmark: GAIA [68], which is designed to assess tool-augmented AI assistants. In this experiment, we provide agents with a suite of tools including a code executor, web search engine, document reader, and image/audio/video analysis utilities (see details in Section C.2). We consider two representative MAS methods with iterative planning and action: OWL-Roleplaying [42], and our implemented MASLab-ReAct, inspired by the ReAct paradigm [43]. We run experiments using two recent OpenAI models—GPT-4.1-mini and GPT-4.1 [59]. As shown in Figure 7, our findings are as follows: (1) Equipping agents with tools significantly improves MAS performance, surpassing both single-agent baselines and tool-less MAS methods. (2) The performance gains from tools are more pronounced when stronger LLM backends are used.
For example, MASLab-ReAct achieves a 91% relative improvement over the single-LLM baseline when using GPT-4.1-mini, and an impressive 171% improvement with GPT-4.1. (3) Table 5 presents the performance-cost trade-off. MASLab-ReAct not only achieves the best performance but also consumes less than half the tokens of the second-best method, OWL-Roleplaying. We provide a failure analysis in Figure 10.

4.2 Scaling Properties

As a unified codebase, MASLab offers a platform for researchers and practitioners to readily examine, explore, and choose among methods. For example, one can use MASLab to explore the scaling properties of different methods simply by modifying a few configurations. Scaling compute / inference times. As an example, we compare three configurable methods on the GPQA-Diamond [63] benchmark using Llama-3.3-70B-Instruct [4] as the backend, including Self-Consistency [35], LLM-Debate [27], and AgentVerse [29], to examine which method has the best compute-scaling property. The configurable parameters are: the number of parallel solutions for Self-Consistency [35]; the number of debate agents and debate rounds for LLM-Debate [27]; and the number of recruited agents, loop turns, and criticizing rounds for AgentVerse [29]. We plot the comparisons in Figure 8. We see that (1) Self-Consistency and AgentVerse achieve the best performance-cost trade-off, as their dots mostly lie in the upper left. (2) Scaling compute generally enhances the performance of the examined methods. For AgentVerse, increasing the number of loop turns from 3 to 5 brings the largest performance improvement. For LLM-Debate,
increasing the number of agents is more effective than increasing the number of rounds in this case. Scaling backend model size. We evaluate the impact of scaling backend model size by comparing three representative MAS methods and a single-agent baseline on GPQA-Diamond [63] and MMLU-Pro [64]. To this end, we adopt the Qwen-2.5 instruct series, which provides a family of models with varying sizes—7B, 14B, 32B, and 72B—enabling controlled scaling experiments. As shown in Figure 9, we observe the following: (1) Overall, the performance of all methods improves with increasing model size, suggesting that stronger LLM backends generally benefit both MAS and single-agent approaches. Notably, LLM-Debate achieves particularly strong gains on GPQA-Diamond. (2) On MMLU-Pro, two outliers emerge: AgentVerse with 7B and 14B backends shows significantly degraded performance compared to other methods. Manual inspection reveals that these smaller models often fail to follow instruction formats correctly, causing outputs to deviate from the expected response schema (see Section 4.3 for details). (3) These observations indicate that MAS methods relying on precise formatting, intermediate reasoning steps, or structured inter-agent communication, such as role assignment, voting, or sequential planning, may require a minimum threshold of language competency from the backend model. Below this threshold, the benefits of MAS design may be overshadowed by failures in basic task adherence. This highlights an interesting future direction: designing MAS methods that are more robust to backend model limitations, or adapting interaction protocols to better accommodate smaller, less capable LLMs.

4.3 Failure Analysis

Here, we explore why MAS methods fail by analyzing the error logs.

Table 3: Error analysis of AgentVerse [29] using Qwen-2.5-14B-Instruct as the backend. Aside from wrong answers, all errors are caused by formatting issues.
Dataset    Wrong    Format   Others
GPQA-D     79.66%   22.34%   0.00%
MMLU-Pro   47.11%   52.89%   0.00%
MATH       42.56%   57.44%   0.00%

Format errors. Format errors are a common type of failure in many MAS methods, where LLMs fail to produce responses in the required format. A notable example occurs during the recruiting step in AgentVerse [29], where LLMs are tasked with outputting a predefined number of agents in a specific format. To investigate this, we analyze an outlier case from Figure 9 using Qwen-2.5-14B-Instruct as the model backend. We classify incorrect outputs into three categories: wrong answers (i.e., the MAS produces an incorrect final answer), format errors (i.e., the MAS fails to produce a final answer due to formatting issues), and others. As shown in Table 3, format errors account for a significant portion of failures. Similar issues are observed in other methods such as MAD [28] and DyLAN [31]. These findings underscore a critical challenge in LLM-based MAS: success hinges not only on reasoning or task comprehension but also on the model's ability to meet strict formatting requirements. Improving format adherence or relaxing these constraints could significantly enhance system reliability.

Figure 10: Error analysis of OWL-Roleplaying on GAIA [68] using GPT-4.1 as the model backend.

Error analysis in the tool-augmented scenario. We investigate the performance of OWL-Roleplaying on the GAIA benchmark,
which encompasses the most diverse components during task-solving, making it an ideal case study for comprehensive failure analysis. Our analysis reveals that, in this context, failure cases account for 66% of all samples. However, only 36.7% of these failures stem from incorrect final answers, while 45.0% are attributed to errors in tool usage. These findings suggest that future research should focus not only on enhancing agents' tool-handling capabilities but also on improving the quality of the tools themselves—particularly their stability and efficiency—to create more robust and effective MAS. We believe that advancements in MCP tools within the open-source community could contribute significantly to the development of MAS.

5 Conclusions

This paper introduces MASLab, a unified, comprehensive, and research-friendly codebase for LLM-based multi-agent systems (MAS). (1) MASLab integrates 20+ established methods across multiple domains, each rigorously validated by comparing step-wise outputs with its official implementation. (2) MASLab unifies the whole pipeline from data pre-processing to evaluation, ensuring that all non-algorithmic factors are well aligned for fair comparisons. (3) MASLab implements methods in a shared, streamlined structure, lowering entry barriers and simplifying secondary development. Extensive experiments covering 10+ benchmarks and 8 LLMs comprehensively showcase the current landscape of MAS methods. We also provide several analyses, such as exploring the effects of different evaluation protocols in existing works and the compute- and size-scaling properties of MAS methods. Notably, we demonstrate that discrepancies in evaluation protocols can lead to substantial variation in performance rankings, directly underscoring the importance of such a unified codebase. MASLab will continue to evolve, tracking the latest developments in the field and incorporating advanced benchmarks, and we welcome diverse contributions from the broader open-source community.
References

[1] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Anthropic. Claude 3.5 sonnet. https://www.anthropic.com/news/claude-3-5-sonnet, 2024. Accessed: 2025-01-22.
[3] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[4] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[5] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[7] Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1–22, 2023.
[8] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Charles Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical ai. NEJM AI, 1(3):AIoa2300138, 2024.
[9] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski,
Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance. arXiv preprint arXiv:2303.17564, 2023.
[10] Lexin Zhou, Wout Schellaert, Fernando Martínez-Plumed, Yael Moros-Daval, Cèsar Ferri, and José Hernández-Orallo. Larger and more instructable language models become less reliable. Nature, 634(8032):61–68, 2024.
[11] Yotam Wolf, Noam Wies, Oshri Avnery, Yoav Levine, and Amnon Shashua. Fundamental limitations of alignment in large language models. In Proceedings of the 41st International Conference on Machine Learning, pages 53079–53112, 2024.
[12] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren's song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023.
[13] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, 2023.
[14] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems, 36:70293–70332, 2023.
[15] Muhammad Usman Hadi, Rizwan Qureshi, Abbas Shah, Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, Seyedali Mirjalili, et al. Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects. Authorea Preprints, 2023.
[16] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, et al. Chatdev: Communicative agents for software development.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15174–15186, 2024.
[17] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008, 2023.
[18] Shengran Hu, Cong Lu, and Jeff Clune. Automated design of agentic systems. In The Thirteenth International Conference on Learning Representations, 2025.
[19] Rui Ye, Shuo Tang, Rui Ge, Yaxin Du, Zhenfei Yin, Jing Shao, and Siheng Chen. MAS-GPT: Training LLMs to build LLM-based multi-agent systems. In Workshop on Reasoning and Planning for Large Language Models, 2025.
[20] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, et al. Metagpt: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2024.
[21] Bin Lei, Yi Zhang, Shan Zuo, Ali Payani, and Caiwen Ding. Macm: Utilizing a multi-agent system for condition mining in solving complex mathematical problems. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[22] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5:
Industry Track), pages 37–42, 2023.
[23] Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024.
[24] Samuel Schmidgall, Yusheng Su, Ze Wang, Ximeng Sun, Jialian Wu, Xiaodong Yu, Jiang Liu, Zicheng Liu, and Emad Barsoum. Agent laboratory: Using llm agents as research assistants. arXiv preprint arXiv:2501.04227, 2025.
[25] Xianghe Pang, Shuo Tang, Rui Ye, Yuxin Xiong, Bolun Zhang, Yanfeng Wang, and Siheng Chen. Self-alignment of large language models via monopolylogue-based social scene simulation. In International Conference on Machine Learning, pages 39416–39447. PMLR, 2024.
[26] Shuo Tang, Xianghe Pang, Zexi Liu, Bohan Tang, Rui Ye, Tian Jin, Xiaowen Dong, Yanfeng Wang, and Siheng Chen. Synthesizing post-training data for llms through multi-agent simulation. arXiv preprint arXiv:2410.14251, 2024.
[27] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. In Forty-first International Conference on Machine Learning, 2024.
[28] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. Encouraging divergent thinking in large language models through multi-agent debate. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17889–17904, Miami, Florida, USA, November 2024. Association for Computational Linguistics.
[29] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations, 2024.
[30] Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xiong-Hui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, Bingnan Zheng, Bang Liu, Yuyu Luo, and Chenglin Wu. AFlow: Automating agentic workflow generation. In The Thirteenth International Conference on Learning Representations, 2025.
[31] Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. A dynamic llm-powered agent network for task-oriented agent collaboration. In First Conference on Language Modeling, 2024.
[32] Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, and Jürgen Schmidhuber. Gptswarm: Language agents as optimizable graphs. In Forty-first International Conference on Machine Learning, 2024.
[33] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[34] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, et al. Autogen: Enabling next-gen llm applications via multi-agent conversation. In ICLR 2024 Workshop on Large Language Model (LLM) Agents, 2024.
[35] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023.
[36] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri
Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36, 2024.
[37] Chen Qian, Zihao Xie, YiFei Wang, Wei Liu, Kunlun Zhu, Hanchen Xia, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Scaling large language model-based multi-agent collaboration. In The Thirteenth International Conference on Learning Representations, 2025.
[38] Shalev Lifshitz, Sheila A. McIlraith, and Yilun Du. Multi-agent verification: Scaling test-time compute with goal verifiers. In Workshop on Reasoning and Planning for Large Language Models, 2025.
[39] Md Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. Mapcoder: Multi-agent code generation for competitive problem solving. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4912–4944, 2024.
[40] Yue Hu, Yuzhu Cai, Yaxin Du, Xinyu Zhu, Xiangrui Liu, Zijie Yu, Yuchen Hou, Shuo Tang, and Siheng Chen. Self-evolving multi-agent networks for software development. In The Thirteenth International Conference on Learning Representations, 2025.
[41] Xiangru Tang, Anni Zou, Zhuosheng Zhang, Ziming Li, Yilun Zhao, Xingyao Zhang, Arman Cohan, and Mark Gerstein. Medagents: Large language models as collaborators for zero-shot medical reasoning. In Findings of the Association for Computational Linguistics ACL 2024, pages 599–621, 2024.
[42] Mengkang Hu, Yuhang Zhou, Wendong Fan, Yuzhou Nie, Bowei Xia, Tao Sun, Ziyu Ye, Zhaoxuan Jin, Yingru Li, Zeyu Zhang, Yifeng Wang, Qianshuo Ye, Ping Luo, and Guohao Li. Owl: Optimized workforce learning for general multi-agent assistance in real-world task automation, 2025.
[43] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models.
In The Eleventh International Conference on Learning Representations, 2023.
[44] Vighnesh Subramaniam, Yilun Du, Joshua B. Tenenbaum, Antonio Torralba, Shuang Li, and Igor Mordatch. Multiagent finetuning: Self improvement with diverse reasoning chains. In The Thirteenth International Conference on Learning Representations, 2025.
[45] Guibin Zhang, Yanwei Yue, Xiangguo Sun, Guancheng Wan, Miao Yu, Junfeng Fang, Kun Wang, Tianlong Chen, and Dawei Cheng. G-designer: Architecting multi-agent communication topologies via graph neural networks. arXiv preprint arXiv:2410.11782, 2024.
[46] Guibin Zhang, Yanwei Yue, Zhixun Li, Sukwon Yun, Guancheng Wan, Kun Wang, Dawei Cheng, Jeffrey Xu Yu, and Tianlong Chen. Cut the crap: An economical communication pipeline for LLM-based multi-agent systems. In The Thirteenth International Conference on Learning Representations, 2025.
[47] Yuanshuo Zhang, Yuchen Hou, Bohan Tang, Shuo Chen, Muhan Zhang, Xiaowen Dong, and Siheng Chen. Gnns as predictors of agentic workflow performances. arXiv preprint arXiv:2503.11301, 2025.
[48] LangChain. Langchain. https://www.langchain.com/langchain, 2025. Accessed: 2025-05-09.
[49] LangGraph. Langgraph. https://www.langchain.com/langgraph, 2025. Accessed: 2025-05-09.
[50] Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Zeyu Liu, Yiheng Xu, Hongjin SU, Dongchan Shin, Caiming Xiong, and Tao Yu. Openagents: An open platform for language agents in the wild. In First Conference on Language Modeling, 2024.
[51] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents
with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652, 2023.
[52] Bytedance. Sandbox fusion: Versatile code sandbox for llms. https://bytedance.github.io/SandboxFusion/, 2025. Accessed: 2025-05-09.
[53] Anthropic. Introducing the model context protocol. https://www.anthropic.com/news/model-context-protocol, 2025. Accessed: 2025-05-09.
[54] Ding Chen, Qingchen Yu, Pengyuan Wang, Wentao Zhang, Bo Tang, Feiyu Xiong, Xinchi Li, Minchuan Yang, and Zhiyu Li. xverify: Efficient answer verifier for reasoning model evaluations. arXiv preprint arXiv:2504.10481, 2025.
[55] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
[56] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[57] OpenAI. Gpt-4o mini: advancing cost-efficient intelligence. https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/, 2024. Accessed: 2025-01-23.
[58] OpenAI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024. Accessed: 2025-01-23.
[59] OpenAI. Introducing gpt-4.1 in the api. https://openai.com/index/gpt-4-1/, 2025. Accessed: 2025-05-09.
[60] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR, 2023.
[61] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, 2017.
[62] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. In Forty-first International Conference on Machine Learning, 2024.
[63] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023.
[64] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. MMLU-pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[65] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248–260. PMLR, 2022.
[66] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374
, 2021.
[67] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[68] Grégoire Mialon, Clémentine Fourrier, Thomas Wolf, Yann LeCun, and Thomas Scialom. GAIA: a benchmark for general AI assistants. In The Twelfth International Conference on Learning Representations, 2024.
[69] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.

Protocol   LLM-2step   LLM-xVerify   Rule-HF   Rule-DyLAN   Rule-Hendry.
Accuracy   98.59       98.35         41.65     65.65        27.29

Table 4: Accuracy comparison of 5 different evaluation protocols against human manual checks, measured on the MATH dataset. The two LLM-based evaluation protocols achieve significantly higher agreement with human evaluation. LLM-2step is based on two inference passes of Llama-3.3-70B-Instruct, while LLM-xVerify is based on a single inference pass of a 9B-sized LLM. Overall, LLM-xVerify achieves the best effectiveness-efficiency trade-off.
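The gap between rule-based and LLM-based protocols in Table 4 largely comes down to answer extraction and normalization. The sketch below is illustrative only (the actual Rule-HF, Rule-DyLAN, and Hendrycks matchers differ); it shows why a rule-based checker needs normalization before string matching becomes usable:

```python
import re
from fractions import Fraction

def extract_answer(response: str) -> str:
    """Pull the final answer out of a model response: prefer \\boxed{...},
    otherwise fall back to the last number-like token."""
    m = re.search(r"\\boxed\{([^{}]*)\}", response)
    if m:
        return m.group(1).strip()
    tokens = re.findall(r"-?\d+(?:/\d+)?(?:\.\d+)?", response)
    return tokens[-1] if tokens else ""

def rule_match(pred: str, gold: str) -> bool:
    """Naive rule-based check: exact match after stripping, with a
    numeric fallback so equivalent forms like '1/2' and '0.5' agree."""
    if pred.strip() == gold.strip():
        return True
    try:
        return Fraction(pred) == Fraction(gold)
    except (ValueError, ZeroDivisionError):
        return False

# A phrased answer that plain string equality would reject:
resp = "Thus the probability is \\boxed{1/2}."
print(rule_match(extract_answer(resp), "0.5"))  # → True
```

Rules like these inevitably miss many surface forms (units, LaTeX variants, set notation), which is one plausible reason the rule-based protocols in Table 4 disagree so often with human judgment.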
Method            Level 1         Level 2         Level 3         All
                  Acc    Cost     Acc    Cost     Acc    Cost     Acc    Cost
GPT-4.1-mini
Single            16.98  663      16.28  353      0.0    1529     13.94  638
SC                22.64  4504     15.12  2412     0.0    8484     15.15  4041
Debate            24.53  4388     16.28  4870     7.69   12972    17.58  5992
AgentVerse        32.08  7174     15.12  7368     7.69   15753    19.39  8627
OWL-Roleplaying   35.85  51543    25.58  58881    11.54  107635   26.67  64206
ReAct-MASLab      33.96  19866    26.74  41768    11.54  55743    26.67  36935
GPT-4.1
Single            24.53  394      16.28  470      3.85   1378     16.97  589
SC                20.75  3037     16.28  3362     11.54  11786    16.97  4585
Debate            32.08  4103     24.42  4339     11.54  11564    24.85  5402
AgentVerse        28.30  6876     18.60  5995     3.85   11034    19.39  7072
OWL-Roleplaying   43.40  48073    30.23  101827   26.92  101986   33.94  84586
ReAct-MASLab      56.60  18278    47.67  35636    19.23  43525    46.06  31303

Table 5: Comparisons of performance and cost on GAIA. Performance is measured by accuracy, while cost is measured by the number of text tokens consumed per query.

A Limitations

First, although MASLab is the most comprehensive codebase for LLM-based MAS, some methods have not yet been incorporated. Second, although most of the benchmarks in this paper are commonly used in the MAS literature, they are not specifically designed for the field of MAS; however, this is not a limitation unique to this paper. We will continue working on this codebase to support more methods and benchmarks, and we also plan to design new benchmarks specifically for MAS in the future.

B Broader Impacts

This paper introduces a unified, comprehensive, and research-friendly codebase for the community of LLM-based MAS. This resource alleviates the burden of reproduction for researchers, enabling them to allocate more effort to innovative algorithm design. It fosters fair comparisons across studies, lowers the entry barrier for newcomers, and facilitates secondary development, thereby accelerating progress in the field.
While potential negative impacts of our approach mirror those associated with large language models—such as ethical concerns and risks of misuse—these issues are intrinsic to LLM usage in general and do not necessitate further elaboration
here.

Run        Optimization                              Inference
           Optimizer Cost   Executor Cost   Val Acc  Test Acc   Cost
MASLab     0.58251$         19.05964$       54.52    65.20      1.489$
Official   -                19.52409$       53.27    65.06      2.231$

Table 6: Comparison of our implementation of AFlow [30] with the official one. The optimizer is Claude-3.5-Sonnet and the executor is GPT-4o-mini. The official code does not record the optimizer cost. This table verifies the effectiveness of our re-implementation.

Figure 11: Examination of the performance-cost trade-offs of nine MAS methods across 10 benchmarks.

C Implementation Details

C.1 Computational Resources

For open-source LLMs, we leverage the vLLM [69] library to launch the LLM service. For 32B-, 70B-, and 72B-sized LLMs, we use 4 NVIDIA A100 GPUs; for 14B-sized LLMs, we use 2 NVIDIA A100 GPUs; for 7B-sized LLMs, we use 1 NVIDIA A100 GPU.

C.2 GAIA

GAIA is a challenging benchmark for general AI assistants. In our experiments, we utilize the validation set of GAIA, which contains a total of 165 samples categorized into three levels of difficulty. It requires the MAS to engage in multi-turn collaboration to solve the tasks. Both the OWL-Roleplaying and ReAct-MASLab methods are constrained to a maximum of 12 turns per task.

Toolkits. All methods share a common set of toolkits, including a web interaction tool, a document processing tool, a video analysis tool, an audio analysis tool, a code execution tool, an image analysis tool, a search tool, and an Excel tool. Several of these tools incorporate multimodal large language models. Except for the audio analysis tool, all such tools utilize the same model version as the one configured in the main experimental pipeline. The web interaction tool employs the Playwright library to simulate browser behavior. However, we observe occasional instability during experimentation.
To reduce both runtime and token consumption, we impose strict operational constraints: a 30,000 ms timeout for website navigation, a 20,000 ms timeout for page loading, and a hard cap of 10 web interaction turns per task. Tasks exceeding this limit are forcibly terminated. The document processing tool supports parsing a wide range of document formats. For web content extraction and parsing specifically, we employ an external tool called Firecrawl. The video analysis tool extracts 28 evenly spaced frames from each video and uses OpenAI's Whisper-1 model to transcribe the audio into text. These frames, along with the transcribed text, are jointly input into a vision-language model for multimodal analysis. The audio analysis tool processes audio files by encoding them in Base64 format and feeding them into the GPT-4o-mini-audio-preview model for analysis. The code execution tool operates by spawning a subprocess that simulates the writing and execution of Python code in a sandboxed environment. The search tool integrates multiple retrieval backends such as Google, DuckDuckGo, Wikipedia, and Archive.org, allowing agents to gather information from diverse sources.

Memory. We simplify the process of memory storage and retrieval for the model. To strike a balance between performance and token efficiency during memory retrieval, we impose a maximum limit of 51,200 tokens on the retrieved content. Similarly, we cap the maximum token length for model output at 12,800 tokens.
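The code execution tool's subprocess-based approach can be sketched as follows. The helper name, the temporary working directory, and the default timeout are illustrative assumptions, not MASLab's exact implementation:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def execute_code(code: str, timeout_s: float = 30.0) -> dict:
    """Run a snippet of Python in a child process and capture its output.
    A separate process plus a throwaway working directory gives a crude
    sandbox; a hard timeout keeps runaway snippets from stalling the MAS."""
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "snippet.py"
        script.write_text(code)
        try:
            proc = subprocess.run(
                [sys.executable, str(script)],
                cwd=workdir,
                capture_output=True,
                text=True,
                timeout=timeout_s,
            )
            return {"stdout": proc.stdout, "stderr": proc.stderr,
                    "returncode": proc.returncode, "timed_out": False}
        except subprocess.TimeoutExpired:
            return {"stdout": "", "stderr": "execution timed out",
                    "returncode": -1, "timed_out": True}

result = execute_code("print(2 + 2)")
print(result["stdout"].strip())  # → 4
```

Note that a subprocess is isolation from the parent interpreter, not a security boundary; a production tool would add resource limits or container-level sandboxing (e.g., in the spirit of SandboxFusion [52]).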
Failure analysis. Throughout the experiments, we log MAS outputs and failure cases. After the experiments, we select results from the OWL-Roleplaying method running on the GPT-4.1 model and perform a detailed categorization and statistical analysis of the errors encountered.

D Re-Implementation Notes

D.1 MAS for General Tasks

AutoGen [34]. Based on the examples proposed in the AutoGen paper [34] and the guidelines provided in its official documentation (https://microsoft.github.io/autogen/0.2/), we have developed a foundational workflow that embodies its conversational characteristics, tailored to solving basic text-level problems with code execution and memory retention.

AgentVerse [29]. AgentVerse provides several dataset-specific versions, including those for MGSM and HumanEval. We have replicated workflows corresponding to datasets such as HumanEval and MGSM, aligning with those presented in the original AgentVerse repository (https://github.com/OpenBMB/AgentVerse) and its paper. Additionally, we develop a general workflow capable of solving common problems.

LLM-Debate [27]. We notice that the official code in https://github.com/composable-models/llm_multiagent_debate is not readily executable and relies on a string operation for extracting answers from responses, which frequently causes errors. Therefore, we slightly modify the code to fix the bugs and rely on an LLM to aggregate final answers. This significantly enhances the performance of LLM-Debate, as it no longer encounters errors during execution.

GPTSwarm [32]. The official code of GPTSwarm (https://github.com/metauto-ai/GPTSwarm/tree/main/experiments) contains versions for MMLU, HumanEval, GAIA, and Crosswords. We implement the HumanEval and MMLU versions and, based on the logic of the MMLU version, develop a version for general problem-solving.
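The LLM-Debate modification described above, replacing fragile string extraction with an LLM call for aggregating final answers, can be sketched as follows. `call_llm` is a hypothetical stand-in for the actual model backend, and the prompt wording is illustrative:

```python
def run_debate(question, call_llm, n_agents=3, n_rounds=2):
    """Minimal LLM-Debate loop: each agent answers, then revises its
    answer after seeing the other agents' latest responses. A final
    LLM call aggregates, instead of parsing answers with regexes."""
    answers = [call_llm(f"Answer the question: {question}")
               for _ in range(n_agents)]
    for _ in range(n_rounds):
        answers = [
            call_llm(
                f"Question: {question}\n"
                f"Other agents said: {[a for j, a in enumerate(answers) if j != i]}\n"
                "Give your updated answer."
            )
            for i in range(n_agents)
        ]
    # Aggregation by the LLM itself avoids fragile string operations.
    return call_llm(
        f"Question: {question}\nCandidate answers: {answers}\n"
        "Return the single best final answer."
    )

# Demo with a stub backend (no real LLM calls):
stub = lambda p: "final: 42" if "best final answer" in p else "some reasoning"
print(run_debate("What is 6 * 7?", stub))  # → final: 42
```

Delegating aggregation to the model trades a little extra token cost for robustness to arbitrarily formatted agent responses.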
DyLAN [31]. The official code in https://github.com/SALT-NLP/DyLAN uses a custom answer extraction function to return final mathematical results. To ensure fair comparison across evaluation protocols, we modify the return logic of the original code while preserving the task-specific initialization parameters as defined in the original implementation.

Self-Refine [36]. The official implementation in https://github.com/madaan/self-refine provides dataset-specific prompt examples. Following its logic for solving mathematical problems, we develop code for general problem-solving. Additionally, since the original code's extraction logic for mathematical problems is not robust and often results in syntactically incorrect code, we redesign the extraction function to more effectively extract executable code from raw LLM responses.

MacNet [37]. We simplify the structure of waiting.py in https://github.com/OpenBMB/ChatDev/tree/macnet when reproducing MacNet, but keep its functionality consistent, mainly in terms of high maintainability and memory safety. In addition, we develop a version for general cases according to their implementation for SRDD.

Reflexion [51]. For the method in https://github.com/noahshinn/reflexion, we implement the HumanEval and MBPP modes for programming tasks. Additionally, based on the logic of the programming tasks, we develop a version for general problem-solving.

ADAS [18]. We notice that the official code in https://github.com/ShengranHu/ADAS does not support flexible selection of execution models, which makes it difficult to evaluate the effect of the MAS module and to develop a heterogeneous MAS version. Therefore, we slightly modify the code to fix existing bugs and to allow users to specify both the meta LLM and the execution LLM during optimization, as well as to choose the execution model during inference.
We also set the temperature to zero and ensure that, when using GPT-3.5 as the execution model (as in the original repo), the output remains exactly the same. These improvements significantly enhance the compatibility and extensibility of ADAS.

AFlow [30]. The official code in https://github.com/FoundationAgents/MetaGPT/tree/main/examples/aflow and https://github.com/FoundationAgents/AFlow is complex and in places buggy; we simplify the format and make sure that the core parts are fully aligned and bug-free. In addition, we use AsyncOpenAI when reproducing AFlow to speed up the optimization.

MAV [38]. We reproduce the MATH and MMLU versions of MAV and develop a general version based on the MATH version.

D.2 MAS for Coding Tasks

MetaGPT [20]. MetaGPT is an intricate system that presents a considerable challenge to analysis. Our research reveals that the effect of its communication infrastructure on the overall system is negligible, and its practical impact is confined to modest-scale projects. To facilitate comprehension, we streamline it into a linear framework, aligning it with the structure of the original paper. In fact, we find that the existing structure cannot be applied to datasets such as HumanEval and MBPP.

ChatDev [16]. ChatDev primarily focuses on the domain of software development. By leveraging natural language processing techniques, ChatDev enables seamless automation of the entire software development lifecycle, including the generation of GUIs (graphical user interfaces). The complexity of the resulting software is closely tied to the specificity of user-defined requirements. Based on the official ChatDev paper [16] and its official repository (https://github.com/OpenBMB/ChatDev), we adapted a ChatDev workflow within the MAS-Lab framework tailored to SRDD (Software Requirement Description Dataset), aligning with the design principles and capabilities demonstrated in the original ChatDev system.
MapCoder [39]. Our implementation follows the official codebase from https://github.com/Md-Ashraful-Pramanik/MapCoder, preserving its core methodology. However, we note that the original implementation uses a pre-processed version of the HumanEval dataset, which includes example test cases. To ensure a fair comparison across different methods, we do not use this pre-processed version. Instead, we augment the framework with a function that dynamically extracts test cases from the original HumanEval prompts. This modification does not affect MapCoder's core logic but ensures all baselines are evaluated under identical conditions.

EvoMAC [40]. We collaborate with the authors of EvoMAC, who provide their official implementation to be integrated into our framework. The method remains unchanged. Together with the authors, we release this joint implementation as part of our open-source framework, maintaining full transparency and reproducibility.

D.3 MAS for Mathematical Tasks

MACM [21]. MACM is an MAS method specialized in solving mathematical problems, using a code-interpreter tool to assist problem solving. Since the official code is specifically designed around OpenAI's Assistants interface, we follow the same LLM usage for this particular case. In the future, we plan to extend it to support OpenAI's chat mode.

D.4 MAS for Scientific Tasks

MedAgents [41]. The official code in https://github.com/gersteinlab/MedAgents supports multiple working modes. We fully reproduce all modes and set the default mode to match the original repository's default configuration,
keeping all other external parameters consistent with the original defaults.

D.5 MAS for Tool-Required Tasks

OWL-Roleplaying [42]. OWL (https://github.com/camel-ai/owl) is a framework for multi-agent collaboration. It includes OWL-Roleplaying, a MAS method specifically designed for the GAIA benchmark [68]. This framework can incur massive token consumption for each task/query. Considering the research-friendly nature of our MASLab framework, several trade-offs and simplifications are made when adapting this method to MASLab, with a focus on enhancing code readability and reducing computational costs. Overall, the main process of OWL is maintained during the adaptation, while we limit the maximum number of retries for cost reasons. For example, we set stricter limitations on the use of the web tool to mitigate the substantial token costs associated with frequent web interactions.

ReAct-MASLab [43]. Building upon the toolkits from OWL, we propose ReAct-MASLab, a method inspired by ReAct [43]. This method achieves better performance at lower cost compared to OWL-Roleplaying.
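As a rough illustration of the single-agent loop that ReAct-style methods follow, here is our own minimal sketch; `think_and_act` stands in for the LLM call, and the tool names and dict fields are invented, not MASLab's API:

```python
# Minimal ReAct-style loop: alternate reasoning ("thought") and tool calls
# ("action") until the model emits a final answer or the turn budget runs out.
# Everything here is illustrative; MASLab's actual implementation differs.
MAX_TURNS = 10

def run_react(task, think_and_act, tools):
    history = [f"Task: {task}"]
    for _ in range(MAX_TURNS):
        step = think_and_act("\n".join(history))  # returns a dict per step
        history.append(f"Thought: {step['thought']}")
        if step["action"] == "finish":
            return step["input"]                  # final answer
        observation = tools[step["action"]](step["input"])
        history.append(f"Observation: {observation}")
    return None  # budget exhausted

# Toy usage with a scripted "model" and a fake search tool:
script = iter([
    {"thought": "I should search.", "action": "search", "input": "capital of France"},
    {"thought": "I know the answer.", "action": "finish", "input": "Paris"},
])
answer = run_react("What is the capital of France?",
                   lambda _history: next(script),
                   {"search": lambda q: "France's capital is Paris."})
```

The turn cap plays the same role as the web-interaction limit described earlier: it bounds both runtime and token cost.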
R2EC: TOWARDS LARGE RECOMMENDER MODELS WITH REASONING

Runyang You1, Yongqi Li1∗, Xinyu Lin2, Xin Zhang1, Wenjie Wang3, Wenjie Li1, Liqiang Nie4

1The Hong Kong Polytechnic University, 2National University of Singapore, 3University of Science and Technology of China, 4Harbin Institute of Technology (Shenzhen)

runyang.y@outlook.com, liyongqi0@gmail.com, xylin1028@gmail.com, zhangxin2023@stu.hit.edu.cn, wenjiewang96@gmail.com, cswjli@comp.polyu.edu.hk, nieliqiang@gmail.com

ABSTRACT

Large recommender models have extended LLMs as powerful recommenders via encoding or item generation, and recent breakthroughs in LLM reasoning synchronously motivate the exploration of reasoning in recommendation. Current studies usually position LLMs as external reasoning modules that yield auxiliary thought for augmenting conventional recommendation pipelines. However, such decoupled designs suffer from significant resource cost and suboptimal joint optimization. To address these issues, we propose R2ec, a unified large recommender model with intrinsic reasoning capabilities. Initially, we reconceptualize the model architecture to facilitate interleaved reasoning and recommendation in the autoregressive process. Subsequently, we propose RecPO, a corresponding reinforcement learning framework that optimizes both the reasoning and recommendation capabilities of R2ec simultaneously in a single policy update; RecPO introduces a fused reward scheme that solely leverages recommendation labels to stimulate the reasoning capability, eliminating dependency on specialized reasoning annotations. Experiments on three datasets with various baselines verify the effectiveness of R2ec, showing relative improvements of 68.67% in Hit@5 and 45.21% in NDCG@20. Code available at https://github.com/YRYangang/RRec.

1 Introduction

Large language models (LLMs) have demonstrated remarkable capabilities in contextual understanding and open-ended generation [1, 2, 3, 4].
This success has catalyzed the development of large recommender models that inherit and specialize the advantages of LLMs for recommendation. Current approaches can be divided into two main streams: one employs LLMs as powerful encoders to embed users (their historical interactions) [5, 6, 7], while the other reformulates item prediction as the autoregressive generation of item identifiers [4, 8]. These large recommender models exhibit remarkable generalization capabilities, achieving advanced performance across diverse application scenarios, including cold-start recommendation [9], cross-domain personalization [10], and long-tail item prediction [11, 12].

The frontier of LLM capabilities now extends beyond model-size scaling to test-time scaling, i.e., reasoning. Emerging advances like DeepSeek-R1 [13] demonstrate that such extra computation during inference can further improve LLM capabilities in areas such as mathematics, coding, and scientific problem-solving [14]. Given that large recommender models are instantiated from pretrained LLMs, a natural question is: how can large recommender models benefit from reasoning to further enhance recommendation performance?

Existing studies have preliminarily explored LLM reasoning for recommendation, including user preference analysis [15, 16], synthetic item profile enrichment [17, 18], user search query rewriting [19, 20], and rating prediction [21]. These approaches typically take LLMs as additional reasoning modules to augment the original recommendation pipeline, decoupling reasoning from item prediction. However, such designs introduce crucial limitations: 1) Significant resource cost. Two distinct modules, namely the large reasoning model and the recommendation model, must be trained, checkpointed, and served in parallel, inflating both memory footprint and inference latency. 2) Suboptimal joint optimization. The reasoning and recommendation modules can only be trained by freezing one while updating the other: first updating the reasoning module, then switching to the recommendation module, and continuing to alternate in this manner [15]. This alternating optimization scheme prevents gradient flow across the pipeline, precluding true end-to-end learning, hindering fine-grained alignment between reasoning and ranking objectives, and ultimately leading to suboptimal convergence.

In this work, we aim to develop a unified large recommender model with intrinsic reasoning capabilities, exploring new technical solutions for reasoning-enhanced recommendation systems. Developing such a unified model is non-trivial due to the following aspects: 1) Model design. Recommendation is traditionally framed as a discriminative ranking task, whereas reasoning is a generative process. How to incorporate both generative reasoning and discriminative recommendation into a single model architecture is worth careful consideration. 2) Training optimization.

∗Yongqi Li is the corresponding author.
arXiv:2505.16994v1 [cs.IR] 22 May 2025

Figure 1: The architecture overview of R2ec, which facilitates interleaved reasoning and recommendation in an autoregressive process with two task-specific heads: 1) a language-modeling head (lm_head) for reasoning generation; and 2) a recommendation head (rec_head) for item prediction.
It is also challenging to stimulate and optimize the reasoning capabilities of large recommender models. On the one hand, recommendation domains lack annotated reasoning traces, as personalized recommendation rationales are hardly obtainable, making supervised learning impractical. On the other hand, reinforcement learning (RL) [20, 22, 23] is a natural alternative but encounters unique problems in reward and training-objective design within the recommendation task. Since reasoning and recommendation capabilities interact within a single model, this necessitates a training framework that effectively bridges these two aspects.

To address the above issues, we propose a unified large Recommender model with intrinsic Reasoning, dubbed R2ec. 1) In the model design, as illustrated in Figure 1, we introduce a recommendation head to the LLM backbone, alongside the original language-modeling head. This setup allows the model to first generate reasoning tokens autoregressively and then predict the item in the last step. This architecture integrates both generation and recommendation, while the next crucial step is to stimulate the reasoning capabilities tailored for accurate item prediction. 2) In training optimization, we propose an RL-based training framework that requires no human reasoning annotations. First, we sample trajectories from the model, which essentially consist of reasoning-then-item sequences, similar to the reasoning-then-answer sequences in LLMs. Second, unlike DeepSeek-R1, which defines rewards based on answer accuracy, we found it insufficient to set the reward
solely using item recommendation metrics. Therefore, we propose a fused reward scheme that combines discrete ranking rewards with continuous similarity rewards. With the sampled trajectories and assigned rewards, a joint reasoning-and-recommendation training objective is introduced to optimize the model so that reasoning improves recommendation.

We conducted extensive experiments comparing R2ec with various traditional, LLM-based, and reasoning-augmented recommendation baselines. The experimental results on three datasets illustrate that R2ec significantly outperforms all baselines, with relative improvements of 68.67% in Hit@5 and 45.21% in NDCG@20, verifying the effectiveness of the unified reasoning-recommendation model design. We also performed an ablation study and five in-depth analysis experiments, demonstrating that R2ec possesses strong generalization capabilities under different optimization algorithms. More surprisingly, we observed a phenomenon similar to that in LLM reasoning: the reasoning length increases as training progresses.

The key contributions of this work are summarized as follows:

• We establish R2ec, the first unified large recommender model that incorporates reasoning and recommendation. By introducing a recommendation head alongside the original language-modeling head, R2ec interleaves reasoning generation and item prediction in a single autoregressive generation process.

• We propose RecPO, an RL framework that optimizes both reasoning and recommendation simultaneously in a single policy update, requiring only common recommendation labels thanks to the introduced fused reward scheme.

• Extensive experiments show that R2ec achieves significant improvements (68.67% in Hit@5 and 45.21% in NDCG@20) on three datasets. Notably, a phenomenon similar to that in LLM reasoning occurs, where the reasoning length increases as training progresses.
2 Related Work

2.1 Reinforcement Learning for LLM Reasoning

OpenAI's GPT-o1 demonstrated that scaling reinforcement learning (RL) training with large compute budgets enables models to internalize reasoning as a differentiable policy, achieving state-of-the-art performance on emergent meta-reasoning tasks [24]. Early RLHF methods trained a reward model on human preference data and fine-tuned the policy using Proximal Policy Optimization (PPO) [23]; however, PPO's multiple optimization passes introduce implementation complexity. To streamline RL tuning, Direct Preference Optimization (DPO) [25] performs single-step policy updates, trading simplicity for potential off-policy bias. Alternative estimators, such as Group Relative Policy Optimization (GRPO) [22], derive baselines from group-level rewards, eliminating the need for a separate critic, while REINFORCE leave-one-out (RLOO) [26] computes advantages by excluding each trajectory in turn. Building on these advancements, DeepSeek-Zero [27] removes reliance on supervised fine-tuning by driving reasoning emergence purely through intrinsic RL rewards, and DeepSeek-R1 further integrates rule-driven reward engineering with self-play verification to enhance reasoning robustness.

2.2 Large Language Model-based Recommendation

Recent advances in LLM-based recommendation typically follow two paradigms: 1) LLM-enhanced, where LLMs enrich recommendation pipelines with additional features, and 2) LLM-centric, where recommendation is framed as a generative task via item identifiers [1, 2, 3, 4, 28, 29], or LLMs serve as encoders to embed users and items [6, 30]. We term models under the second paradigm large recommender models. In the first paradigm, LLM-generated embeddings are utilized to enhance conventional recommenders. The adoption of LLM-generated text is not new; typical examples include user intent [15, 31], search queries [20], item summaries [18], and rationales [32].
Among them, SLIM [32], LangPTune [15], and Rec-R1 [20] further optimize LLMs for finer rationale generation with RL. However, such designs incur high resource demands for training and serving both models, and the connection between the two modules remains gradient-less, blocking true end-to-end learning and resulting in suboptimal convergence and performance. Large recommender models typically recommend through autoregressive generation of item identifiers [4, 28] or titles [1, 2, 3, 29], or by embedding user interaction sequences [6, 30]. Large recommender models have demonstrated remarkable potential, but current methods fail to utilize the reasoning ability of LLMs. To the best of our knowledge, no existing large recommender model improves recommendation through explicit reasoning generation, which is the gap that motivated our work.

3 Preliminaries

3.1 LLM-based Recommendation

LLM-based recommendation is an emerging paradigm, and the most applicable approach is to leverage LLMs as encoders to embed users and items, which is widely adopted in industry [30, 28, 33]. Typically, the recommendation process within this paradigm involves the following structured steps:

User and Item Formulation. Structured interaction histories and item metadata are first formulated into natural-language prompts suitable for LLMs. We first collect a user's historically interacted items with the corresponding ratings. We then construct a textual prompt that includes both the instruction and a natural-language description of these past behaviors, e.g., "User has purchased and rated the following products sequentially: 1. 'Avatar Blu-ray' (4/5); 2. 'Wireless Headphones' (5/5); . . . Recommend the next item." Finally, we tokenize this prompt into a sequence of tokens, denoted as $x_u$, which serves as the LLM input. Likewise, each candidate item $v$ can be described using its metadata and tokenized into a sequence of tokens $x_v$.
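The prompt-construction step might look like the following. This is our own illustrative sketch of the formulation described above, not the paper's code, and the field layout is invented:

```python
# Illustrative sketch: turn a user's rated interaction history into the kind of
# natural-language prompt described above. Names and structure are invented.
def build_user_prompt(history):
    """history: list of (item_title, rating, max_rating) tuples, oldest first."""
    lines = [
        f"{i}. '{title}' ({rating}/{max_rating})"
        for i, (title, rating, max_rating) in enumerate(history, start=1)
    ]
    return (
        "User has purchased and rated the following products sequentially: "
        + "; ".join(lines)
        + ". Recommend the next item."
    )

prompt = build_user_prompt([
    ("Avatar Blu-ray", 4, 5),
    ("Wireless Headphones", 5, 5),
])
# prompt == "User has purchased and rated the following products sequentially: "
#           "1. 'Avatar Blu-ray' (4/5); 2. 'Wireless Headphones' (5/5). "
#           "Recommend the next item."
```

The resulting string is what gets tokenized into $x_u$; an analogous template over item metadata would produce $x_v$.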
The tokenized textual prompts $x_u$ and $x_v$ are then input into the LLM transformer backbone; by extracting the final hidden state of the input sequence, we obtain the corresponding user embedding $h_u$ and item embedding $h_v$.

Training via Contrastive Learning. To optimize the LLM for recommendation tasks, an in-batch contrastive loss is typically adopted:
$$\mathcal{L}_{\mathrm{CL}} = -\log \frac{\exp\left(h_u^\top h_{v^+}/\tau\right)}{\sum_{v' \in \mathcal{B}} \exp\left(h_u^\top h_{v'}/\tau\right)}, \quad (1)$$
where $\tau$ is the temperature hyperparameter controlling similarity dispersion, $\mathcal{B}$ is the ground-truth item set in one batch, and $v^+$ is the ground-truth target item.

Inference. During inference, the recommendation score $s(u, v)$ for each user-item pair is computed via the inner product between their embeddings: $s(u, v) = h_u^\top h_v$. Finally, we rank all items according to the scores and obtain the top-$K$ item recommendation list.

3.2 Reinforcement Learning for Large Reasoning Models

Terminologies and Notations. RL for LLMs treats text generation as a sequential action process, where each action outputs the next token. Formally, at each time step $t$, we denote the current state as $(x, o_{<t})$, where $x$ is the initial prompt and $o_{<t}$ denotes all previously generated tokens. The conditional probability distribution of an action is then denoted as $\pi_\theta(o_t \mid x, o_{<t})$, where $\pi_\theta$ is the policy parameterized by $\theta$. Based on this distribution, we can obtain the next token $o_t$ through manual selection, greedy decoding, or temperature sampling. Upon obtaining a complete sequence $o$, termed a trajectory, a reward function evaluates it and assigns a scalar score $R$ that can reflect user
satisfaction or answer correctness. However, using $R$ directly in gradient estimates leads to high variance and unstable updates; instead, the advantage $A$, which measures how much better an action is than the expected return from that state, is adopted to reduce variance and improve sample efficiency [22, 23, 26, 34].

Sampling and Advantage Computation. Sampling-based advantage estimation is widely adopted in recent advances [22, 26, 34, 35]; below we describe its basic pipeline and two typical computation methods. Given an input $x$, a group of $G$ different trajectories $\{o_i\}_{i=1}^{G}$ is sampled from $\pi_{\theta_{\mathrm{old}}}$. Existing studies widely obtain these trajectories via top-$K$ sampling with temperature, where $\theta_{\mathrm{old}}$ refers to the frozen policy parameters before the current update. Each trajectory $o_i$ receives a scalar reward $R_i$, which is then used to compute trajectory-level advantages via two widely adopted approaches, namely RLOO [26] and GRPO [22]:
$$A_i^{\mathrm{RLOO}} = R_i - \frac{1}{G-1} \sum_{j \neq i} R_j, \qquad A_i^{\mathrm{GRPO}} = \frac{R_i - \mathrm{mean}\left(\{R_j\}_{j=1}^{G}\right)}{\mathrm{std}\left(\{R_j\}_{j=1}^{G}\right)}. \quad (2)$$

Training Objective. Training then proceeds with policy-gradient algorithms. Specifically, let
$$r_{i,t}(\theta) = \frac{\pi_\theta(o_{i,t} \mid x, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid x, o_{i,<t})} \quad (3)$$
be the importance ratio between the updated and old policies for trajectory $i$ at token position $t$. The training objective is given by:
$$J(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid x)} \sum_{i=1}^{G} \sum_{t=1}^{|o_i|} \min\left( r_{i,t}(\theta) A_i,\ \mathrm{clip}\left(r_{i,t}(\theta), 1-\epsilon, 1+\epsilon\right) A_i \right), \quad (4)$$
where $\epsilon$ defines the clipping trust-region radius, which prevents excessively large updates, thereby reducing variance and improving optimization stability.

4 R2ec: Large Recommender Model with Reasoning

R2ec is a large recommender model that "thinks" to recommend. We first outline the model design that enables the incorporation of generative reasoning and discriminative recommendation into a single model in Section 4.1. Then our proposed RL optimization algorithm, tailored to achieving unified reasoning and recommendation capabilities, is introduced in Section 4.2.
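Eqn. (2) is easy to state concretely. A minimal numpy sketch of the two advantage estimators (our own illustration, with toy rewards):

```python
import numpy as np

def rloo_advantages(rewards):
    """RLOO: the baseline for each trajectory is the mean reward of the others."""
    r = np.asarray(rewards, dtype=float)
    total = r.sum()
    return r - (total - r) / (len(r) - 1)

def grpo_advantages(rewards):
    """GRPO: standardize rewards within the sampled group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / r.std()

rewards = [1.0, 0.0, 0.5, 0.5]
a_rloo = rloo_advantages(rewards)  # first entry: 1.0 - (0.0 + 0.5 + 0.5) / 3
a_grpo = grpo_advantages(rewards)
```

Note that RLOO advantages always sum to zero over the group, while GRPO normalizes by the group's standard deviation, so both are invariant to a constant shift of all rewards.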
4.1 Model Design

Architecture. As illustrated in Figure 1, our proposed R2ec is built upon a decoder-only backbone with two task-specific heads: 1) Language-modeling head (lm_head): the token embedding table $H_{\mathcal{T}} \in \mathbb{R}^{|\mathcal{T}| \times d}$, where $\mathcal{T}$ is the token set and each row is a $d$-dimensional embedding for one token. This head is responsible for generating reasoning tokens. 2) Recommendation head (rec_head): the item embedding table $H_{\mathcal{V}} \in \mathbb{R}^{|\mathcal{V}| \times d}$, where $\mathcal{V}$ is the item set and each row $h_v$ in $H_{\mathcal{V}}$ is obtained by encoding the item description prompt with the model itself and extracting the final hidden state. This head is used to score items for recommendation. This design integrates generative reasoning and discriminative recommendation within a single unified model. It supports flexible and scalable item updates by simply adding, deleting, or replacing vectors in the item embedding table, in contrast with generative recommendation systems that require hard-coded tokenization [36, 8, 4, 37], enabling effective zero-shot generalization and accommodating large-scale item catalogs without severe degradation in recommendation quality or efficiency.

Item prediction. Inference begins by feeding the tokenized user prompt $x_u$ (prompt templates can be found in Appendix E) into the transformer backbone, producing an initial hidden state $h_0$. The language-modeling head then maps $h_0$ to the first reasoning token $o_1$. This process continues autoregressively, yielding a sequence of $T$ reasoning tokens $o_{1:T}$. The final hidden state of the generated sequence, $h_T$, is then fed into the recommendation head, where each candidate item $v \in \mathcal{V}$ is
scored by an inner product $s(v) = h_T^\top H_{\mathcal{V}}[v],\ v \in \mathcal{V}$, which determines the final ranking. This mechanism yields a tight reasoning-recommendation coupling: since both the language-modeling head and the recommendation head take the same hidden-state space as input, reasoning directly reshapes $h_T$ and thus yields more accurate recommendation scores $s(v)$. This alignment ensures that reasoning optimization (Section 4.2) contributes directly to finer recommendation.

4.2 Optimization: RecPO

Our goal is to train the policy $\pi_\theta$ to jointly perform reasoning and recommendation, i.e., it must generate coherent reasoning sequences to rank the target item accurately. Accordingly, we structure our optimization workflow in three parts. First, we introduce the trajectory sampling strategy that draws multiple reasoning trajectories for each user (Section 4.2.1). Second, we describe reward and advantage estimation, where discrete ranking signals and softmax similarities are fused into a single scalar reward for the sampled trajectories (Section 4.2.2). Finally, we formulate the training objective, which blends reasoning-level and recommendation-level updates through a clipped-ratio loss (Section 4.2.3). For a complete description of our end-to-end training and inference pipeline, please refer to Appendix D.

4.2.1 Trajectory Sampling

Given the objective of joint learning, we define one trajectory in our setting as spanning the entire reasoning-then-recommend process:
$$x_u \xrightarrow{\pi_\theta} o_1 \xrightarrow{\pi_\theta} \cdots \xrightarrow{\pi_\theta} o_T \xrightarrow{\pi_\theta} v^+,$$
where the initial state $x_u$ encodes the user history and instruction, $o_1, \ldots, o_T$ are the intermediate actions of outputting $T$ reasoning tokens, and $v^+$ is the action of recommending the ground-truth target item. For each user $u$, we first sample $G$ reasonings $\{o_i\}_{i=1}^{G}$ with the old policy $\pi_{\theta_{\mathrm{old}}}$ using the tokenized input $x_u$: $\{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid x_u)$, by top-$K$ sampling with temperature to control stochasticity.
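The scoring step above reduces to a single matrix-vector product over the item embedding table. A small numpy sketch (our own illustration, with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_items = 8, 5                    # hidden size and catalog size (toy values)
H_V = rng.normal(size=(num_items, d))  # item embedding table (the rec_head)
h_T = rng.normal(size=d)               # final hidden state after reasoning

scores = H_V @ h_T                     # s(v) = h_T^T H_V[v] for every item v
top_k = np.argsort(-scores)[:3]        # indices of the 3 highest-scoring items
```

Because scoring is one dense product against a table of vectors, adding, deleting, or replacing items amounts to editing rows of `H_V`, which is the scalability property claimed for the rec_head design.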
Each sampled reasoning is then fed through the policy $\pi_\theta$ to produce a complete reasoning-then-recommend trace, which is subsequently used for reward calculation and advantage estimation.

4.2.2 Reward and Advantage Estimation

Given the above sampled trajectories, we now aim to assign rewards to them. Basically, the reward should align with the evaluation criteria, i.e., the recommendation metrics in our work, encouraging the model to achieve better performance. However, in practice, we find that directly using the recommendation metrics as rewards is insufficient, as many trajectories of varying quality can result in the same top-$K$ ranking. We therefore introduce a fused reward scheme that combines a discrete ranking reward $R_d$ with a continuous similarity reward $R_c$, formulated as follows:
$$R_d = \mathrm{NDCG@}k\left(\mathrm{rank}(v^+)\right), \qquad R_c = \frac{\exp\left(h_T^\top h_{v^+}/\tau\right)}{\sum_{v \in \mathcal{V}} \exp\left(h_T^\top h_v/\tau\right)},$$
where $R_d$ is the NDCG and $R_c$ is the softmax similarity of recommending the target item against all items in $\mathcal{V}$. The final reward is then obtained through a linear combination:
$$R = \beta R_c + (1-\beta) R_d, \quad \beta \in [0,1], \quad (5)$$
where the weighting coefficient $\beta$ is empirically set to $\beta \approx 0.05$ to keep the ranking term dominant while providing sufficient resolution among trajectories that attain identical ranks. With rewards $\{R_i\}_{i=1}^{G}$ we can obtain trajectory-level advantages $\{A_i\}_{i=1}^{G}$ via Eqn. (2).

4.2.3 Training Objective

Given the goal of jointly optimizing reasoning and recommendation, we treat the entire reasoning-then-recommend sequence $(x_u, o_1, \ldots, o_T, v^+)$ as a single RL trajectory. Policy optimization therefore operates over a composite action space, where the policy first
makes token-level decisions to generate reasoning, then selects an item at the recommendation stage. Under this formulation, the importance ratio from Eqn. (3) extends to:
$$r_{i,t}(\theta) = \begin{cases} \dfrac{\pi_\theta(o_{i,t} \mid x_u, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid x_u, o_{i,<t})}, & \text{if } t \leq T \ (\text{reasoning}), \\[2ex] \dfrac{\pi_\theta(v^+ \mid x_u, o_{i,\leq T})}{\pi_{\theta_{\mathrm{old}}}(v^+ \mid x_u, o_{i,\leq T})}, & \text{if } t = T+1 \ (\text{recommendation}). \end{cases} \quad (6)$$
Specifically, we model the recommending action, i.e., recommending the target item $v^+$, via the in-batch softmax:
$$\pi_\theta(v^+ \mid x_u, o_i) = \frac{\exp\left(h_T^\top h_{v^+}/\tau\right)}{\sum_{v' \in \mathcal{B}} \exp\left(h_T^\top h_{v'}/\tau\right)}. \quad (7)$$
Let $\ell_\epsilon(r, A) = \min\left(rA,\ \mathrm{clip}(r, 1-\epsilon, 1+\epsilon)A\right)$ be the standard clipping operator with threshold $\epsilon$; we define the joint reasoning-and-recommendation training objective as:
$$J(\theta) = \mathbb{E}_{\{u,v^+\} \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid x_u)} \frac{1}{G} \sum_{i=1}^{G} \left[ \sum_{t=1}^{T_i} \ell_\epsilon\left(r_{i,t}(\theta), A_i\right) + \delta_{i,i^\star}\, \ell_\epsilon\left(r_{i,T+1}(\theta), A_i\right) \right]. \quad (8)$$
In Eqn. (8), all trajectories contribute to token-level policy updates via the standard clipped objective $\ell_\epsilon\left(r_{i,t}(\theta), A_i\right)$. This design ensures the policy continues to learn from diverse reasoning behaviors. For the final recommendation action, only the trajectory with the highest advantage, identified by $i^\star = \arg\max_j A_j$, contributes gradients to recommendation optimization. This concentrates recommendation learning on the most promising reasoning path, while the presence of the other $G-1$ reasoning paths provides ample exploration in reasoning actions, thus preserving exploration and ensuring effective recommendation learning.

5 Experiments

5.1 Setups

Dataset and Metrics. Following previous work on large recommender models [3, 2, 1, 29], we conducted experiments using real-world datasets sourced from Amazon, including CDs and Vinyl (CDs), Video Games (Games), and Musical Instruments (Instruments). Dataset statistics and preprocessing steps are given in Appendix A. We utilized two commonly used metrics, Hit Rate (H@K) and Normalized Discounted Cumulative Gain (N@K), with cutoff $K$ set to 5, 10, and 20.
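The fused reward of Section 4.2.2 (Eqn. (5)) can be made concrete with a toy numpy sketch. This is our own illustration, assuming the standard single-relevant-item NDCG form, where a target at rank $r \leq k$ scores $1/\log_2(r+1)$ and 0 otherwise:

```python
import numpy as np

def fused_reward(scores, target_idx, beta=0.05, tau=1.0, k=20):
    """scores: model scores over all items; target_idx: ground-truth item."""
    # Discrete ranking reward: NDCG@k with a single relevant item.
    rank = int(np.sum(scores > scores[target_idx])) + 1   # 1-based rank
    r_d = 1.0 / np.log2(rank + 1) if rank <= k else 0.0
    # Continuous similarity reward: softmax probability of the target item.
    z = np.exp((scores - scores.max()) / tau)             # stabilized softmax
    r_c = z[target_idx] / z.sum()
    # Linear fusion, Eqn. (5): small beta keeps the ranking term dominant.
    return beta * r_c + (1.0 - beta) * r_d

scores = np.array([2.0, 0.5, 1.0, -1.0])
reward_top = fused_reward(scores, target_idx=0)   # target ranked 1st
reward_mid = fused_reward(scores, target_idx=2)   # target ranked 3rd... wait: 2nd
```

The continuous term breaks ties between trajectories whose top-$K$ rankings, and hence discrete rewards, are identical, which is exactly the motivation given for fusing the two signals.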
We adopted the full-set recommendation setting, where metrics are computed over the entire item set, providing a better reflection of practical scenarios.

Baselines and Implementation. To comprehensively evaluate R2ec, we selected competitive baselines from various categories, including traditional sequential recommenders (GRU4Rec [38], Caser [39], SASRec [40]), LLM-based recommenders (TIGER [8], BigRec [2], D3 [3]), and reasoning-augmented recommender systems (LangPTune [15]). More baseline details can be found in Appendix B. We used Qwen and Gemma as the LLM backbones. More implementation details are described in Appendix C.

Table 1: The overall performance of baselines and R2ec on three datasets. The best results in each group are marked in Bold, while the second-best results are underlined. * implies the improvements over the second-best results are statistically significant (p-value < 0.05). % Improve represents the relative improvement achieved by R2ec over the best baseline. Each dataset group reports H@5, N@5, H@10, N@10, H@20, N@20.

Method | Instruments | CDs and Vinyl | Video Games
GRU4Rec | 0.0171 0.0135 0.0193 0.0142 0.0201 0.0144 | 0.0067 0.0037 0.0104 0.0041 0.0156 0.0051 | 0.0109 0.0070 0.0181 0.0093 0.0301 0.0123
Caser | 0.0109 0.0141 0.0115 0.0149 0.0127 0.0155 | 0.0045 0.0029 0.0067 0.0037 0.0089 0.0042 | 0.0124 0.0083 0.0191 0.0103 0.0279 0.0126
SASRec | 0.0175 0.0144 0.0201 0.0162 0.0223 0.0210 | 0.0076 0.0104 0.0081 0.0119 0.0086 0.0141 | 0.0129 0.0080 0.0206 0.0105 0.0326 0.0135
TIGER | 0.0171 0.0128 0.0184 0.0132 0.0193 0.0134 | 0.0067 0.0045 0.0097 0.0055 0.0156 0.0069 | 0.0123 0.0085 0.0222 0.0116 0.0323 0.0142
Qwen:
BigRec | 0.0052 0.0033 0.0111 0.0052 0.0189 0.0072 | 0.0045 0.0025 0.0089 0.0039 0.0141 0.0052 | 0.0008 0.0004 0.0016 0.0006 0.0128 0.0034
D3 | 0.0042 0.0020 0.0094 0.0037 0.0192 0.0062 | 0.0082 0.0057 0.0141 0.0076 0.0253 0.0104 | 0.0054 0.0028 0.0104 0.0044 0.0197 0.0067
LangPTune | 0.0127 0.0083 0.0224 0.0115 0.0348 0.0145 | 0.0074 0.0053 0.0156 0.0080 0.0208 0.0094 | 0.0049 0.0027 0.0088 0.0040 0.0140 0.0140
R2ec | 0.0237* 0.0154* 0.0374* 0.0198* 0.0615* 0.0259* | 0.0513* 0.0372* 0.0647* 0.0414* 0.0818* 0.0457* | 0.0288* 0.0185* 0.0532* 0.0264* 0.0827* 0.0337*
% Improve. | 35.43% 6.94% 66.96% 22.22% 52.61% 23.33% | 46.57% 58.30% 37.95% 51.09% 20.83% 40.62% | 84.62% 76.19% 104.62% 87.23% 92.33% 50.45%
Gemma:
BigRec | 0.0068 0.0048 0.0101 0.0058 0.0130 0.0066 | 0.0030 0.0030 0.0052 0.0037 0.0119 0.0053 | 0.0156 0.0105 0.0260 0.0138 0.0430 0.0182
D3 | 0.0072 0.0038 0.0202 0.0080 0.0339 0.0114 | 0.0216 0.0129 0.0327 0.0164 0.0446 0.0194 | 0.0117 0.0068 0.0210 0.0141 0.0378 0.0224
LangPTune | 0.0130 0.0079 0.0221 0.0107 0.0403 0.0152 | 0.0350 0.0235 0.0469 0.0274 0.0677 0.0325 | 0.0068 0.0053 0.0120 0.0059 0.0195 0.0094
R2ec | 0.0264* 0.0161* 0.0397* 0.0203* 0.0615* 0.0257* | 0.0573* 0.0398* 0.0804* 0.0472* 0.1042* 0.0527* | 0.0326* 0.0205* 0.0531* 0.0271* 0.0835* 0.0347*
% Improve. | 50.86% 11.81% 77.23% 25.31% 52.61% 22.38% | 63.71% 69.36% 71.43% 72.26% 53.91% 62.15% | 108.97% 95.24% 104.23% 92.20% 94.19% 54.91%

Table 2: Ablation study on key components of R2ec.
Method | Instruments | CDs and Vinyl | Video Games (each group: H@5 N@5 H@10 N@10 H@20 N@20)
w/o Reasoning | 0.0176 0.0121 0.0296 0.0153 0.0511 0.0200 | 0.0469 0.0321 0.0692 0.0393 0.0945 0.0456 | 0.0277 0.0174 0.0441 0.0227 0.0748 0.0303
w/o Rd | 0.0198 0.0124 0.0338 0.0164 0.0560 0.0224 | 0.0521 0.0338 0.0766 0.0404 0.0974 0.0486 | 0.0302 0.0196 0.0487 0.0254 0.0798 0.0332
w/o Rc | 0.0244 0.0160 0.0394 0.0208 0.0605 0.0258 | 0.0543 0.0382 0.0774 0.0456 0.1012 0.0515 | 0.0316 0.0202 0.0534 0.0264 0.0814 0.0355
R2ec | 0.0264 0.0161 0.0397 0.0203 0.0615 0.0257 | 0.0588 0.0388 0.0804 0.0457 0.1086 0.0525 | 0.0326 0.0205 0.0531 0.0271 0.0853 0.0363

5.2 Overall Performance

To validate the effectiveness of the proposed R2ec, we report the overall performance of the baselines and R2ec in Table 1. By analyzing the results, we gained the following findings. 1) Overall, R2ec consistently outperforms every competing baseline, underscoring the value of jointly optimizing reasoning and recommendation. 2) Traditional methods perform well on the Instruments dataset but struggle on CDs and Games, revealing their limited generality. LangPTune frequently ranks second, which validates the benefit of integrating explicit reasoning into the recommendation pipeline. Finally, large recommenders generally outperform traditional approaches, notably securing the second-best position on the Games dataset; we attribute this advantage to their larger model scale and semantic understanding capabilities. 3) Comparing the two backbones, Gemma consistently outperforms its larger counterpart, achieving up to 2× gains for D3, suggesting that Gemma may generally deliver stronger recommendation performance despite its smaller parameter count (2B vs. 3B).

5.3 Ablation Study

We conducted ablation studies by evaluating the following variants: 1) “w/o Reasoning”: we eliminated reasoning tokens from prompts and trained exclusively with the in-batch contrastive loss; 2) “w/o Rc”:
we kept only the discrete ranking reward Rd and removed the continuous similarity reward Rc in Eqn. (5); and 3) “w/o Rd”: we removed the discrete ranking reward Rd from Eqn. (5). The results are summarized in Table 2, and several observations stand out. 1) R2ec achieves an average improvement of roughly 15% across all metrics compared
to “w/o Reasoning”. These substantial gains demonstrate that our optimization design enables R2ec to better leverage test-time scaling and deliver significantly stronger recommendation performance. 2) “w/o Rc” (using only Rd) consistently outperforms “w/o Rd” (using only Rc), indicating that a reward signal that directly reflects the evaluation result is crucial for training; the continuous reward Rc, despite offering finer granularity, fails to provide meaningful distinctions and instead introduces noise that leads to suboptimal performance. 3) By fusing Rd with a small weight on Rc, our approach preserves the task alignment of the ranking reward while benefiting from the supplementary signal of the continuous term. As a result, R2ec achieves optimal performance on nearly all metrics.

Figure 2: Analysis on advantage estimation methods, RLOO and GRPO, across two datasets. Panels: (a) Train Reward (CDs), (b) Val Reward (CDs), (c) Val Length (CDs), (d) Train Reward (Instruments), (e) Val Reward (Instruments), (f) Val Length (Instruments). “Train Reward” and “Val Reward” indicate the variation in rewards on the training set and validation set, respectively. “Val Length” represents the variation in reasoning length on the validation set.

5.4 In-depth Analysis

5.4.1 Analysis on Advantage Estimation

Accurate advantage estimation is crucial for reducing variance and improving sample efficiency in policy-gradient RL [26, 23, 34].
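The two estimators compared in this subsection differ only in how a group of trajectory rewards is turned into advantages. A minimal sketch under their standard formulations (our simplification; the paper's pipeline may add further details):

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray) -> np.ndarray:
    # GRPO: normalize each trajectory's reward by the group mean and std
    # (unit-variance normalization, which magnifies small reward gaps).
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def rloo_advantages(rewards: np.ndarray) -> np.ndarray:
    # RLOO: baseline each trajectory with the mean reward of the OTHER
    # G-1 trajectories (leave-one-out); no variance rescaling is applied.
    G = len(rewards)
    loo_mean = (rewards.sum() - rewards) / (G - 1)
    return rewards - loo_mean

# Toy NDCG-style rewards for a group of G = 4 sampled trajectories.
rewards = np.array([0.02, 0.05, 0.01, 0.12])
a_grpo = grpo_advantages(rewards)
a_rloo = rloo_advantages(rewards)
```

The absence of variance rescaling in RLOO is consistent with the smaller gradients and steadier (but slower) learning discussed next.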
We therefore evaluated two estimators, GRPO [27] and RLOO [26], in our training pipeline. The variations of the training reward, validation reward, and reasoning length across training steps are summarized in Figure 2, and we made the following observations. 1) First, as shown in Figure 2a and Figure 2d, both RLOO and GRPO exhibit high-variance training curves. This is due to the inherent nature of recommendation environments, which produce highly varied reward magnitudes: some sessions result in high rankings, while others yield very low ones. 2) Second, as Figure 2b and Figure 2e illustrate, GRPO demonstrates faster learning in the initial training steps and consistently outperforms RLOO in terms of validation reward, whereas RLOO progresses more steadily. This divergence stems from GRPO’s unit-variance normalization, which magnifies rewards into larger gradients that accelerate early learning. 3) Third, as shown in Figure 2c and Figure 2f, GRPO’s reasoning length gradually increases as training progresses, consistent with the phenomenon observed in LLM reasoning training [27], while RLOO maintains a certain level of stability. This is because RLOO-driven optimization does not encourage longer reasoning given the low reward magnitudes in our task.

5.4.2 Analysis on Trajectory Sampling

For R2ec’s training, we performed trajectory sampling via the temperature τ and top-K sampling, which influence the stochasticity and diversity of the generated samples. To quantify their impact, we varied τ and K, and the results are presented in Figure 3. 1) It is observed that increasing the temperature
produces longer reasoning and boosts recommendation performance, i.e., NDCG@5. A higher temperature introduces greater sampling entropy, allowing the model to explore a wider range of reasoning trajectories. 2) Conversely, increasing top-K shortens the reasoning length and generally leads to a decline in recommendation performance. This is because a larger top-K enlarges the candidate token set, which counterintuitively mitigates length hacking, yet an excessively large K reintroduces noisy, low-quality samples.

5.4.3 Analysis on Group Size

During training, we sampled a group of trajectories to estimate their advantages. We therefore conducted experiments to analyze how varying group sizes impact performance. The results are outlined in Figure 3.

Figure 3: Analysis on trajectory sampling and group size over the CDs dataset. (a) and (b) show the impact of temperature and top-K sampling on performance and reasoning length, respectively. (c) and (d) present the effect of group size on NDCG@5 and Hit Rate@5, respectively, for Qwen and Gemma.

We gained several key findings as follows: 1) Performance improves for both backbones as group size increases, but the rate of improvement gradually slows down. While a larger group size generally leads to more explored paths, it also raises the training cost. These results suggest that selecting a group size of 6 or 8 is sufficient, and further increasing the group size is unnecessary. 2) Comparing the two backbones, we found a difference in sensitivity to group size. Qwen’s performance at a group size of 1 lags significantly behind Gemma’s, but it improves rapidly as the group size increases.
Gemma performs well even with smaller groups, likely because its pretraining exposed it to a broader range of reasoning scenarios, equipping it with stronger initial reasoning-for-recommendation capabilities.

5.4.4 More Analysis

Due to space limitations, we provide additional analyses in the appendices: a case study in Appendix H and an efficiency analysis in Appendix G. We also discuss limitations in Appendix F.

6 Conclusion and Future Work

In this study, we investigate the integration of reasoning into large recommender models by introducing R2ec, a unified model that combines generative reasoning with discriminative recommendation within a single autoregressive framework. The model is optimized using RecPO, a novel reinforcement learning framework, without relying on human-annotated reasoning data. Extensive experiments on multiple real-world datasets demonstrate that R2ec achieves substantial gains over traditional, LLM-based, and reasoning-augmented baselines. Further ablation and in-depth analyses provide insights into the impact of model and optimization design on reasoning emergence, recommendation quality, and inference efficiency. Taken together, these findings highlight the importance of tightly coupling reasoning and recommendation to unlock large recommender models’ full potential. In the future, we aim to further investigate efficient reasoning in larger recommender models, striving for optimal “thinking” in recommendations.

References

[1] Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang, and Xiangnan He. LLaRA:
Large Language-Recommendation Assistant, May 2024. arXiv:2312.02445 [cs].
[2] Keqin Bao, Jizhi Zhang, Wenjie Wang, Yang Zhang, Zhengyi Yang, Yancheng Luo, Chong Chen, Fuli Feng, and Qi Tian. A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems, December 2023. arXiv:2308.08434.
[3] Keqin Bao, Jizhi Zhang, Yang Zhang, Xinyue Huo, Chong Chen, and Fuli Feng. Decoding Matters: Addressing Amplification Bias and Homogeneity Issue in Recommendations for Large Language Models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10540–10552, Miami, Florida, USA, November 2024. Association for Computational Linguistics.
[4] Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ming Chen, and Ji-Rong Wen. Adapting Large Language Models by Integrating Collaborative Semantics for Recommendation, April 2024. arXiv:2311.09049.
[5] Jun Hu, Wenwen Xia, Xiaolu Zhang, Chilin Fu, Weichang Wu, Zhaoxin Huan, Ang Li, Zuoli Tang, and Jun Zhou. (SAID) Enhancing Sequential Recommendation via LLM-based Semantic Embedding Learning. In Companion Proceedings of the ACM Web Conference 2024, pages 103–111, Singapore, Singapore, May 2024. ACM.
[6] Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, and Julian McAuley. Text Is All You Need: Learning Language Representations for Sequential Recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’23, pages 1258–1267, New York, NY, USA, August 2023. Association for Computing Machinery.
[7] Xinyu Zhang, Linmei Hu, Luhao Zhang, Dandan Song, Heyan Huang, and Liqiang Nie. Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information, September 2024. arXiv:2409.01605 [cs].
[8] Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan H. Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Q.
Tran, Jonah Samost, Maciej Kula, Ed H. Chi, and Maheswaran Sathiamoorthy. Recommender Systems with Generative Retrieval, November 2023. arXiv:2305.05065.
[9] Xuansheng Wu, Huachi Zhou, Yucheng Shi, Wenlin Yao, Xiao Huang, and Ninghao Liu. Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations, March 2024. arXiv:2306.17256 [cs].
[10] Qidong Liu, Xiangyu Zhao, Yejing Wang, Zijian Zhang, Howard Zhong, Chong Chen, Xiang Li, Wei Huang, and Feng Tian. Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation, April 2025. arXiv:2504.18383 [cs].
[11] Qidong Liu, Xian Wu, Wanyu Wang, Yejing Wang, Yuanshao Zhu, Xiangyu Zhao, Feng Tian, and Yefeng Zheng. LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation, December 2024. arXiv:2409.19925 [cs].
[12] Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, and Xiangyu Zhao. LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation, November 2024. arXiv:2405.20646 [cs].
[13] Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning, March 2025. arXiv:2503.09516 [cs].
[14] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling, February 2025. arXiv:2501.19393 [cs].
[15] Zhaolin Gao, Joyce Zhou, Yijia Dai, and Thorsten Joachims. End-to-end Training for Recommendation with Language-based User Profiles, October 2024. arXiv:2410.18870.
[16] Yi Fang, Wenjie Wang,
Yang Zhang, Fengbin Zhu, Qifan Wang, Fuli Feng, and Xiangnan He. Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment, February 2025. arXiv:2502.02061 [cs].
[17] Jieyong Kim, Hyunseo Kim, Hyunjin Cho, SeongKu Kang, Buru Chang, Jinyoung Yeo, and Dongha Lee. Review-driven Personalized Preference Reasoning with Large Language Models for Recommendation, December 2024. arXiv:2408.06276 [cs].
[18] Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, Rui Zhang, and Yong Yu. Towards Open-World Recommendation with Knowledge Augmentation from Large Language Models, December 2023. arXiv:2306.10933.
[19] Wenjun Peng, Guiyang Li, Yue Jiang, Zilong Wang, Dan Ou, Xiaoyi Zeng, Derong Xu, Tong Xu, and Enhong Chen. Large Language Model based Long-tail Query Rewriting in Taobao Search, March 2024. arXiv:2311.03758 [cs].
[20] Jiacheng Lin, Tian Wang, and Kun Qian. Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning, March 2025. arXiv:2503.24289 [cs] version: 1.
[21] Alicia Y. Tsai, Adam Kraft, Long Jin, Chenwei Cai, Anahita Hosseini, Taibai Xu, Zemin Zhang, Lichan Hong, Ed H. Chi, and Xinyang Yi. Leveraging LLM Reasoning Enhances Personalized Recommender Systems, July 2024. arXiv:2408.00802.
[22] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models, April 2024. arXiv:2402.03300 [cs].
[23] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms, August 2017. arXiv:1707.06347 [cs].
[24] Yafu Li, Xuyang Hu, Xiaoye Qu, Linjie Li, and Yu Cheng. Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback, January 2025. arXiv:2501.12895 [cs].
[25] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct Preference Optimization: Your Language Model is Secretly a Reward Model, July 2024. arXiv:2305.18290.
[26] Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs, February 2024. arXiv:2402.14740 [cs].
[27] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[28] Jian Jia, Yipei Wang, Yan Li, Honggang Chen, Xuehan Bai, Zhaocheng Liu, Jian Liang, Quan Chen, Han Li, Peng Jiang, and Kun Gai. LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application, December 2024. arXiv:2405.03988 [cs].
[29] Yuxin Chen, Junfei Tan, An Zhang, Zhengyi Yang, Leheng Sheng, Enzhi Zhang, Xiang Wang, and Tat-Seng Chua. On Softmax Direct Preference Optimization for Recommendation, November 2024. arXiv:2406.09215 [cs].
[30] Junyi Chen, Lu Chi, Bingyue Peng, and Zehuan Yuan. HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling, September 2024. arXiv:2409.12740 [cs].
[31] Hongke Zhao, Songming Zheng, Likang Wu, Bowen Yu, and Jing Wang. LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation, July 2024. arXiv:2407.02833 [cs].
[32] Yuling
Wang, Changxin Tian, Binbin Hu, Yanhua Yu, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Liang Pang, and Xiao Wang. [SLIM] Can Small Language Models be Good Reasoners for Sequential Recommendation?, March 2024. arXiv:2403.04260 [cs].
[33] Bowen Zheng, Zihan Lin, Enze Liu, Chen Yang, Enyang Bai, Cheng Ling, Wayne Xin Zhao, and Ji-Rong Wen. A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation, March 2024. arXiv:2403.13574 [cs] version: 1.
[34] Fengli Xu, Qianyue Hao, Zefang Zong, Jingwei Wang, Yunke Zhang, Jingyi Wang, Xiaochong Lan, Jiahui Gong, Tianjian Ouyang, Fanjin Meng, Chenyang Shao, Yuwei Yan, Qinglong Yang, Yiwen Song, Sijian Ren, Xinyuan Hu, Yu Li, Jie Feng, Chen Gao, and Yong Li. Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models, January 2025. arXiv:2501.09686 [cs].
[35] Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning, February 2025. arXiv:2502.14768 [cs].
[36] Wenjie Wang, Honghui Bao, Xinyu Lin, Jizhi Zhang, Yongqi Li, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. Learnable Item Tokenization for Generative Recommendation, August 2024. arXiv:2405.07314.
[37] Yijie Ding, Yupeng Hou, Jiacheng Li, and Julian McAuley. Inductive Generative Recommendation via Retrieval-based Speculation, October 2024. arXiv:2410.02939.
[38] Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. Session-based Recommendation with Graph Neural Networks, January 2019. arXiv:1811.00855.
[39] Jiaxi Tang and Ke Wang. Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding, September 2018. arXiv:1809.07426 [cs].
[40] Wang-Cheng Kang and Julian McAuley. Self-Attentive Sequential Recommendation, August 2018. arXiv:1808.09781 [cs].
A Dataset

Following the temporal-truncation protocol introduced by [3, 2], we construct three subsets, CDs and Vinyl (CDs), Video Games (Games), and Musical Instruments (Instruments), from the latest public Amazon review dataset² spanning May 1996 to October 2023. For each domain, we begin with the most recent year of interactions (October 2022 – October 2023) and, if the resulting number of valid items is insufficient, iteratively roll the time window backward month by month until 10k items are obtained. We omit the 5-core filter so as to retain the natural behaviour characteristic of recommendation scenarios. Each user’s interaction history is then chronologically sorted and truncated to the latest 20 actions, yielding fixed-length sequences for all subsequent modelling and evaluation. Finally, the dataset is split into 80% training, 10% validation, and 10% test. The resulting statistics are listed in Table 3.

² https://amazon-reviews-2023.github.io/index.html

Table 3: Dataset Statistics.
Dataset | Users | Items | Density | Interactions | Train | Val | Test
Video Games | 29,230 | 10,144 | 0.031% | 63,502 | 50,801 | 6,350 | 6,351
CDs and Vinyl | 7,701 | 12,024 | 0.023% | 13,435 | 10,748 | 1,343 | 1,344
Musical Instruments | 15,656 | 10,320 | 0.031% | 34,373 | 27,498 | 3,437 | 3,438

B Baselines

• GRU4Rec [38] utilizes the GRU (Gated Recurrent Unit) architecture to model sequences.
• Caser [39] employs both horizontal and vertical convolutional operations to enhance the capture of high-order interactions within item sequences, improving recommendation accuracy.
• SASRec [40] incorporates a multi-head self-attention mechanism in its self-attentive sequential recommendation model, facilitating the modeling of intricate sequential data patterns.
• TIGER [8] learns
extra tokens from item features to represent items, and then converts the user sequence into a sequence of the new item tokens for next-item generation.
• BigRec [2] utilizes item titles to represent the user sequence for recommendation generation.
• D3 [3] addresses the amplification bias and homogeneity issues in LLM recommenders that generate item titles.
• LangPTune [15] utilizes an LLM as a user profile generator and proposes an end-to-end training pipeline that optimizes LLM-generated user profiles for the recommendation objective.

C Implementation Details

• Hardware: 4x NVIDIA 3090 (24GB) GPUs
• Framework: PyTorch 2.5.0, Transformers 4.48.1, DeepSpeed 0.15.7

Traditional Recommenders. We train all non-LLM baselines with the Cross-Entropy loss and the AdamW optimiser (learning rate 1e-4); the batch size is 256. TIGER [8] adopts T5 as its backbone without further architectural changes.

LLM-based Methods. For every LLM variant, including the contrastive-learning baseline (CL, Section 5.3), we finetune Gemma2-2B-Instruct and Qwen2.5-3B-Instruct. Models are adapted with LoRA (rank = 4) using DeepSpeed Stage 2 and the quantized PagedAdamW optimizer. Training lasts for at most three epochs with early stopping (patience = 1); the maximum generation length is 512 tokens. If public implementations exist, we keep the original hyper-parameters; otherwise, a grid search over learning rates {5e-4, 1e-4} is performed.

R2ec Optimization. The learning rate is set to 1e-5 with a batch size of 24, and we apply linear warm-up for the first 32 steps. For trajectory generation, we set the group size G to four and utilize vLLM³ (tensor parallelism = 1, target GPU utilisation = 95%) for efficient generation; vLLM reserves one GPU for inference and leaves three for training. Sampling uses temperature = 1.5 and top-K = 200. Policy optimization follows the clipping ratio ε = 0.2 and omits KL regularization. Rewards are computed with NDCG@1000 and β = 0.05.
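Eqn. (5) itself is not reproduced in this appendix, so the following sketch only illustrates the reward computation as described in Section 5.3 and the settings above: a discrete NDCG@1000-style ranking term Rd plus a small β-weighted continuous similarity term Rc (the function names and the exact NDCG form are our assumptions):

```python
import math

def fused_reward(rank: int, cos_sim: float, k: int = 1000, beta: float = 0.05) -> float:
    """Fused trajectory reward: discrete ranking term + beta * continuous term.

    rank    -- position of the target item in the ranked candidate list (1-based)
    cos_sim -- continuous similarity between the trajectory's final hidden
               state and the target item embedding (hypothetical Rc signal)
    """
    r_d = 1.0 / math.log2(rank + 1) if rank <= k else 0.0  # discrete ranking reward Rd
    r_c = cos_sim                                          # continuous similarity reward Rc
    return r_d + beta * r_c
```

With β = 0.05, Rc only perturbs the reward slightly, which matches the ablation finding that Rd should dominate while Rc contributes a supplementary signal.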
Unless stated otherwise, these settings are used throughout.

³ https://docs.vllm.ai/en/latest/

D Overall Pipeline

Training. To instantiate R2ec, given the dataset D, the initial policy πθ, and the item embedding cache HV, we train the model with the detailed training process illustrated in Algorithm 1.

Algorithm 1 Training Process
Input: Dataset D, initial policy πθ, embedding function fθ, item embedding table HV
Output: Optimized policy model πθ
1: for step = 1 to N do
2:   if step % T_refresh == 0 then
3:     Refresh item embeddings: HV[v] ← fθ(xv), ∀v ∈ V
4:   end if
5:   Sample a training batch B = {(u, v+)} ∼ D
6:   Encode target item prompts and update the embedding table: HV[v+] ← fθ(xv+), ∀(u, v+) ∈ B
7:   for all (u, v+) in B do
8:     Generate G trajectories: {[o1, v+], ..., [oG, v+]} ∼ πθold(·|xu)
9:     Compute the reward for each trajectory using Eq. (5)
10:    Compute the advantage for each trajectory using Eq. (2)
11:   end for
12:   Update policy parameters θ via the loss in Eq. (8)
13:   Update the old policy: θold ← θ
14: end for

Inference. For inference, we first pass every item prompt xv through the trained model once to obtain all item embeddings HV. At inference time, the model greedily generates a deterministic reasoning o for a user prompt xu; the last hidden state hT is then scored against the embedding table via the inner product s(v) = hT⊤ HV[v]. The top-K items can be recommended by sorting these scores.

E Prompt
User Prompt
Analyze in depth and finally recommend next {category} I might purchase inside <answer> and </answer>. For example, <answer> a product </answer>. Below is my historical {category} purchases and ratings (out of 5):
{% for hist in purchase_histories %}
{% {hist.time_delta} ago: [ {hist.item_title} ] ({hist.rating}) %}

Item Prompt
Summarize key attributes of the following {category} inside <answer> and </answer>:
{% for key, attr in item.meta %}
{% {key}: {attr} %}

Figure 4: Prompt templates for user interaction history and item metadata. The User Prompt encodes a user’s past purchases as a sequence of item titles, relative timestamps (e.g., “2hrs”, “4d”), and explicit ratings (in [1,5]), followed by an instruction to analyze and recommend the next item within the span of <answer> and </answer>. The Item Prompt summarizes structured item attributes (e.g., brand, type, features) with the same format requirement.

As illustrated in Figure 4, user interaction histories and item metadata are serialized into token sequences via the given prompt templates. To signal the boundary between reasoning and recommendation, we introduce the control symbol <answer>, which triggers a shift from the language modeling head to the recommendation head.

Table 4: Average per-request inference latency (mean ± std) of our unified model R2ec versus the two-stage LangPTune pipeline, measured across varying batch sizes.
batch size | R2ec | LangPTune
1 | 16725.54 ± 2636.84 | 19030.95 ± 2753.81
4 | 3980.77 ± 77.13 | 4625.045 ± 67.00
8 | 2220.88 ± 32.00 | 2375.64 ± 69.47

F Limitations

Our work acknowledges two primary limitations. First, introducing explicit reasoning generation inevitably increases inference latency and reduces efficiency due to additional autoregressive decoding steps; nevertheless, exploring the potential of reasoning within recommendation is valuable, and our experiments have empirically confirmed its effectiveness in improving recommendation accuracy.
Second, constrained by computational resources, we employed parameter-efficient tuning (LoRA) rather than full-parameter fine-tuning, and thus do not fully demonstrate the potentially superior performance achievable through comprehensive optimization.

G Analysis on Inference Efficiency

To quantify the efficiency gains of our unified model R2ec over the conventional two-stage pipeline, we benchmark its inference latency against LangPTune [15]. Inference was performed on one NVIDIA RTX 3090 GPU using identical input prompts, maximum token lengths, and candidate sets. We executed 100 queries at batch sizes of 1, 4, and 8. For each batch, per-request latency was calculated by dividing the total duration by the batch size. The mean ± standard deviation for each configuration is reported in Table 4. R2ec achieves approximately a 10% reduction in average per-request inference time, demonstrating the superior efficiency of our model architecture compared to the conventional two-stage approach. Notably, the efficiency gap narrows as batch size increases, since parallelization spreads computation costs over more requests. In addition, the low standard deviations demonstrate that R2ec’s inference speed is reliably stable.

H Analysis on Case Study

To illustrate domain-specific and backbone-specific reasoning behaviors, we analyze representative cases from the CDs and Video Games datasets (Figures 5, 6, 7, 8) using Gemma and Qwen as backbones. 1) On the CDs dataset, both models tend to reason by discussing several potentially interesting items and explicitly enumerating their features and genres before providing a final recommendation. This behavior highlights the models’ ability to leverage
domain knowledge acquired during pretraining and reinforced through RecPO for better recommendation. 2) On the Video Games dataset, the models shift towards inferring user intent from the purchase sequence, speculating on the likely category or intended use of the next purchase. This demonstrates a shift from feature-based enumeration to intent-driven reasoning as the domain changes. 3) Notably, Gemma generally produces shorter and more concise reasoning sequences than Qwen, which often generates more elaborate justifications. These patterns indicate that, while both models benefit from explicit reasoning, their generation style and reliance on pretrained knowledge can manifest differently across domains and backbone choices.

Case Study 1 (CD and Vinyl)

User
Analyze in depth and finally recommend next CD or vinyl I might purchase inside <answer> and </answer>. For example, <answer> a product </answer>. Below is my historical CDs and vinyl purchases and ratings (out of 5):
255d 13.2h ago: [Pure 60’s: The #1 Hits] (5.0)
255d 12.6h ago: [Seven] (4.0)
26.5min ago: [Comeblack] (4.0)
18.9min ago: [Hot Rocks 1964-1971] (5.0)
16.3min ago: [Return to Forever] (4.0)
9.4min ago: [Rock Believer] (5.0)
6.3min ago: [From Small Things: The Best of Dave Edmunds] (3.0)

R2ec (Gemma2-2B)
Based on your past purchases, it seems you enjoy a mix of classic rock, soul/funk, and a touch of progressive rock. You appreciate well-crafted albums with strong vocals and a focus on musicality. Here are some recommendations, keeping in mind your taste:
CD/Vinyl:
• The Very Best of The Rolling Stones (CD/Vinyl): You’ve enjoyed classic rock, and this compilation offers a great introduction to the Stones’ iconic sound. It features their biggest hits and showcases their bluesy roots and rock ’n’ roll energy.
• The Dark Side of the Moon (Vinyl): This Pink Floyd masterpiece is a classic for a reason. It’s a complex and innovative album with a timeless sound.
The vinyl format adds a special touch to the listening experience.
• Kind of Blue (Vinyl): Miles Davis’s iconic jazz album is a must-have for any music lover. It’s a beautiful and introspective album that showcases Davis’s genius as a musician. The vinyl format brings out the nuances of the music.
• The Miseducation of Lauryn Hill (CD/Vinyl): This album is a powerful blend of hip-hop, soul, and R&B. Lauryn Hill’s incredible vocals and the album’s socially conscious lyrics make it a truly special listening experience.
Why these recommendations:
• Variety: These albums offer a mix of genres and styles, reflecting your diverse taste.
• Quality: They are all critically acclaimed and considered essential listening.
• Format: The vinyl format adds a unique and immersive experience, while the CD format offers convenience and portability.
Ultimately, the best way to find your next favorite album is to explore and discover new music. I recommend <answer>

Figure 5: Case Study on CDs with Gemma as backbone.

Case Study 2 (CD and Vinyl)

User
Analyze in depth and finally recommend next CD or vinyl I might purchase inside <answer> and </answer>. For example, <answer> a product </answer>. Below is my historical CDs and vinyl purchases and ratings (out of 5):
97d 7.7h ago: [Live
At The El Mocambo[2 CD]] (5.0)
97d 7.5h ago: [Radio Silence] (5.0)
2d 16.5h ago: [Little Box ’O’ Snakes The Sunburst Years 1978-1982] (5.0)
2d 16.0h ago: [Goats Head Soup Super Deluxe] (5.0)

R2ec (Qwen2.5-3B): Given your history of purchasing and rating albums, it seems you have a strong preference for live recordings, classic rock, and deluxe editions. Based on this, here are some recommendations for CDs or vinyls that align with your tastes:
1. “The Wall” (Deluxe Edition). Artist: Pink Floyd. Format: CD or Vinyl. Description: This is a legendary album that has been reissued multiple times, including a deluxe edition that includes rare tracks, documentaries, and bonus material. It’s a must-have for any Pink Floyd fan. Rating: Given your appreciation for deluxe editions and classic rock, this would likely receive a high rating.
2. “The Dark Side of the Moon” (Super Deluxe Edition). Artist: Pink Floyd. Format: CD or Vinyl. Description: Another iconic album that has been extensively reissued. The super deluxe edition includes a wealth of bonus content, making it a comprehensive collection for fans. Rating: Similar to “Goats Head Soup,” this would also likely receive a high rating due to its extensive content and your appreciation for deluxe editions.
3. “Live at the Royal Albert Hall” by David Bowie. Artist: David Bowie. Format: CD or Vinyl. Description: This live album captures one of David Bowie’s most famous performances, showcasing his versatility and the energy of his live shows. It’s a great addition to your collection if you enjoy live recordings. Rating: Given your enjoyment of live recordings, this would likely be highly rated.
4. “The Complete Studio Albums” by Fleetwood Mac. Artist: Fleetwood Mac. Format: CD or Vinyl. Description: This box set includes all of Fleetwood Mac’s studio albums, providing a comprehensive overview of their career. It’s a great way to explore their evolution as a band.
Rating: If you’re a fan of Fleetwood Mac, this would likely be highly rated due to the quality of their music and the comprehensive nature of the collection.
5. “The Best of the Beatles” (Deluxe Edition). Artist: The Beatles. Format: CD or Vinyl. Description: A compilation that includes some of the Beatles’ most popular songs, arranged in chronological order. The deluxe edition often includes additional content like rare recordings and documentaries. Rating: Given your appreciation for deluxe editions and classic rock, this would likely receive a high rating.

Conclusion: Based on your past purchases and ratings, you might want to consider “The Wall” (Deluxe Edition), “The Dark Side of the Moon” (Super Deluxe Edition), “Live at the Royal Albert Hall” by David Bowie, “The Complete Studio Albums” by Fleetwood Mac, or “The Best of the Beatles” (Deluxe Edition). Each of these options offers a unique and comprehensive look into the artists’ work, which aligns with your tastes.

Figure 6: Case Study on CDs with Qwen as backbone.

Case Study 3 (Video Games)

User: Analyze in depth and finally recommend next video game I might purchase inside <answer> and </answer>. For example, <answer> a product </answer>. Below is my historical | https://arxiv.org/abs/2505.16994v1 |
video games purchases and ratings (out of 5):
80d 1.6h ago: [For Apple Watch Fast Charger USB C, 2 in 1 Apple MFi Certified 3FT iWatch Charger Magnetic Charging Cable &PD USBC Fast Wall Charger Power Adapter for Apple Watch Series 6/SE/5/4/3/2/1&iPhone 14/13/12] (5.0)
76d 7.5h ago: [Kickstand Back Shell Holder Back Bracket for Nintendo Switch Console, Switch Replacement Kickstand Repair Kit with Screwdriver] (4.0)
45d 12.5h ago: [VOYEE Switch Controllers Wireless, Compatible with Switch Pro Controllers for Switch/Switch Lite/OLED/PC, One-Key Pairing Gamepad with 9-Color RGB LEDs 800mAh Battery 6-Axis Motion Wake Up Function] (5.0)
40d 23.5h ago: [Gvyugke 2.4GHz Wireless Gaming Headset for PS4, PS5, PC, Nintendo Switch, Bluetooth 5.2 Gaming Headphones with Mic for Mobile Device, Noise Canceling, Bass Surround, 50mm Driver, 40H Battery] (5.0)

R2ec (Gemma2-2B): It seems you enjoy tech accessories and gaming peripherals! Based on your recent purchases, you likely appreciate:
• Functionality: You’re drawn to products that enhance your gaming experience, like the charging cable and the wireless gaming headset.
• Convenience: You value products that simplify your life, like the kickstand for your Switch.
• Quality: You prioritize products that are well-made and reliable, evidenced by your high ratings for the charging cable and the gaming headset.
Considering these preferences, here are some recommendations for your next video game purchase: <answer>

Figure 7: Case Study on Games with Gemma as backbone.

Case Study 4 (Video Games)

User: Analyze in depth and finally recommend next video game I might purchase inside <answer> and </answer>. For example, <answer> a product </answer>.
Below is my historical video games purchases and ratings (out of 5):
79d 15.5h ago: [BUSQUEDA Pro Head Strap with Battery for Quest 2, Magnetic Connection 6000mAh Replaceable VR Power, Lightweight & Balance Elite Strap Meta Accessories for Enhanced Support and Comfort in VR] (5.0)
69d 14.2h ago: [iTwano Stainless VR Stand for Quest 2 Pro, Desk Mount VR Holder and Headset Stand Compatible with Quest Pro 2 Rift S] (5.0)
59d 18.9h ago: [ROCCAT Vulcan II Max Optical-Mechanical PC Gaming Keyboard with Customizable RGB Illuminated Keys and Palm Rest, Titan II Smooth Linear Switches, Aluminum Plate, 100M Keystroke Durability Black] (5.0)
57d 21.9h ago: [ZIYOU LANG K3 Mechanical Gaming Keyboard 98 Keys with Number pad Chroma RGB Illuminated Water-Resistant Anti-Ghosting NKRO Retro Metal Panel Type-C for PC/Mac Gamer Typist Optical Clicky Blue Switch] (5.0)
57d 21.8h ago: [Corsair K60 PRO TKL Wired Optical-Mechanical OPX Linear Switch Gaming Keyboard with 8000Hz Polling Rate - Black] (4.0)
52d 16.8h ago: [IROK FE87/104 RGB Mechanical Keyboard, Hot Swappable Gaming Keyboard, Customizable Backlit, Magnet Upper Cover Type-C Wired Keyboard for Mac Windows-White/Brown Switch] (5.0)

R2ec (Qwen2.5-3B): Given your purchase history, it is clear that your interests focus primarily on VR setup accessories, gaming peripherals (particularly high-quality keyboards), and devices that enhance comfort and customizability during play, rather than on specific video games themselves. Based on the trends observed in your history:
• VR Content: Multiple purchases of Quest 2/Pro accessories indicate a strong inclination towards immersive VR experiences.
• Comfort & Ergonomics: Investment in premium straps and ergonomic keyboards suggests | https://arxiv.org/abs/2505.16994v1 |
DecoupledESC: Enhancing Emotional Support Generation via Strategy-Response Decoupled Preference Optimization

Chao Zhang, Xin Shi, Xueqiao Zhang, Yifan Zhu, Yi Yang, Yawei Luo∗
Zhejiang University
{chao_zhang, yaweiluo}@zju.edu.cn
(∗Corresponding author)

Abstract

Recent advances in Emotional Support Conversation (ESC) have improved emotional support generation by fine-tuning Large Language Models (LLMs) via Supervised Fine-Tuning (SFT). However, common psychological errors still persist. While Direct Preference Optimization (DPO) shows promise in reducing such errors through pairwise preference learning, its effectiveness in ESC tasks is limited by two key challenges: (1) Entangled data structure: existing ESC data inherently entangles psychological strategies and response content, making it difficult to construct high-quality preference pairs; and (2) Optimization ambiguity: applying vanilla DPO to such entangled pairwise data leads to ambiguous training objectives. To address these issues, we introduce Inferential Preference Mining (IPM) to construct high-quality preference data, forming the IPM-PrefDial dataset. Building upon this data, we propose a Decoupled ESC framework inspired by Gross’s Extended Process Model of Emotion Regulation (Gross, 2015), which decomposes the ESC task into two sequential subtasks: strategy planning and empathic response generation. Each was trained via SFT and subsequently enhanced by DPO to align with the psychological preference. Extensive experiments demonstrate that our Decoupled ESC framework outperforms joint optimization baselines, reducing preference bias and improving response quality. (Our data and code are available at https://anonymous.4open.science/r/DecoupledESC-0E37.)

1 Introduction

Mental health is essential to well-being (Prince et al., 2007), yet rising stress and fast-paced life have increased related issues (Bor et al., 2014; Brundtland, 2000; Paisley and McMahon, 2001). According to the WHO, 1 in 8 people suffer from mental disorders (Organization, 2022). Amid a shortage of professionals, this underscores the need for scalable solutions, where Large Language Models (LLMs) offer promising potential.

[Figure 1: Comparison from Vanilla-SFT to Vanilla-DPO to Decoupled-DPO. Vanilla-SFT lacks negative preference data, leading to high preference bias; Vanilla-DPO uses entangled preference data, causing potential negative optimization (it regards PsNr and NsPr as purely negative samples); Decoupled-DPO disentangles strategy and response, effectively reducing bias and psychological errors.]

To enhance the performance of LLMs in Emotional Support Conversation (ESC), prior works (Zhang et al., 2024a; Chen et al., 2023) have constructed several large-scale, high-quality dialogue datasets and applied Supervised Fine-Tuning (SFT) to improve model responses. Among them, Liu et al. (Liu et al., 2021) built the ESConv dataset based on Hill’s Helping Skills Theory (Hill, 1999) and filtered out the FailedESConv dataset. The ESConv dataset follows a three-phase structure (Exploration → Comfort → Action) and includes eight types of support strategies, each paired with corresponding responses; details are provided in Appendix A and C.1. This structured design significantly enhances a model’s ability to generate empathetic dialogue.

Observation. Currently, SFT has become the mainstream approach in | https://arxiv.org/abs/2505.16995v1 |
the ESC field. [arXiv:2505.16995v1 [cs.CL] 22 May 2025] However, we observe that models still frequently exhibit common psychological errors (Gross, 2002; Raskin and Rogers, 2005; Stebnicki, 2007) during inference, which align with those identified in the FailedESConv dataset (§3.1). In addition, Zhao et al. (Zhao et al., 2025) found that SFT’s focus on single gold strategy-response pairs limits adaptability to nuanced contexts, weakening empathetic support. To mitigate this, they use MCTS to collect pairwise preference data linking strategies and responses, and apply Direct Preference Optimization (Vanilla-DPO) to guide the model in choosing appropriate strategies, thereby partially reducing preference bias and improving response quality.

Challenges. However, as shown in Figures 1 and 4, our further analysis reveals that the limitations of current work lie not in the SFT or DPO training methods themselves, but rather in two overlooked challenges (§3.2): (1) Entangled data structure: existing ESC datasets heavily entangle psychological strategies with response content, making it difficult to construct high-quality preference pairs. For instance, penalizing responses with correct strategies but flawed content may degrade data quality. (2) Optimization ambiguity: applying Vanilla-DPO directly to such entangled data can blur training objectives and even lead to negative optimization outcomes.

Approach. To address these issues, we first introduce the Inferential Preference Mining (IPM) method, which automatically constructs preference samples decoupled from strategy-response. Specifically, we use dynamic data routing to route four types of psychological error samples identified from the SFT model’s inference data to the DPO training stage of either strategy planning or response generation, depending on the error type.
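The dynamic routing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the error-type names, the assignment of each error to the strategy side or the response side, and all field names are our assumptions.

```python
# Hypothetical sketch of IPM-style dynamic data routing: a failed inference
# sample whose error concerns strategy choice becomes strategy-planning (SP)
# preference data; one whose error concerns the reply text becomes
# response-generation (RG) preference data. The taxonomy below is assumed.
STRATEGY_ERRORS = {"strategy_mismatch", "early_emotion_shift"}
RESPONSE_ERRORS = {"lack_of_empathy", "template_response"}

def route(sample):
    """Return which DPO stage a failed sample should feed ('SP' or 'RG')."""
    if sample["error_type"] in STRATEGY_ERRORS:
        return "SP"
    if sample["error_type"] in RESPONSE_ERRORS:
        return "RG"
    return None  # unrecognized error types are discarded

def build_preference_pairs(failed_samples, ground_truth):
    """Pair each routed negative with its human-annotated ground truth."""
    sp_pairs, rg_pairs = [], []
    for s in failed_samples:
        stage = route(s)
        gt = ground_truth[s["dialogue_id"]]
        if stage == "SP":
            sp_pairs.append({"chosen": gt["strategy"], "rejected": s["strategy"]})
        elif stage == "RG":
            rg_pairs.append({"chosen": gt["response"], "rejected": s["response"]})
    return sp_pairs, rg_pairs
```

Routing by error type is what keeps the two resulting preference sets disentangled: a sample with a sound strategy but a flawed reply never penalizes strategy learning, and vice versa.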
These samples are then paired with human-annotated ground truth samples to form the Inferential Preference Mining Preference Dialogues (IPM-PrefDial) dataset, containing 21k strategy preference pairs and 11k response preference pairs. This dataset provides disentangled and high-quality supervision signals for two separate DPO models. Building on this, we propose a Decoupled ESC optimization framework, grounded in the Extended Process Model of Emotion Regulation (EPMER) (Gross, 2015), which divides emotion regulation into three sequential stages: identification, strategy selection, and implementation; details are provided in Appendix A.1. Accordingly, we explicitly split the ESC task into two subtasks: Strategy Planning (SP) and Response Generation (RG), enabling structured and goal-driven emotional support in dialogue.

Results. Across multiple evaluation metrics, our decoupled optimization framework significantly outperforms joint training baselines. It not only enhances the diversity of strategy selection but also improves response quality and empathy.

Contributions. Our key contributions are summarized as follows:
• We analyze common psychological errors in existing SFT paradigms, and introduce the IPM method to construct the IPM-PrefDial, a strategy–response decoupled dataset.
• We propose a Decoupled ESC framework inspired by Gross’s Extended Process Model of Emotion Regulation, which explicitly splits the ESC task into two subtasks: Strategy Planning and Response Generation, effectively mitigating preference bias and enhancing response quality.
• Extensive experiments show that our Decoupled ESC optimization framework significantly outperforms joint optimization baselines across multiple evaluation metrics. | https://arxiv.org/abs/2505.16995v1 |
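The two-stage decomposition (strategy planning, then strategy-conditioned response generation) implies an inference flow that can be sketched as below. The callables and their signatures are placeholders for the two separately tuned models, not the paper's actual interfaces.

```python
# Minimal sketch of decoupled two-stage ESC inference: a strategy-planning
# (SP) model first selects a support strategy, and a response-generation (RG)
# model then generates a reply conditioned on that strategy. The toy lambdas
# stand in for the separately SFT+DPO-trained models (an assumption here).
def decoupled_esc_turn(dialogue_history, sp_model, rg_model):
    strategy = sp_model(dialogue_history)            # stage 1: plan the strategy
    response = rg_model(dialogue_history, strategy)  # stage 2: implement it
    return strategy, response

# Toy stand-ins for the two tuned models:
toy_sp = lambda hist: "Reflection of Feelings"
toy_rg = lambda hist, strat: f"[{strat}] It sounds like this has been really hard for you."
```

Making the strategy an explicit intermediate output is what allows each stage to receive its own preference supervision instead of one entangled objective.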
2 Related Work

Emotional Support Conversation. Emotional Support Conversation (ESC) aims to alleviate users’ emotional distress through empathetic and supportive responses. Liu et al. (Liu et al., 2021) first introduced the concept and built the ESConv dataset with 8 support strategies and 1.3k dialogues. They also released the FailedESConv dataset, containing 196 failed dialogues. Subsequent studies improved ESC systems by enhancing data quality (Sun et al., 2021; Qiu et al., 2023; Chen et al., 2023), adding external strategy planners (Deng et al., 2024; He et al., 2024, 2025), and incorporating commonsense reasoning (Tu et al., 2022; Deng et al., 2023; Ma et al., 2023). Supervised Fine-Tuning (SFT) remains the dominant training paradigm with strong real-world performance (e.g., MeChat (Qiu et al., 2023), SweetieChat (Ye et al., 2025)). Recently, preference-based methods like Direct Preference Optimization (DPO) (Rafailov et al., 2023) have emerged. Zhao et al. (Zhao et al., 2025) introduced DPO with MCTS-based data to jointly optimize strategies and responses. However, the fixed coupling limited independent optimization and resulted in lower response quality.

[Figure 2: Observation 1: Comparison of common psychological error type proportions among the FailedESConv dataset, Qwen-SFT inference results, and Llama-SFT inference results.]

Reinforcement Learning for LLM. Reinforcement Learning (RL) was initially introduced into LLM training to align with human preferences (Ouyang et al., 2022). This approach uses a reward model to guide the optimization of the policy model via the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017). Recently, the Group Relative Policy Optimization (GRPO) algorithm was proposed to enhance model reasoning capabilities (Shao et al., 2024), which eliminates the need for a critic model by using within-group rewards as advantages.
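The within-group advantage that lets GRPO dispense with a critic can be sketched as standardizing each completion's reward against its group; this is a simplified illustration of the idea (details such as clipping and the KL term are omitted).

```python
# Sketch of GRPO's critic-free advantage estimate: for a group of sampled
# completions to the same prompt, each completion's advantage is its reward
# standardized by the group's mean and standard deviation. Simplified; the
# full algorithm also involves ratio clipping and a KL penalty.
def group_relative_advantages(rewards):
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]
```

Because the baseline is the group mean rather than a learned value function, no separate critic model needs to be trained or queried.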
While these online reinforcement learning methods are effective, they suffer from high computational costs and reliance on accurate reward modeling. As a simpler offline optimization algorithm, DPO optimizes the policy model from pairwise preference data directly without the need for reward modeling. Due to its simplicity and effectiveness, DPO has achieved significant success across multiple domains, including mathematical reasoning, code generation, and recommendation systems (Lai et al., 2024; Luo and Yang, 2024; Zhang et al., 2024b, 2025; Chen et al., 2024).

3 Preliminary Observations

To investigate the causes of low response quality in the ESC task, we analyzed outputs from six models: Base, SFT, and DPO versions of Qwen2.5-7B-Instruct (Team, 2024) and Llama3.1-8B-Instruct (Dubey et al., 2024).

[Figure 3: Strategy Distribution across different models.]

3.1 Preference Bias and Psychological Errors (Obs 1)

Current base and SFT models (e.g., Qwen-Base, Qwen-SFT) show strong strategy preferences (Kang et al., 2024), often overusing fixed strategies and failing to adapt to users’ emotional states. As shown in Figure 3, their strategy distributions diverge significantly from the ground truth. To further explore the impact of preference bias on response quality, we compared the outputs of Qwen-Base, Qwen-SFT, Llama-Base, and Llama-SFT with the FailedESConv dataset. As shown in Figure 2, common psychological errors (Raskin and Rogers, 2005; Stebnicki, 2007) observed in | https://arxiv.org/abs/2505.16995v1 |
the SFT-generated responses frequently aligned with those found in FailedESConv, including: (1) Strategy Mismatch, (2) Lack of Empathy, (3) Early Emotion Shift, (4) Template Response, and (5) Emotion Misread. The definitions and corresponding examples are detailed in Appendix A.2. Although SFT reduces some errors, the empathy quality remains unsatisfactory. We argue that this stems from the SFT paradigm’s reliance on high-quality samples (Zhao et al., 2025) without incorporating negative supervision signals from the FailedESConv dataset, failing to address bias in strategy selection and emotional understanding.

3.2 Limitations of the DPO Method (Obs 2)

To address Obs 1, a natural approach is to treat filtered failures as negative signals and train with DPO. Prior work (Zhao et al., 2025) adopted a vanilla DPO setup that jointly optimizes strategy-response pairs. However, as shown in Figure 1 and Figure 3, Vanilla-DPO relies heavily on the Question strategy and shows a strong preference for it, which fails to significantly reduce preference bias (§6.2). To investigate the failure of Vanilla-DPO in aligning with human preferences, we conduct a controlled study. We split the preference data based on response quality into two types: (PsPr, PsNr), where the preferred sample has both a positive strategy (Ps) and a positive response (Pr), and the non-preferred sample has a positive strategy (Ps) but a negative response (Nr); and (PsPr, NsNr), where the non-preferred sample contains both a negative strategy (Ns) and a negative response (Nr). We train models using each dataset on Qwen and Llama, and evaluate them on preference bias and strategy preference. As shown in Figure 4, models trained on (PsPr, NsNr) consistently outperform those trained on (PsPr, PsNr): they reduce preference bias and better align with diverse strategies. These results show that Vanilla-DPO training with entangled pairs like (PsPr, PsNr) harms strategy learning.
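The controlled split described above amounts to a simple filter over labeled pairs; a minimal sketch follows, with field names that are our own illustrative choices rather than the paper's schema.

```python
# Sketch of the controlled study's data split. Every pair keeps the fully
# positive sample (PsPr) as "chosen"; pairs are grouped by whether the
# rejected sample has a correct strategy but a flawed response (PsNr) or is
# negative on both counts (NsNr). Field names are illustrative assumptions.
def split_pairs(pairs):
    ps_nr = [p for p in pairs
             if p["rej_strategy_ok"] and not p["rej_response_ok"]]
    ns_nr = [p for p in pairs
             if not p["rej_strategy_ok"] and not p["rej_response_ok"]]
    return ps_nr, ns_nr
```

The split makes the source of the ambiguity concrete: in a (PsPr, PsNr) pair the rejected side carries a *correct* strategy, so a joint DPO objective pushes probability away from a strategy the model should in fact keep.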
This reveals two issues in Vanilla-DPO:
1. Entangled data structure: The coupling between strategy and response complicates the construction of high-quality preference data, highlighting the need for more rigorous evaluation and filtering methods.
2. Optimization ambiguity: Entangled strategy and response training leads to optimization ambiguity or even negative optimization: mislabeling PsNr as a negative sample leads to negative optimization on strategy learning, while NsPr harms response learning.

According to Hill’s Helping Skills Theory (Hill, 1999) and Gross’s Extended Process Model of Emotion Regulation (EPMER) (Gross, 2015), strategies should precede response generation and serve as its guidance. In essence, the two are decouplable. However, when strategies and responses are jointly optimized as a single DPO objective, the optimization objective becomes ambiguous.

[Figure 4: Observation 2: (a) Preference Bias and (b) Strategy Preference across Qwen and Llama models trained on different preference datasets.]

To address this, we propose a decoupled modeling and staged optimization framework for ESC, which separates strategy planning from response generation, enabling more structured and targeted improvements in dialogue quality.

4 Datasets

4.1 Preference Dataset Construction

Inferential Preference Mining. Standard SFT approaches focus on expert demonstrations and neglect | https://arxiv.org/abs/2505.16995v1 |