consistently brings further improvements, confirming the effectiveness of our tailored RL strategy.

Tool-Use Efficiency Analysis. To validate whether Tool-Star can efficiently invoke tools during stepwise reasoning, we compare the tool-use accuracy (TE) of various TIR methods on knowledge-intensive and computation-based reasoning datasets, as shown in Figure 4 (left). Results indicate that while Search-R1 and ToRL achieve high accuracy on knowledge and computation tasks respectively, each underperforms in the other domain, highlighting the tendency of single-tool RL methods to specialize in a specific reasoning type. In contrast, Tool-Star consistently maintains high tool-use accuracy across both task types, outperforming the baselines and demonstrating its efficiency in tool utilization.

Figure 5: Performance comparison of Tool-Star (3B) in local and web search modes (overall performance on HotpotQA, 2Wiki, MuSiQue, and Bamboogle).

Local Search vs. Web Search. Inspired by R1-Searcher [53] and DeepResearcher [81], we further compare local search and web search with Tool-Star-3B on knowledge-intensive reasoning tasks. As shown in Figure 5, after switching to web search, all domains except HotpotQA (which shows no obvious improvement) demonstrate significant gains, especially 2Wiki (+13% F1) and Bamboogle (+8% F1). We attribute these performance differences mainly to two factors: (1) The Wikipedia retrieval corpus is a subset of the web-page corpus; moreover, when combined with the browser agent for reading web pages, web search can provide more accurate and concise information than wiki documents. (2) We use the Bing API⁴ for web search, and its retrieval models and ranking strategies outperform the E5-base retriever we deployed offline for local search.

Inference-Time Tool-Use Analysis.
To verify the effectiveness of the inference-time tool design, we further analyze its impact on the performance of Tool-Star and DotaMath across two datasets. As shown in Figure 4 (middle and right), both models exhibit a notable reduction in tool-use errors during reasoning after applying inference-time tool optimizations, accompanied by significant performance improvements. These results not only underscore the detrimental effect of incorrect tool usage on reasoning accuracy, but also reinforce the motivation for incorporating inference-time tools. Notably, the performance gain for DotaMath is substantially larger than that for Tool-Star, with improvements exceeding 20% on GSM8K. This suggests that inference-time tools are particularly beneficial for models with weaker coding capabilities, while models with stronger coding proficiency, such as Tool-Star, naturally make fewer tool-related errors and thus have less room for improvement.

Figure 6: Training curves of reward and response length for models with different parameter scales.

Scaling Analysis on Parameter Scales. To investigate the effectiveness of Tool-Star across different parameter scales, we display the RL training curves of Qwen2.5 models with 0.5B, 1.5B, and 3B parameters in Figure 6. Our key observations are as follows: (1) All models achieve good reward scores at the beginning of training (step 0), benefiting from the cold-start SFT stage, which significantly reduces the exploration burden in early RL; (2) As training progresses, the average reward scores steadily improve across all model sizes, indicating that our self-critic RL framework further improves
TIR capabilities on top of the cold-start initialization. (3) Notably, the average reward shows signs of emergent improvement around step 10, while no clear inflection point (i.e., "aha moment") is observed in response length. Instead, response lengths gradually stabilize, suggesting convergence toward optimized response patterns. More training details can be found in Appendix A.3.

Scaling Analysis on the RL Training Curve. To better understand the training dynamics of the reinforcement learning process, we present the trends of KL penalty, average reward score, average response length, and validation score over the first 100 steps of RL training (Figure 7). Our key observations are as follows:

• All metrics—including reward score, KL penalty, and validation score—increase steadily with training steps, indicating the stability of our self-critic RL training method.

• The initial reward scores of the 3B and 1.5B models are already above 0 (within a possible range of -1 to 1.1), with corresponding validation scores starting at approximately 0.375 and 0.29 respectively—substantially higher than the vanilla instruct model's score of below 0.15. These results demonstrate the effectiveness of our cold-start strategy. In contrast to RL methods trained entirely from scratch [35, 53, 23], the cold-start approach endows the model with a baseline level of TIR capability from the outset, significantly reducing the need for extensive black-box exploration during the early stages of RL training.

⁴ https://www.microsoft.com/en-us/bing/apis/bing-web-search-api

Figure 7: The scaling relationship of KL loss, reward, response length, and validation score for Tool-Star (Qwen2.5-3B & 1.5B, instruct versions).

• After cold-start SFT, the reward score shows mild emergence within the first 0–20 steps, though less pronounced than in from-scratch RL (i.e., no sudden rise-and-drop).
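For reference, the reward bounds quoted above (a possible range of -1 to 1.1) suggest a composite signal. The sketch below is purely hypothetical: the -1 format penalty, the 1.0 correctness reward, and the 0.1 multi-tool bonus are our assumptions chosen to match the stated range, not the paper's exact definition.

```python
def composite_reward(format_ok: bool, answer_correct: bool,
                     used_multiple_tools: bool) -> float:
    """Hypothetical hierarchical reward in [-1, 1.1].

    Assumed scheme (not Tool-Star's exact definition):
      -1.0 for malformed output, 0.0 for a well-formed wrong answer,
      1.0 for a correct answer, plus a 0.1 bonus when multiple tools
      cooperated in a correct rollout.
    """
    if not format_ok:
        return -1.0          # format violation: hard penalty
    if not answer_correct:
        return 0.0           # well-formed but wrong
    reward = 1.0             # correct final answer
    if used_multiple_tools:
        reward += 0.1        # multi-tool collaboration bonus
    return reward
```

Under such a scheme, the reward floor (-1) is hit only by format failures, which is consistent with a cold-started model starting above 0: SFT already teaches the output format, so early rollouts rarely incur the hard penalty.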
Similarly, the response length does not fluctuate sharply but gradually converges to a stable value, further supporting the stability benefits of cold-start training.

5 Conclusion

In this paper, we introduce Tool-Star, an RL-based framework designed to enable LLMs to autonomously invoke multiple external tools during stepwise reasoning. To address the scarcity of tool-use data, Tool-Star employs a scalable TIR data synthesis pipeline, incorporating normalization and a difficulty-aware data classification process. Furthermore, we propose a two-stage training framework to enhance multi-tool collaborative reasoning in LLMs, consisting of a Cold-Start Fine-tuning phase and a Multi-Tool Self-Critic RL stage. This framework progressively fosters effective multi-tool collaboration and strengthens the LLM's understanding of reward principles. Extensive experiments across over ten challenging benchmarks demonstrate that Tool-Star significantly improves both the effectiveness and efficiency of tool-augmented reasoning in LLMs.

References

[1] Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, Zenan Zhou, and Weipeng Chen. ReSearch: Learning to reason with search for LLMs via reinforcement learning, 2025.

[2] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Trans. Mach. Learn. Res., 2023.

[3] Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, Lei Fang, Zhongyuan Wang, and Ji-Rong Wen. An empirical study on eliciting and improving r1-like reasoning models. CoRR, abs/2503.04548,
2025.

[4] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.

[5] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[6] Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou, and Ning Ding. Process reinforcement through implicit rewards. CoRR, abs/2502.01456, 2025.

[7] Quy-Anh Dang and Chris Ngo. Reinforcement learning for reasoning in small LLMs: What works and what doesn't. CoRR, abs/2503.16219, 2025.

[8] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. CoRR, 2023.

[9] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L.
Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu,
Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025.

[10] Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu, Chang Zhou, and Jingren Zhou. Self-play with execution feedback: Improving instruction-following capabilities of large language models. CoRR, abs/2406.13542, 2024.

[11] Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 177–198. Association for Computational Linguistics, 2024.

[12] Guanting Dong, Chenghao Zhang, Mengjie Deng, Yutao Zhu, Zhicheng Dou, and Ji-Rong Wen. Progressive multimodal reasoning via active retrieval. CoRR, abs/2412.14835, 2024.

[13] Guanting Dong, Yutao Zhu, Chenghao Zhang, Zechen Wang, Zhicheng Dou, and Ji-Rong Wen. Understand what LLM needs: Dual preference alignment for retrieval-augmented generation. CoRR, abs/2406.18676, 2024.

[14] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[15] Mehdi Fatemi, Banafsheh Rafiee, Mingjie Tang, and Kartik Talamadupula. Concise reasoning via reinforcement learning, 2025.
[16] Jiazhan Feng, Shijue Huang, Xingwei Qu, Ge Zhang, Yujia Qin, Baoquan Zhong, Chengquan Jiang, Jinxin Chi, and Wanjun Zhong. ReTool: Reinforcement learning for strategic tool use in LLMs, 2025.

[17] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu Chen. ToRA: A tool-integrated reasoning agent for mathematical problem solving. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.

[18] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Joaquin Vanschoren and Sai-Kit Yeung, editors, Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021.

[19] Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Donia Scott, Núria Bel, and Chengqing Zong, editors, Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6609–6625. International Committee on Computational Linguistics, 2020.

[20] Jian Hu. REINFORCE++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

[21] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. In
Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1049–1065. Association for Computational Linguistics, 2023.

[22] Jinhao Jiang, Jiayi Chen, Junyi Li, Ruiyang Ren, Shijie Wang, Wayne Xin Zhao, Yang Song, and Tao Zhang. RAG-Star: Enhancing deliberative reasoning with retrieval augmented verification and refinement. CoRR, abs/2412.12881, 2024.

[23] Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. CoRR, abs/2503.09516, 2025.

[24] Jiajie Jin, Yutao Zhu, Xinyu Yang, Chenghao Zhang, and Zhicheng Dou. FlashRAG: A modular toolkit for efficient retrieval-augmented generation research. CoRR, abs/2405.13576, 2024.

[25] Shanglin Lei, Guanting Dong, Xiaoping Wang, Keheng Wang, and Sirui Wang. InstructERC: Reforming emotion recognition in conversation with a retrieval multi-task LLMs framework. CoRR, abs/2309.11911, 2023.

[26] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

[27] Chengpeng Li, Guanting Dong, Mingfeng Xue, Ru Peng, Xiang Wang, and Dayiheng Liu. DotaMath: Decomposition of thought with code assistance and self-correction for mathematical reasoning. CoRR, abs/2407.04078, 2024.

[28] Chengpeng Li, Mingfeng Xue, Zhenru Zhang, Jiaxi Yang, Beichen Zhang, Xiang Wang, Bowen Yu, Binyuan Hui, Junyang Lin, and Dayiheng Liu.
START: Self-taught reasoner with tools. CoRR, abs/2503.04625, 2025.

[29] Chengpeng Li, Mingfeng Xue, Zhenru Zhang, Jiaxi Yang, Beichen Zhang, Xiang Wang, Bowen Yu, Binyuan Hui, Junyang Lin, and Dayiheng Liu. START: Self-taught reasoner with tools. CoRR, abs/2503.04625, 2025.

[30] Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-augmented code emulator. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.

[31] Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. Search-o1: Agentic search-enhanced large reasoning models. CoRR, abs/2501.05366, 2025.

[32] Xiaoxi Li, Jiajie Jin, Guanting Dong, Hongjin Qian, Yutao Zhu, Yongkang Wu, Ji-Rong Wen, and Zhicheng Dou. WebThinker: Empowering large reasoning models with deep research capability. arXiv preprint arXiv:2504.21776, 2025.

[33] Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yongkang Wu, Zhonghua Li, Qi Ye, and Zhicheng Dou. RetroLLM: Empowering large language models to retrieve fine-grained evidence within generation. CoRR, abs/2412.11919, 2024.

[34] Xuefeng Li, Haoyang Zou, and Pengfei Liu. LIMR: Less is more for RL scaling. CoRR, abs/2502.11886, 2025.

[35] Xuefeng Li, Haoyang Zou, and Pengfei Liu. ToRL: Scaling tool-integrated RL. CoRR, abs/2503.23383, 2025.

[36] Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, Yingying Zhang, Fei Yin, Jiahua Dong,
Zhijiang Guo, Le Song, and Cheng-Lin Liu. From system 1 to system 2: A survey of reasoning large language models. CoRR, abs/2502.17419, 2025.

[37] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.

[38] Pan Lu, Bowen Chen, Sheng Liu, Rahul Thapa, Joseph Boen, and James Zou. OctoTools: An agentic framework with extensible tools for complex reasoning. CoRR, abs/2502.11271, 2025.

[39] Grégoire Mialon, Clémentine Fourrier, Thomas Wolf, Yann LeCun, and Thomas Scialom. GAIA: A benchmark for general AI assistants. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.

[40] Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, et al. Imitate, explore, and self-improve: A reproduction report on slow-thinking reasoning systems. arXiv preprint arXiv:2412.09413, 2024.

[41] OpenAI. Learning to reason with LLMs, September 2024.

[42] Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: Automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023.

[43] Long Phan, Alice Gatti, Ziwen Han, Nathaniel Li, Josephina Hu, Hugh Zhang, Sean Shi, Michael Choi, Anish Agrawal, Arnav Chopra, Adam Khoja, Ryan Kim, Jason Hausenloy, Oliver Zhang, Mantas Mazeika, Daron Anderson, Tung Nguyen, Mobeen Mahmood, Fiona Feng, Steven Y. Feng, Haoran Zhao, Michael Yu, Varun Gangal, Chelsea Zou, Zihan Wang, Jessica P.
Wang, Pawan Kumar, Oleksandr Pokutnyi, Robert Gerbicz, Serguei Popov, John-Clark Levin, Mstyslav Kazakov, Johannes Schmitt, Geoff Galgon, Alvaro Sanchez, Yongki Lee, Will Yeadon, Scott Sauers, Marc Roth, Chidozie Agu, Søren Riis, Fabian Giska, Saiteja Utpala, Zachary Giboney, Gashaw M. Goshu, Joan of Arc Xavier, Sarah-Jane Crowson, Mohinder Maheshbhai Naiya, Noah Burns, Lennart Finke, Zerui Cheng, Hyunwoo Park, Francesco Fournier-Facio, John Wydallis, Mark Nandor, Ankit Singh, Tim Gehrunger, Jiaqi Cai, Ben McCarty, Darling Duclosel, Jungbae Nam, Jennifer Zampese, Ryan G. Hoerr, Aras Bacho, Gautier Abou Loume, Abdallah Galal, Hangrui Cao, Alexis C. Garretson, Damien Sileo, Qiuyu Ren, Doru Cojoc, Pavel Arkhipov, Usman Qazi, Lianghui Li, Sumeet Motwani, Christian Schröder de Witt, Edwin Taylor, Johannes Veith, Eric Singer, Taylor D. Hartman, Paolo Rissone, Jaehyeok Jin, Jack Wei Lun Shi, Chris G. Willcocks, Joshua Robinson, Aleksandar Mikov, Ameya Prabhu, Longke Tang, Xavier Alapont, Justine Leon Uro, Kevin Zhou, Emily de Oliveira Santos, Andrey Pupasov Maksimov, Edward Vendrow, Kengo Zenitani, Julien Guillod, Yuqi Li, Joshua Vendrow, Vladyslav Kuchkin, and Ng Ze-An. Humanity's last exam. CoRR, abs/2501.14249, 2025.

[44] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 5687–5711. Association for Computational Linguistics, 2023.

[45] Cheng Qian, Emre Can Acikgoz, Qi He, Hongru Wang, Xiusi Chen, Dilek Hakkani-Tür, Gokhan
Tur, and Heng Ji. ToolRL: Reward is all tool learning needs. arXiv preprint arXiv:2504.13958, 2025.

[46] Cheng Qian, Emre Can Acikgoz, Qi He, Hongru Wang, Xiusi Chen, Dilek Hakkani-Tür, Gokhan Tur, and Heng Ji. ToolRL: Reward is all tool learning needs, 2025.

[47] Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma Gongque, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, and Honggang Zhang. We-Math: Does your large multimodal model achieve human-like mathematical reasoning? CoRR, abs/2407.01284, 2024.

[48] Yiwei Qin, Xuefeng Li, Haoyang Zou, Yixiu Liu, Shijie Xia, Zhen Huang, Yixin Ye, Weizhe Yuan, Hector Liu, Yuanzhi Li, et al. O1 replication journey: A strategic progress report–part 1. arXiv preprint arXiv:2410.18982, 2024.

[49] Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2024.

[50] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. KDD '20, 2020.

[51] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. CoRR, abs/2402.03300, 2024.

[52] Joykirat Singh, Raghav Magazine, Yash Pandya, and Akshay Nambi. Agentic reasoning and tool integration for LLMs via reinforcement learning. arXiv preprint arXiv:2505.01441, 2025.
[53] Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-Searcher: Incentivizing the search capability in LLMs via reinforcement learning. CoRR, abs/2503.05592, 2025.

[54] Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. Int. J. Comput. Vis., 130(6):1526–1565, 2022.

[55] Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Ben Hu. Stop overthinking: A survey on efficient reasoning for large language models. CoRR, abs/2503.16419, 2025.

[56] Hao Sun, Zile Qiao, Jiayan Guo, Xuanbo Fan, Yingyan Hou, Yong Jiang, Pengjun Xie, Yan Zhang, Fei Huang, and Jingren Zhou. ZeroSearch: Incentivize the search capability of LLMs without searching, 2025.

[57] Jiankai Sun, Chuanyang Zheng, Enze Xie, Zhengying Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu, Mingyu Ding, Hongyang Li, Mengzhe Geng, Yue Wu, Wenhai Wang, Junsong Chen, Zhangyue Yin, Xiaozhe Ren, Jie Fu, Junxian He, Wu Yuan, Qi Liu, Xihui Liu, Yu Li, Hao Dong, Yu Cheng, Ming Zhang, Pheng-Ann Heng, Jifeng Dai, Ping Luo, Jingdong Wang, Ji-Rong Wen, Xipeng Qiu, Yike Guo, Hui Xiong, Qun Liu, and Zhenguo Li. A survey of reasoning
with foundation models. CoRR, abs/2312.11562, 2023.

[58] Shuang Sun, Huatong Song, Yuhao Wang, Ruiyang Ren, Jinhao Jiang, Junjie Zhang, Fei Bai, Jia Deng, Wayne Xin Zhao, Zheng Liu, Lei Fang, Zhongyuan Wang, and Ji-Rong Wen. SimpleDeepSearcher: Deep information seeking via web-powered reasoning trajectory synthesis. 2025.

[59] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.

[60] Qwen Team. QwQ: Reflect deeply on the boundaries of the unknown. Hugging Face, 2024.

[61] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023.

[62] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509, 2022.

[63] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. MuSiQue: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554, 2022.

[64] Hongru Wang, Cheng Qian, Wanjun Zhong, Xiusi Chen, Jiahao Qiu, Shijue Huang, Bowen Jin, Mengdi Wang, Kam-Fai Wong, and Heng Ji. OTC: Optimal tool calls via reinforcement learning. arXiv preprint arXiv:2504.14870, 2025.

[65] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. Text embeddings by weakly-supervised contrastive pre-training, 2024.
[66] Yiping Wang, Qing Yang, Zhiyuan Zeng, Liliang Ren, Lucas Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, Weizhu Chen, Shuohang Wang, Simon Shaolei Du, and Yelong Shen. Reinforcement learning for reasoning in large language models with one training example, 2025.

[67] Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, Monica Lam, Yiping Lu, Kyunghyun Cho, Jiajun Wu, Li Fei-Fei, Lijuan Wang, Yejin Choi, and Manling Li. RAGEN: Understanding self-evolution in LLM agents via multi-turn reinforcement learning, 2025.

[68] Jialong Wu, Wenbiao Yin, Yong Jiang, Zhenglin Wang, Zekun Xi, Runnan Fang, Linhai Zhang, Yulan He, Deyu Zhou, Pengjun Xie, and Fei Huang. WebWalker: Benchmarking LLMs in web traversal. CoRR, abs/2501.07572, 2025.

[69] Junde Wu, Jiayuan Zhu, and Yuyuan Liu. Agentic reasoning: Reasoning LLMs with tools for the deep research. CoRR, abs/2502.04644, 2025.

[70] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang,
Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. Qwen2 technical report. CoRR, abs/2407.10671, 2024.

[71] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. CoRR, abs/2409.12122, 2024.

[72] Dayu Yang, Tianyang Liu, Daoan Zhang, Antoine Simoulin, Xiaoyi Liu, Yuwei Cao, Zhaopu Teng, Xin Qian, Grey Yang, Jiebo Luo, and Julian J. McAuley. Code to think, think to code: A survey on code-enhanced reasoning and reasoning-driven code intelligence in LLMs. CoRR, abs/2502.19411, 2025.

[73] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP, pages 2369–2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics.

[74] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. LIMO: Less is more for reasoning. CoRR, abs/2502.03387, 2025.

[75] Chao Yu, Xuejing Zheng, Hankz Hankui Zhuo, Hai Wan, and Weilin Luo. Reinforcement learning with knowledge representation and reasoning: A brief survey. CoRR, abs/2304.12090, 2023.

[76] Dian Yu, Baolin Peng, Ye Tian, Linfeng Song, Haitao Mi, and Dong Yu. SIaM: Self-improving code-assisted mathematical reasoning of large language models. CoRR, abs/2408.15565, 2024.
[77] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Hongli Yu, Weinan Dai, Yuxuan Song, Xiangpeng Wei, Hao Zhou, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Lin Yan, Mu Qiao, Yonghui Wu, and Mingxuan Wang. DAPO: An open-source LLM reinforcement learning system at scale. CoRR, abs/2503.14476, 2025.

[78] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. CoRR, abs/2308.01825, 2023.

[79] Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild. CoRR, abs/2503.18892, 2025.

[80] Yuxiang Zhang, Yuqi Yang, Jiangming Shu, Xinyan Wen, and Jitao Sang. Agent models: Internalizing chain-of-action generation into reasoning models. arXiv preprint arXiv:2503.06580, 2025.

[81] Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. DeepResearcher: Scaling deep research via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160, 2025.

Appendix

A Supplement Experimental Results
  A.1 Deep Web Exploration Result
  A.2 Different RL Algorithms: REINFORCE++ vs. GRPO
  A.3 Analysis of Tool-Call Masking
B Details of Tool-Star Framework
  B.1 Training Algorithm of Tool-Star
  B.2 Details of Tool Design
C Implementation Details
  C.1 The Statistics of Data Sources
  C.2 Training Details of Tool-Star
  C.3 Details of Retrieval
  C.4 Baseline Details
  C.5 Datasets Details
D Instruction Templates
  D.1 Instructions for Tool-Star
  D.2 Instructions for Code Debugger
  D.3 Instructions for Reasoning Chain Refiner
  D.4 Instructions for Standard RAG
  D.5 Instructions for Knowledge-Intensive Reasoning Model
  D.6 Instructions for Computational Reasoning Model
    D.6.1 Instructions for Direct Sampling and Reasoning Computational Datasets
    D.6.2 Instructions for Direct Sampling and Reasoning Knowledge-Intensive Datasets
E Case Study
  E.1 TIR Cases of Tool-Star
  E.2 Cases of Self-Critic DPO Data . . . 27
F Limitations and Future Directions . . . 27
G Broader Impacts . . . 28

Table 3: The performance of Tool-Star and other baseline methods on the GAIA and HLE datasets. The experimental settings are consistent with those in Table 1. Here, Qwen refers to the use of the Qwen2.5-3B-Instruct model, and Llama refers to the use of the Llama3.2-3B-Instruct model.

Dataset | Qwen | Llama | ToRL | DotaMath | RAG | Search-o1 | Search-R1 | Multi-Tool Prompting | Tool-Star
GAIA    | 3.9  | 5.8   | 8.7  | 4.9      | 8.7 | 7.8       | 9.7       | 5.8                  | 15.5
HLE     | 7.4  | 5.0   | 7.2  | 4.8      | 7.2 | 6.2       | 7.8       | 5.6                  | 8.0

A Supplement Experimental Results

A.1 Deep Web Exploration Result

In this section, we further evaluate the performance of the Tool-Star framework on web exploration tasks. Specifically, we assess it on the General AI Assistants (GAIA) and Humanity's Last Exam (HLE) datasets under the same experimental settings as in the main results. As shown in Table 3, the Tool-Star framework achieves strong performance even on web search tasks. Specifically, on the GAIA dataset, our method achieves the highest score, outperforming the second-best method by over 60%. On the HLE dataset, our method also demonstrates strong competitiveness. This fully validates the effectiveness of our approach on the deep web exploration task. In terms of task difficulty, GAIA consists of complex deep web exploration tasks that typically require both information retrieval and reasoning. To address such challenges, Tool-Star often employs a multi-tool collaboration strategy to accomplish the subtasks. Examples of multi-tool collaborative reasoning are shown in Table 6.

A.2 Different RL Algorithm: REINFORCE++ VS. GRPO

Table 4: Performance comparison of RL algorithms on Tool-Star (Qwen2.5-3B-Instruct).

Models          | GSM8K | MATH | HQA. | Bamb.
Tool-Star (SFT) | 80.0  | 79.2 | 47.5 | 48.3
+ GRPO          | 85.0  | 81.0 | 51.9 | 52.5
+ REINFORCE++   | 84.4  | 80.5 | 48.8 | 53.1

As two widely used rule-based reinforcement learning algorithms, GRPO [51] and REINFORCE++ [20] are compared on Tool-Star in Table 4 in terms of training curves and experimental performance. The overall differences between GRPO and REINFORCE++ are relatively minor. Experimental results show that Tool-Star with GRPO slightly outperforms REINFORCE++ on computational reasoning tasks such as GSM8K and MATH. However, on knowledge-intensive reasoning tasks, REINFORCE++ slightly outperforms GRPO on Bamboogle. These results suggest that while the performance of the two methods fluctuates slightly across different domains, the overall difference remains within 3%.

A.3 Analysis of Tool-Call Masking

Figure 8: The result of tool-call masking (validation score over training steps for Tool-Star(3B) with and without masking).

In our work, we apply masking to the retrieved and Python execution results when computing the loss to prevent the training policy from being biased by tool outputs. Therefore, all tool call results in Tool-Star
are masked. Based on this design, we conduct ablation studies on Tool-Star (Qwen2.5-3B-Instruct) with or without tool-call masking in RL training. As shown in Figure 8, removing the masking leads to a rapid collapse of the model's reward score to -1, with persistent strategy hacking. This stands in contrast to prior findings in single-tool reinforcement learning [80], where, according to their experiments, masking in single-tool RL did not lead to reward hacking. We conjecture that tool feedback in multi-tool reasoning introduces significantly greater instability during RL training.

B Details of Tool-Star Framework

B.1 Training Algorithm of Tool-Star

In this section, we delve into the overall workflow of our Multi-Tool Self-Critic Reinforcement Learning algorithm.

Algorithm 1 Multi-Tool Self-Critic Reinforcement Learning
Require: Reasoning model π_θ, external tools T, reward function R_φ, cycle times C, GRPO steps S
1:  Input: Datasets D, task instruction I
2:  Initialize π_θold ← π_θ
3:  for i ∈ {1, ..., C} do
4:      for step ∈ {1, ..., S} do    ▷ Start vanilla RL phase (GRPO)
5:          Sample a batch D_b from D
6:          for each query q ∈ D_b do
7:              q ← I ⊕ q
8:              Sample G sequences for q with T: {o_j}_{j=1..G} ∼ π_θ(· | q, T)
9:              Calculate the optimization objective J_θ of GRPO according to Eq. (4)
10:             Update the weight parameters of π_θ: θ ← θ + η · ∇_θ J_θ
11:     Randomly sample D_sample from D    ▷ Start Self-Critic DPO
12:     for each query q ∈ D_sample do
13:         q ← I ⊕ q
14:         Sample the query q G times with T to generate G different results: {o_j}_{j=1..G} ∼ π_θ(· | q, T)
15:         Filter one correct result and one incorrect result from the G results to obtain o_chosen and o_reject
16:         Calculate the optimization objective J_θ of DPO according to Eq. (5)
17:         Update the weight parameters of π_θ: θ ← θ + η · ∇_θ J_θ
18: Output: Trained model π_θnew
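As a schematic illustration, the alternating loop of Algorithm 1 can be sketched in Python. This is a minimal sketch under stated assumptions: `sample_with_tools`, `is_correct`, `grpo_step`, and `dpo_step` are hypothetical placeholders standing in for the tool-augmented rollout machinery and the GRPO/DPO updates of Eqs. (4) and (5), not the authors' implementation.

```python
import random

def sample_with_tools(policy, query, g):
    # Hypothetical stand-in: roll out g tool-augmented reasoning chains.
    return [f"{policy}:{query}:rollout{i}" for i in range(g)]

def is_correct(output):
    # Hypothetical rule-based answer check (here: even rollout ids pass).
    return int(output.rsplit("rollout", 1)[-1]) % 2 == 0

def grpo_step(policy, rollouts):
    # Placeholder for the GRPO parameter update of Eq. (4).
    return policy

def dpo_step(policy, chosen, rejected):
    # Placeholder for the DPO preference update of Eq. (5).
    return policy

def train(policy, dataset, instruction, cycles, grpo_steps, g=8, batch_size=2):
    for _ in range(cycles):
        # Phase 1: vanilla RL (GRPO) for a fixed number of steps.
        for _ in range(grpo_steps):
            for query in random.sample(dataset, batch_size):
                rollouts = sample_with_tools(policy, instruction + query, g)
                policy = grpo_step(policy, rollouts)
        # Phase 2: self-critic DPO on a correct/incorrect rollout pair.
        for query in random.sample(dataset, batch_size):
            rollouts = sample_with_tools(policy, instruction + query, g)
            good = [o for o in rollouts if is_correct(o)]
            bad = [o for o in rollouts if not is_correct(o)]
            if good and bad:  # need one of each to form a preference pair
                policy = dpo_step(policy, good[0], bad[0])
    return policy

print(train("pi_theta", ["q1", "q2", "q3"], "I: ", cycles=2, grpo_steps=2))  # prints "pi_theta"
```

The key design point the sketch captures is the interleaving: a block of GRPO steps followed by a DPO pass on self-generated preference pairs, repeated for C cycles.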
The training process consists of two alternating phases over multiple cycles, as illustrated in Algorithm 1: (1) Vanilla RL stage: In each cycle, the model π_θ is first optimized via GRPO for a fixed number of steps. For each query sampled from the dataset, the task instruction is prepended, and the model generates multiple output sequences using external tools. These outputs are used to compute the GRPO objective (Eq. 4), and the model parameters are updated accordingly. (2) Self-Critic DPO stage: After GRPO, a self-critic phase is performed to refine the model's decision-making. For each sampled query, the model generates multiple outputs, from which one preferred (correct) and one rejected (incorrect) response are selected. These are used to compute the DPO objective (Eq. 5), further updating the model via gradient ascent. This alternating training scheme enables the model not only to improve through reinforcement learning with tool-augmented outputs but also to self-correct by distinguishing between good and bad generations. The final output is a tool-aware reasoning model π_θnew optimized for both performance and efficiency.

B.2 Details of Tool Design

To provide a more detailed illustration of the design and usage guidelines of our tools, we present the specifics of both the training tools and inference-time tools below.

Table 5: The statistics of data sources in our main result.

Dataset               | Type          | Task                          | # Counts
NuminaMath            | Language-only | Computational Reasoning       | 72.4K
HotpotQA [53]         | Language-only | Knowledge-Intensive Reasoning | 8.1K
2WikiMultiHopQA [19]  | Language-only | Knowledge-Intensive Reasoning | 10K
SimpleDeepSearch [58] | TIR           | Knowledge-Intensive Reasoning | 0.9K
WebWalker [68]        | Language-only | Knowledge-Intensive Reasoning | 3K

Training Tools:

• Search Engine: During inference, the search engine executes queries to retrieve relevant information, supporting both local and web-based retrieval modes. Inspired by DeepResearch [81], we adopt web search during training. Empirically, we find that both using webpage snippets and incorporating a browser agent consistently improve RL performance; however, the former significantly reduces efficiency. During evaluation, we use Wikipedia^5 as the retrieval corpus and E5 [65] as the retriever for four open-domain QA tasks. For all other tasks, we combine web search with the browser agent to obtain higher-quality retrieval results.

• Web Browser Agent: This component accesses URLs from search results, parses webpage content, extracts information relevant to the query, and summarizes it. The process includes webpage parsing, regex-based cleaning, and an auxiliary model that refines the cleaned content based on the search query. The summarized output is then fed back into the reasoning chain. Notably, the browser agent is optional: when it is not used, the search engine directly returns webpage snippets or raw text.

• Code Interpreter: Code snippets generated by the language model are executed in a sandboxed environment, returning either the execution result or error messages depending on correctness. Our Python interpreter implementation follows the design of ToRA [17], ensuring secure and accurate execution.

Inference-time tools:

• Code Debugger: This module automatically corrects erroneous code generated by large language models and guides the code revision process. Specifically, we design a dedicated debug prompt (D.2) that incorporates the model-generated code and compiler error messages, enabling the model to understand and fix the errors for automated code correction.
• Tool-Use Backtracer: This module identifies and reverts to the reasoning step prior to a failed tool call, allowing the model to recover and revise its reasoning path. In our experiments, we compared model-based and rule-based rewinding strategies. As the model often struggles to accurately locate the faulty tool call, we adopt the rule-based approach. Concretely, we set the rewind position at the newline token preceding the special token of the erroneous tool call, prompting the model to regenerate the reasoning chain from that point.

• Reasoning Chain Refiner: When the model output exceeds the maximum length limit, this module prunes and refines redundant steps in the reasoning process, replacing the original chain with a more concise and coherent version to complete the task. We design a refinement prompt (D.3) that enables the model to identify and remove errors or repetitive thoughts in the original reasoning chain and generate a streamlined version. Template examples can be found in Appendix D.3.

To ensure fairness, all deployed models of tools in the experiments (e.g., the browser agent's information refinement process, the code debugger, and the reasoning chain refiner) use instruct-version models with the same parameter size as the test model. Therefore, our inference-time tools can also be regarded as a self-play inference-time scaling process.

^5 https://archive.org/details/enwiki-20181220

C Implementation Details

C.1 The Statistics of Data Sources

In this section, we sample data from sources including Numina-MATH, 2Wiki, and HotpotQA, with the detailed breakdown and corresponding data sizes presented in
Table 5. We strictly adhere to all copyright and licensing terms associated with the datasets and open-source code. For the computational reasoning dataset, we use 72.4K samples aligned with Numina-MATH TIR. For knowledge-based reasoning, we include 10.6K samples from HotpotQA and 2Wiki, aligned with R1-Searcher and ReSearch respectively, along with 0.9K samples from the SimpleDeepSearch dataset and 3K samples randomly selected from the WebWalker training set. In total, we obtain approximately 100K samples, of which 1% is TIR data. It is important to note that we only use the queries from these language-only datasets for reasoning-chain sampling, without relying on the reasoning chains generated by stronger models in the original datasets. Our cold-start data is sampled using Qwen2.5-3B-Instruct, which serves as the backbone model to match the parameter scale of the main table. Following the START settings [28], we set the temperature to 0.7 and the top-p value to 0.95 during sampling. Due to resource constraints, each question is sampled only three times. In the tool-use quality normalization stage, our tool-call frequency threshold β is set to 5. Ultimately, following the multiple stages of quality filtering in Section 3.1, we curated approximately 54K samples from categories 1, 2, and 3 for cold-start training. Furthermore, 10K challenging samples were randomly selected from category-4 samples for the RL phase to enable progressive alignment from simpler to more difficult tasks.

C.2 Training Details of Tool-Star

Cold-Start Supervised Finetuning. After applying our sampling strategies to the original datasets, we use approximately 54K samples for cold-start supervised fine-tuning. During the supervised fine-tuning stage, we fine-tune a Qwen2.5-3B-Instruct model on the training data with a learning rate of 7e-6. We utilize DeepSpeed ZeRO-3 [50] and FlashAttention2 [8] for training. The batch size is set to 128, weight decay to 0.1, and we train for 3 epochs.
Mixed-precision training with BF16 is adopted, and the maximum input length is set to 4096.

Multi-Tool Self-Critic Reinforcement Learning. After SFT, we further train the model using the GRPO algorithm under the VERL framework^6. Similarly, we randomly select around 10K samples from D_RL-tool for reinforcement learning. The learning rate is set to 8, with 8 rollouts per sample. The total training batch size is 128, and the mini-batch size is 16. The maximum output length is set to 4096 to ensure completeness of exploration. To accelerate training, following DAPO [77], we set the maximum number of tool invocations during training to 3. To stabilize training, we set the KL-divergence coefficient in GRPO to 0. The reinforcement learning phase lasts for 2 epochs. All experiments are conducted on 8 NVIDIA A800 GPUs. To ensure reproducibility, all experimental results are averages of three independent trials. We use approximately 300 samples as the validation set during the RL stage, drawn from AIME25, MATH500, HotpotQA, 2WikiMultiHopQA, MuSiQue, and Bamboogle. It is worth noting that all tool invocation results in Tool-Star are masked. As illustrated in Figure 3, these masked segments are excluded from the loss computation to prevent the training policy from being biased toward the tool outputs. Only
the tokens from the text-based reasoning process and tool requests are considered during loss calculation.

Self-Critic DPO Training. In this phase, the learning rate is set to 5e-7 with a cosine scheduler and a 0.1 warm-up ratio. We use DeepSpeed ZeRO Stage 3 and FlashAttention-2 for efficiency, with a global batch size of 64. Training uses a sigmoid loss function with a beta value of 0.3 and spans 2 epochs. Mixed-precision training with BF16 is employed, and the maximum context length is 4096 tokens. We perform self-critic training every 0.5 epoch during the standard reinforcement learning process (approximately every 50 steps), and use the resulting model to resume the subsequent vanilla reinforcement learning training. The detailed data formats are listed in Appendix E.2.

^6 https://github.com/volcengine/verl

C.3 Details of Retrieval

During training, we use the Bing Web Search API as the retriever, configured with the US-EN region setting. For each query, we retrieve 10 web pages as supporting documents. During inference, we use Bing Web Search for the math reasoning and complex reasoning datasets. For open-domain QA, we use Wikipedia as the retrieval corpus: the retriever is based on the Wikipedia 2018 corpus provided by FlashRAG [24] and uses E5-base-v2 as the retrieval model.

C.4 Baseline Details

Proprietary Models

• Qwen2.5 Series [70, 49], developed by Alibaba, is available in several sizes: Qwen2.5-0.5B/1.5B/7B and 72B. The model is trained on data sources spanning 29 languages, enabling it to perform exceptionally well on multilingual tasks. Additionally, Qwen2.5 exhibits strong capabilities in coding and mathematics. Qwen2.5-72B-Instruct is notable for its ability to handle input windows of up to 128K tokens, making it exceptionally well-suited for processing long texts and tackling complex tasks.

• LLaMA 3 Series [14], developed by Meta AI, is released in multiple sub-series, including LLaMA 3, LLaMA 3.1, LLaMA 3.2, and LLaMA 3.3.
LLaMA series models offer two primary variants for each model size: the base model and the instruct model. These models are trained on a mixture of datasets covering a wide range of domains and languages. The LLaMA series models achieve strong performance across reasoning, coding, and multilingual understanding benchmarks. With continued improvements in alignment and context length, they are increasingly suitable for real-world tasks.

Search-Enhanced Baselines

• RAG [26] is a basic baseline that demonstrates the model's ability to directly utilize retrieved information. In our implementation, we directly use the retriever to obtain the top-5 results and insert them into a carefully designed prompt to let the model generate the answer.

• Search-o1 [31] is a reasoning-model-driven RAG method. Its core idea is to allow the reasoning model to autonomously decide whether to perform a search during its reasoning process. It generates queries, retrieves relevant information, organizes the retrieved results, and incorporates them into the model's reasoning chain. The model then continues generation based on this chain until a final answer is reached. In our experiments, we modified the official code of Search-o1 by replacing its base model with one that has the same number of parameters as ours, and we also used the same retriever to
ensure consistency.

• Search-R1 [23] is a reinforcement learning method that trains a model from scratch to call the search engine during reasoning. Its overall process is similar to Search-o1 [31]: the model generates necessary queries during reasoning, retrieves relevant information, and inserts it into the reasoning chain. We follow the original prompt used in the method to ensure its effectiveness.

Code-Enhanced Baselines

• ToRL [35] is a tool-integrated framework trained with reinforcement learning. The framework enables the model to autonomously invoke a code interpreter to solve mathematical reasoning problems. Compared to prompt-based methods and supervised fine-tuning approaches, ToRL encourages the model to explore extensively to find optimal tool-usage strategies, showing significant improvements over traditional methods. At the same time, the framework demonstrates the broad potential of tool integration in mathematical reasoning.

• DotaMath [27] models are fine-tuned on mathematical instruction datasets. After training, the DotaMath model demonstrates strong capabilities in solving complex mathematical reasoning problems. By training on the DotaMathQA dataset built for mathematical problems, the model has acquired the abilities of thought decomposition and self-correction during reasoning. This paradigm opens up a new direction for solving complex math problems.

Multi-tool Enhanced Baselines

• Multi-Tool Prompting refers to the approach where the model learns to use external tools to solve problems solely through prompts. In this setup, we primarily aim to explore whether the model has the capability to autonomously call relevant tools to complete tasks. For the prompt format, we use the same structure as the one employed by Tool-Star. The specific prompt can be found in Appendix D.1.

• ReCall [1] makes significant improvements over the ReSearch framework.
Specifically, compared to the purely search-based framework, it further allows the model to call a code interpreter to solve complex reasoning problems. This setup expands the tasks that ReSearch can handle to mathematical reasoning, while still maintaining ReSearch's original capability to call search engines. Additionally, ReCall introduces a new method for generating data for complex multi-step tasks, enabling the LLM to develop sophisticated reasoning abilities based on multiple tools.

C.5 Datasets Details

We provide a detailed introduction to the computational reasoning benchmarks evaluated by Tool-Star:

• AIME24^7: A dataset of 30 challenging math problems from the 2024 American Invitational Mathematics Examination (AIME I & II), covering algebra, geometry, number theory, and combinatorics. Each JSONL-formatted entry includes the problem ID, statement, detailed solution steps, and the final integer answer. Designed to evaluate large language models on complex mathematical reasoning, the dataset features high difficulty, rigorous multi-step solutions, and unique answers, making it a strong benchmark for assessing advanced problem-solving capabilities.

• AIME25^8 is a dataset comprising 30 challenging mathematical problems sourced from the 2025 American Invitational Mathematics Examination (AIME I & II). These problems encompass a diverse range of topics, including algebra, combinatorics, geometry, number theory, and probability. AIME25 is designed to evaluate the mathematical reasoning capabilities of LLMs. It serves as a benchmark for assessing how well these models can tackle complex mathematical problems requiring multi-step reasoning and understanding of advanced mathematical concepts.

• GSM8K [5] is a widely adopted
benchmark for assessing the mathematical reasoning abilities of large language models through grade-school-level word problems. In our experiments, we utilize its standard test set and report performance based on greedy decoding accuracy (maj@1).

• MATH [18] is a dataset of challenging high-school-level mathematics problems, spanning topics such as Prealgebra, Algebra, Number Theory, Counting & Probability, Geometry, Intermediate Algebra, and Precalculus. Compared to GSM8K, MATH offers greater difficulty and topic diversity.

• MATH500 [37] is a subset of 500 problems drawn from the MATH benchmark developed by OpenAI. This dataset is designed to assess the mathematical reasoning capabilities of large language models. The problems in MATH500 cover a range of mathematical topics including algebra, geometry, trigonometry, and probability. Each entry includes the problem statement, a detailed step-by-step solution, and the final answer, allowing for thorough model training and evaluation.

Furthermore, we also conduct a series of experiments on the following knowledge-intensive reasoning benchmarks:

• WebWalker [68] is a dataset consisting of queries derived from various web pages, where answering a single query may require gathering information from one or multiple pages. In our experiments, we adopt an open-domain setting by using a search engine for retrieval rather than the root URLs provided in the dataset.

^7 https://huggingface.co/datasets/HuggingFaceH4/aime_2024
^8 https://huggingface.co/datasets/math-ai/aime25

• HotpotQA [73] is a widely used question answering dataset that features multi-hop questions built from Wikipedia articles. In our experiments, we use its validation set for evaluation.

• 2WikiMultihopQA [19] is a large-scale multi-hop QA dataset that combines structured knowledge from Wikidata and unstructured text from Wikipedia. It includes diverse question types and annotated reasoning paths to support explainable multi-step inference.
• Bamboogle [44] is a challenging multi-hop QA dataset that contains hand-crafted questions requiring up to four reasoning hops. It tests a model's ability to perform complex and compositional reasoning across multiple facts.

• MuSiQue [63] is a multi-hop dataset that requires compositional inference: each question is a dependency-based multi-hop problem in which one inference step depends on information obtained from the previous step, making the dataset very difficult. We conduct experiments using MuSiQue's dev set.

D Instruction Templates

D.1 Instructions for Tool-Star

Instruction for Tool-Star

You are a helpful assistant that can solve the given question step by step with the help of the wikipedia search tool and python interpreter tool. Given a question, you need to first think about the reasoning process in the mind and then provide the answer. During thinking, you can invoke the wikipedia search tool to search and python interpreter tool to calculate the math problem for fact information about specific topics if needed. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags respectively, and the search query and result are enclosed within <search> </search> and <result> </result> tags respectively. After receiving the search or python result, you should continue your reasoning process begin with <think>. For example, <think> This is the reasoning process. </think> <search> search query here </search> <result> search result here </result> <think> This is the reasoning process. </think> <python> python code here </python> <result> python interpreter
result here </result> <think> This is the reasoning process. </think> <answer> The final answer is answer here </answer>. In the last part of the answer, the final exact answer is enclosed within \boxed{} with latex format.

D.2 Instructions for Code Debugger

Instruction for Code Debugger

You are a code expert. I need you to debug the following code. Below are the code originally generated by the model and the error information that occurred during code execution. Please output ONLY the corrected Python code, without any explanation or markdown formatting:
**Inputs:**
**Original Code:** {code}
**Execution Error:** {error}
Output the corrected Python code only, without any explanation or markdown formatting:

D.3 Instructions for Reasoning Chain Refiner

Instruction for Reasoning Chain Refiner

You are an expert in response refinement. Given a prompt and its corresponding response, your task is to compress and restructure the response by removing redundant, repetitive, or irrelevant content. Preserve all key information needed to directly and accurately address the original prompt. Only output your revised response and do not output anything else.
**Original Prompt:** {prompt}
**Original Response:** {response}
**Revised Response:**

D.4 Instructions for Standard RAG

Instruction for Standard RAG

Find the useful content from the provided documents, then answer the question. Answer the question directly. Your response should be very concise. Please provide use 'So the final answer is:' as a prefix for the final answer. The following are given documents.
Documents: {reference}
Question: {question}

D.5 Instructions for Knowledge-Intensive Reasoning Model

Instruction for Sampling and Inferencing Knowledge-Intensive Datasets

You are a helpful assistant that can solve the given question step by step with the help of the python interpreter tool. Given a question, you need to first think about the reasoning process in the mind and then provide the answer.
During thinking, you can invoke the python interpreter tool to calculate the math problem for fact information about specific topics if needed. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags respectively. For example, <think> This is the reasoning process. </think> <python> python code here </python> <result> python interpreter result here </result> <think> This is the reasoning process. </think> <answer> The final answer is answer here </answer>. In the last part of the answer, the final exact answer is enclosed within \boxed{} with latex format.

D.6 Instructions for Computational Reasoning Model

D.6.1 Instructions for Direct Sampling and Reasoning Computational Datasets

Instruction for Sampling and Reasoning Computational Datasets

A conversation between User and Assistant. The user asks a question, and the Assistant solves it. Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \boxed{}.
{question}

D.6.2 Instructions for Direct Sampling and Reasoning Knowledge-Intensive Datasets

Instructions for Sampling and Reasoning Knowledge-Intensive Datasets

A conversation between User and Assistant. The user asks a question, and the Assistant solves it. Please refer to the given document to solve the problem above, and put your final answer within \boxed{}.
Question: {question}
Reference documents: {reference}

E Case Study

E.1 TIR Cases of Tool-Star

To better understand how Tool-Star performs tool-based collaborative
reasoning, we conducted a case study to clearly illustrate its reasoning process. The model-generated thinking processes are enclosed within <think> and </think>. The model-generated search queries are enclosed within <search> and </search>, and the model-generated Python code is enclosed within <python> and </python>. Search results and Python execution results are enclosed within <result> and </result>, and final answers within <answer> and </answer>. Our analysis reveals the following: In terms of task difficulty, GAIA consists of complex deep web exploration tasks that typically involve both information retrieval and reasoning. When handling such tasks, Tool-Star often adopts a multi-tool collaboration strategy to complete the subtasks (as shown in Table 6). In contrast, for domain-specific datasets like HotpotQA and AIME, which are more focused on knowledge retrieval or mathematical computation, Tool-Star can accurately identify the task type and select the most appropriate single tool for problem solving (as shown in Tables 7 & 8). These observations demonstrate that Tool-Star can flexibly choose between tool combinations or single tools based on task requirements, showing strong task understanding and tool-selection strategies. This highlights the effectiveness and autonomy of our approach in multi-tool collaborative reasoning.

E.2 Cases of Self-Critic DPO Data

To more comprehensively illustrate the data format used in our self-critic reinforcement learning framework, we provide several reward data examples (positive & negative pairs). As shown in Tables 10 to 16, the question and reasoning chain in each example are consistent with those in the vanilla reinforcement learning data, ensuring no impact on model performance during subsequent training. The only difference in our approach lies in the addition of two key fields after the original reasoning chain: Reward principle and Reward score.
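A rule-based reward function of this kind can be sketched as follows. This is our own hypothetical illustration, not the authors' implementation: the tag set, scoring values, and exact-match check are assumptions that mirror the described -1 format penalty and the Reward principle/score fields.

```python
import re

TOOL_TAGS = ["think", "search", "python", "result", "answer"]

def format_reward(output: str, gold_answer: str):
    """Return (score, principle) for one rollout (hypothetical sketch)."""
    # Rule 1: every special opening tag must have a matching closing tag;
    # a mismatch is the formatting inconsistency penalized with -1.
    for tag in TOOL_TAGS:
        opens = len(re.findall(f"<{tag}>", output))
        closes = len(re.findall(f"</{tag}>", output))
        if opens != closes:
            return -1.0, (f"Special token mismatch: <{tag}> appears "
                          f"{opens} time(s) but </{tag}> {closes} time(s).")
    # Rule 2: a well-formed rollout contains exactly one answer span.
    answers = re.findall(r"<answer>(.*?)</answer>", output, flags=re.S)
    if len(answers) != 1:
        return -1.0, f"Expected exactly one <answer> span, found {len(answers)}."
    # Rule 3: stand-in exact-match answer check.
    if gold_answer.strip() in answers[0]:
        return 1.0, "Format valid and answer matches the reference."
    return 0.0, "Format valid but answer does not match the reference."

good = "<think>ok</think> <answer>The final answer is 56000</answer>"
bad = "<think>ok</think> <answer>56000"  # missing closing tag
print(format_reward(good, "56000"))  # score 1.0: well-formed and correct
print(format_reward(bad, "56000"))   # score -1.0: special-token mismatch
```

Returning the principle string alongside the numeric score is what makes the reward interpretable: the negative pair in a DPO example can carry an explanation of exactly which rule it violated.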
The Reward principle is automatically generated by a rule-based reward function, which identifies the specific mismatches leading to the score, aiming to provide interpretability for the reward. For instance, in one case, a reward score of -1 is assigned because the number of special tokens in the model output does not align with the reference, resulting in a formatting inconsistency.

F Limitations and Future Directions

Despite our best efforts to optimize the overall Tool-Star pipeline, there remain certain limitations and areas for improvement that warrant further exploration.

• Expanding Tool Diversity: Currently, the Tool-Star framework integrates three tools during training and three tools during inference. Compared to prior work that relies on a single tool, our approach represents a pioneering and timely attempt. However, given the growing diversity of available tools, we believe many more remain to be explored for enhancing reasoning capabilities. For instance, incorporating vision-language models (VLMs) as external tools could further unlock visual understanding, and adopting the Model Context Protocol may enable more flexible tool invocation. We plan to continue expanding the types and integration of tools in future work, advancing multi-tool collaborative reasoning.

• Scaling Backbone Model Parameters: Due to limited computational resources and the time-consuming rollout process in multi-tool reasoning, this work focuses on models with 0.5B, 1.5B, and 3B parameters for RL training and evaluation. Nonetheless, our "Scaling Analysis on Parameter Scales" (see Table
4) provides preliminary evidence of the Tool-Star framework's scalability. In future work, we aim to extend our experiments to larger models, such as 7B and 32B, to assess generalization across more complex tasks and model capacities.

Outlook: Achieving reliable and robust step-by-step reasoning remains a challenging task that demands continued research and innovation. We sincerely hope to collaborate with researchers in the fields of alignment and reasoning to jointly address these shared challenges and further advance tool-integrated reasoning systems.

G Broader Impacts

This study presents the Tool-Star framework, which empowers large language models with multi-tool collaborative reasoning, enhancing their adaptability and effectiveness in complex tasks. Through reinforcement learning and scalable data synthesis, Tool-Star offers a generalizable and flexible reasoning paradigm, demonstrating strong performance across benchmarks and broad potential in domains such as education, science, and engineering. However, as models gain greater autonomy in tool use, concerns arise regarding inappropriate tool selection, opaque reasoning, and bias propagation, especially in high-stakes scenarios. To ensure responsible deployment, future development should prioritize controllability, interpretability, and robust oversight, aligning technological progress with ethical standards and societal values.

Table 6: An example from Tool-Star on the GAIA dataset, with special symbols used in think content, search queries, Python code, returned results, and final answer highlighted with purple box, green box, blue box, and red box, respectively.

Example #1 from GAIA Dataset

Question: The longest-lived vertebrate is named after an island. According to Wikipedia, what is the 2023 population of that island, to the nearest thousand?
Labeled Answer: 56000

Model Output by Tool-Star

<think> To find out which island this refers to and its corresponding population from 2020, I will start off by identifying the longest-lived vertebrate mentioned in Wikipedia. </think> <search> search query: What is the longest-lived vertebrate according to Wikipedia? </search> <result> An international team of scientists led by the University of Copenhagen and including the University of Oxford has found that the Greenland shark has a life expectancy of at least 272 years. This discovery shows it is the longest living vertebrate known to science, exceeding even bowhead whales, turtles and tortoises. "Our results demonstrate that the Greenland shark is among the longest-lived vertebrate species, surpassing even the bowhead whale (Balaena mysticetus, estimated longevity of 211 years)," Dr. Nielsen and co-authors said. "The life expectancy of the Greenland shark is exceeded only by that of the ocean quahog (Arctica islandica, 507 years)." </result> <think> Now having identified the longest-lived vertebrate - the Greenland shark, my next task would be finding out more details regarding the associated island specifically referred to in the context. Since the name "Greenland" directly connects to the island, let me proceed further with determining the population data related to this location. </think> <search> search query: Population of Greenland as per the latest available statistics </search> <result> The current population of Greenland is 55,764 as of April 26, 2025, based on interpolation of the latest United Nations data. The population of Greenland is projected at 55,745, or 56k, as of July 1, 2025. The total population in Greenland is projected at
55,840, or 56k people, for the year 2024. (***More reasoning contents are omitted here***) After the 1960s, the population in the towns grew rapidly. As of January 1, 2023, the population of Greenland amounted to 56,609. The majority of the population held Danish citizenship, as the country is an autonomous region of the Kingdom of... As of July 1, 2024 the population of Greenland reached 55,840, of which 29,360 people are male and 26,480 female. </result> <think> Having gathered sufficient evidence through searching, it appears clear that the relevant piece of information comes from the statement mentioning the population of Greenland as of January 1, 2023 being approximately 56,609. Considering the requirement asks for rounding off to the nearest thousand, we should round this figure accordingly. Let's finalize the calculation and presentation of the result using Python code for accuracy. </think> <python> # Given data population = 55840 # Rounding to the nearest thousand rounded_population = round(population, -3) print(rounded_population) </python> <result> 56000 </result> <think> After performing the necessary calculations and ensuring alignment with the required precision level, the estimated population of Greenland to the nearest thousand is determined. </think> <answer> 56000 </answer>

Table 7: An example from Tool-Star on the HotpotQA dataset, with special symbols used in think content, search queries, returned results, and the final answer highlighted with purple, green, and red boxes, respectively.

Example #2 from HotpotQA Dataset

Question: D1NZ is a series based on what oversteering technique?

Labeled Answer: Drifting

Model Output by Tool-Star

<think> Okay, so I need to figure out what D1NZ is a series based on an oversteering technique. Hmm, first off, I'm not too familiar with "D1NZ." Let me start by breaking down the term. The name might be related to some racing or driving terminology.
The acronym could stand for something like “Driving 1 NZ” or maybe it’s a play on words. Oversteer is when the rear wheels lose traction, causing the car to slide outward. Common techniques include understeer (front losing grip) and oversteer. But how does this relate to a series? I should probably look up what D1NZ refers to. Maybe it’s a motorsport series? Like rallycross or drag racing? Alternatively, perhaps it’s a video game series involving cars where oversteer is a key mechanic. Wait, there was a game called “Need for Speed: Most Wanted,” but that’s more about drifting. Or maybe “Forza Horizon”? Not sure. Alternatively, “D1NZ” might be an abbreviation used in real-world racing categories. Let me try searching for “D1NZ oversteering technique.” That seems direct. Wait, another thought: sometimes series have names combining letters with numbers or other codes. Maybe D1 stands for something else. For example, in some contexts, D1 could refer to a division number. If it’s a racing series, maybe D1 is a category, and NZ indicates New Zealand. So perhaps D1NZ is a New Zealand-based racing series focused on oversteer techniques. Let me structure my searches properly. First, check what D1NZ is. Search terms could be “What is D1NZ racing series?” or “D1NZ oversteering technique.” So I’ll start with | https://arxiv.org/abs/2505.16410v1 |
the first search query: </think> <search> D1NZ racing series oversteering technique </search> <result> "Drifting (motorsport)" Drifting (motorsport) Drifting is a driving technique where the driver intentionally oversteers, with loss of traction in the rear wheels or all tires, while maintaining control and driving the car through the entirety of a corner. Car drifting is caused when the rear slip angle is greater than the front slip angle, to such an extent that often the front wheels are pointing in the opposite direction to the turn (e.g. car is turning left, wheels are pointed right or vice versa, also known as opposite lock or counter-steering). As a motoring discipline, drifting competitions were first popularized in the 1970s (***More reasoning contents are omitted here***) of Driftworks embarked on a journey to Japan where they would shoot a documentary about the local drifting scene, which would become known to the public as "Outsiders". Drifting (motorsport) Drifting is a driving technique where the driver intentionally oversteers, with loss of traction in the rear wheels or all tires, while maintaining control and driving the car through the entirety of a corner. Car drifting is caused when the rear slip angle is greater than the front slip angle, to such an extent that often the front wheels are pointing in the opposite direction to the turn (e.g. car </result> <think> From the search results, it looks like D1NZ is a New Zealand drift racing series. The description mentions that it's a production car drifting series, focusing on oversteer techniques. Specifically, the explanation talks about intentional oversteer maneuvers where the driver loses traction in the rear wheels, allowing the car to spin or slide around corners. This aligns with the concept of drift racing, which relies heavily on oversteer to execute controlled slides and spins.
Therefore, the oversteering technique associated with D1NZ must involve intentionally inducing rear-wheel loss of traction to achieve controlled drifts during races or competitions. </think> <answer> The final answer is drifting </answer>

Table 8: An example from Tool-Star on the AIME24 dataset.

Example #3 from AIME24 Dataset

Question: Every morning Aya goes for a 9-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of s kilometers per hour, the walk takes her 4 hours, including t minutes spent in the coffee shop. When she walks at s+2 kilometers per hour, the walk takes her 2 hours and 24 minutes, including t minutes spent in the coffee shop. Suppose Aya walks at s+1/2 kilometers per hour. Find the number of minutes the walk takes her, including the t minutes spent in the coffee shop.

Labeled Answer: 204

Model Output by Tool-Star

<think> To solve this problem, we need to set up a system of equations based on the information provided. Let's break down the problem step-by-step. 1. **Define Variables:** - s: Aya's walking speed in kilometers per hour. - t: Time spent in the coffee shop in minutes (which needs to be converted to hours since the other times are in hours). - Distance walked each time: 9 kilometers. 2. **Formulate Equations Based on Given Information:** -
When Aya walks at s km/h, the total time taken is 4 hours, which includes t minutes spent in the coffee shop. Converting t minutes to hours, we get t/60. So the equation becomes: 9/s + t/60 = 4. - When Aya walks at s+2 km/h, the total time taken is 2 hours and 24 minutes, which is 2 + 24/60 = 2.4 hours. Converting t minutes to hours again, we get t/60. So the equation becomes: 9/(s+2) + t/60 = 2.4. 3. **Solve the System of Equations:** We now have two equations: 9/s + t/60 = 4 and 9/(s+2) + t/60 = 2.4. Let's use Python to solve these equations. </think> <python> from sympy import symbols, Eq, solve # Define the variables s, t = symbols('s t') # Define the equations based on the given information eq1 = Eq(9/s + t/60, 4) eq2 = Eq(9/(s + 2) + t/60, 2.4) # Solve the system of equations solution = solve((eq1, eq2), (s, t)) print(solution) </python> <result> (−4.50000000000000, 360.000000000000), (2.50000000000000, 24.0000000000000) </result> <think> (***More reasoning contents are omitted here***) </think> <answer> The final answer is 204 </answer>

Table 9: Example 1 with positive output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, Python code, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #1 from Training Dataset

Question: Find the sum 1/(1·3) + 1/(3·5) + ··· + 1/((2n−1)(2n+1)) + ··· + 1/(255·257). The original answer is in k/m format, where the fraction is in its simplest form. Please give the value of k+m.

Labeled Answer: 385

Positive Output of Tool-Star

<think> Okay, so I need to find the sum of this series: 1/(1·3) + 1/(3·5) + ··· + 1/(255·257). Hmm, the general term seems to be 1/[(2n−1)(2n+1)] and we are summing up to some n where 2n−1 = 255. Let me figure out how many terms there are first.
If the last term is 1/(255·257), then 2n−1 = 255, so solving for n: </think> <python> n = (255 + 1) // 2 print(n) </python> <result> 128 </result> <think> So there are 128 terms in total. Now, I remember that for fractions like 1/[(2n−1)(2n+1)], partial fractions might be helpful here. Let me try to decompose the general term. Suppose that: 1/((2n−1)(2n+1)) = A/(2n−1) + B/(2n+1). Multiplying both sides by (2n−1)(2n+1) gives: 1 = A(2n+1) + B(2n−1) ***More reasoning contents are omitted here*** <python> total = (1 - 1/257) / 2 numerator = total.numerator denominator = total.denominator print(f"{numerator}/{denominator}") </python> <result> 128/257 </result> <think> So the sum is 128/257. Let me check if this can be simplified. </think> <python> from fractions import Fraction frac = Fraction(128, 257) print(f"Simplified: {frac.numerator}/{frac.denominator}") </python> <result> Simplified: 128/257 </result> <think> Since 128 and 257 share no common divisors other than 1, the fraction is already in simplest form. Therefore, the answer is 128/257, and k+m = 128 + 257 = 385. </think> <answer> 385 </answer>

Reward Principle of Output: The response format is correct. The final answer is correct. The reasoning chain contains single tool usage.

Reward Score of Output
So the reward score is: 1

Table 10: Example 1 with negative output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, Python code, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #1 from Training Dataset

Question: Find the sum 1/(1·3) + 1/(3·5) + ··· + 1/((2n−1)(2n+1)) + ··· + 1/(255·257). The original answer is in k/m format, where the fraction is in its simplest form. Please give the value of k+m.

Labeled Answer: 385

Negative Output of Tool-Star

<think> Okay, so I have this series to compute: 1/(1·3) + 1/(3·5) + ··· + 1/((2n−1)(2n+1)) + ··· + 1/(255·257). And the problem says that the answer should be in the form k/m, which is already simplified, and we need to find k+m. Alright, let's start breaking this down. First, notice that each term in the series has the denominator as a product of two consecutive odd numbers. Specifically, the general term seems like 1/((2n−1)(2n+1)). Maybe there's a way to simplify this term using partial fractions? Let me recall: when dealing with denominators like (a−b)(a+b), we can use partial fractions to split it into simpler terms. So, 1/((2n−1)(2n+1)) = A/(2n−1) + B/(2n+1). To find A and B, I'll combine the right-hand side over a common denominator: (A(2n+1) + B(2n−1))/((2n−1)(2n+1)) = 1/((2n−1)(2n+1)). So the numerator must equal 1: A(2n+1) + B(2n−1) = 1. Expanding this gives: 2An + A + 2Bn − B = 1, which simplifies to (2A+2B)n + (A−B) = 1. For this equation to hold true for all n, the coefficients of n and the constant term must both match on both sides. Therefore, we get the system of equations: (1) 2A + 2B = 0, and (2) A − B = 1. From equation (1), we get A = −B. Substituting into equation (2): −B − B = 1 implies −2B = 1, so B = −1/2 and A = 1/2. Therefore, the partial fraction decomposition is: 1/((2n−1)(2n+1)) = (1/2)(1/(2n−1) − 1/(2n+1)). This is much cleaner!
Each term can be rewritten as half the difference between reciprocals of adjacent odd numbers. Now, the entire series becomes: ∑_{n=1}^{128} 1/((2n−1)(2n+1)) = (1/2) ∑_{n=1}^{128} (1/(2n−1) − 1/(2n+1)). This looks like a telescoping series! When expanded, many terms will cancel out. Let's verify: when written out, the series is 1/(1·3) + 1/(3·5) + ··· + 1/(255·257), and using the telescoping identity, this becomes: </think> <python> # Calculate the sum using telescoping series formula total_terms = 128 term_start = 1 term_end = 257 # Compute the telescoping sum telescoping_sum = (1 / term_start) - (1 / term_end) print(f"Telescoping sum: {telescoping_sum}") </python> <result> Telescoping sum: 0.9961089494163424 </result> <think> The result from the code execution shows the telescoping sum converges to approximately 0.9961089494163424, which is very close to 256/257. Thus, the sum simplifies to 256/257. Checking the code confirms this approximation is accurate enough, considering floating-point precision limitations. Finally, since 256 and 257 are coprime (their greatest common divisor is 1), the fraction 256/257 is already in its simplest form. Hence, k = 256 and m = 257, leading to k+m = 513. </think> <answer> 513 </answer>

Reward Principle of Output: The response format is correct. The answer is incorrect.

Reward Score of Output: So the reward score
is: 0

Table 11: Example 2 with positive output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, Python code, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #2 from Training Dataset

Question: How many four-digit positive integers are multiples of 7?

Labeled Answer: 1286

Positive Output of Tool-Star

<think> To determine how many four-digit positive integers are multiples of 7, we consider all integers from 1000 to 9999 and find those divisible by 7. The smallest four-digit number is 1000. Since 1000 ÷ 7 ≈ 142.857, the smallest multiple of 7 greater than or equal to 1000 is 143 × 7 = 1001. Similarly, the largest four-digit number is 9999. Since 9999 ÷ 7 ≈ 1428.42857, the largest multiple of 7 less than or equal to 9999 is 1428 × 7 = 9996. These numbers form an arithmetic sequence starting at 1001 and ending at 9996, with a common difference of 7. The total number of terms is given by the formula n = (a_n − a_1)/d + 1, where a_1 = 1001, a_n = 9996, and d = 7. Substituting, we get n = (9996 − 1001)/7 + 1 = 8995/7 + 1 = 1285 + 1 = 1286. </think> <python> # Calculate the smallest four-digit multiple of 7 smallest_multiple = (1000 + 6) // 7 * 7 # Calculate the largest four-digit multiple of 7 largest_multiple = (9999 // 7) * 7 # Calculate the number of four-digit multiples of 7 number_of_multiples = (largest_multiple - smallest_multiple) // 7 + 1 print(number_of_multiples) </python> <result> 1286 </result> <answer> The number of four-digit positive integers that are multiples of 7 is 1286. </answer>

Reward Principle of Output: The response format is correct. The final answer is correct. The reasoning chain contains single tool usage.
Reward Score of Output: So the reward score is: 1

Table 12: Example 2 with negative output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, Python code, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #2 from Training Dataset

Question: How many four-digit positive integers are multiples of 7?

Labeled Answer: 1286

Negative Output of Tool-Star

<think> Okay, so I need to figure out how many four-digit positive integers are multiples of 7. A four-digit number ranges from 1000 to 9999. The smallest four-digit multiple of 7 should be 1001, since 1000 ÷ 7 ≈ 142.857, and rounding up gives 143 × 7 = 1001. The largest one would be 9996, since 9999 ÷ 7 ≈ 1428.428, and rounding down gives 1428 × 7 = 9996. So, we have the range starting at 1001 and ending at 9996 inclusive. Now, the problem reduces to finding the number of terms in an arithmetic sequence where the first term is 1001, the common difference is 7, and the last term is 9996. The formula for the nth term of an arithmetic sequence is a_n = a_1 + (n−1)d, where a_n is the last term, a_1 is the first term, d is the common difference, and n is
the number of terms. Solving for n, we get n = (a_n − a_1)/d + 1. Substituting the values a_n = 9996, a_1 = 1001, and d = 7, we get n = (9996 − 1001)/7 + 1 = 8995/7 + 1 = 1285 + 1 = 1286. </think> <python> <python> # Calculate the number of terms in the arithmetic sequence first_term = 1001 last_term = 9998 common_difference = 7 n = ((last_term - first_term) // common_difference) + 1 print(n) </python> <result> 1286 </result> <answer> The calculation shows there are 1286 four-digit multiples of 7 between 1000 and 9999. </answer>

Reward Principle of Output: The response format is incorrect, <python> and </python> are not matched.

Reward Score of Output: So the reward score is: -1

Table 13: Example 3 with positive output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, search queries, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #3 from Training Dataset

Question: The main symptoms of overtraining syndrome in athletes include which of the following? A. Improved Fitness B. Mental Clarity C. Muscle Strength D. Weight Gain

Labeled Answer: D

Positive Output of Tool-Star

<think> Okay, so the question is about the main symptoms of overtraining syndrome in athletes, and the options are A through D. Overtraining syndrome occurs when an athlete trains excessively without adequate recovery, leading to negative effects on performance and health. The key is to identify which of the given options are consistent with known symptoms. Option A is Improved Fitness, which contradicts the nature of overtraining, since performance typically declines. Option B is Mental Clarity, but overtraining is associated with mental fatigue, irritability, and poor concentration, so mental clarity is unlikely.
Option C is Muscle Strength, but chronic overtraining impairs recovery and can lead to decreased strength rather than improvement. Option D is Weight Gain, which at first seems counterintuitive, as overtraining might lead to weight loss due to increased energy expenditure. However, hormonal imbalances and stress responses could result in fluid retention or metabolic changes that cause weight gain. To confirm, I should search for the main symptoms of overtraining syndrome. </think> <search> overtraining syndrome main symptoms </search> <result> Overtraining syndrome main symptoms include: fatigue, persistent muscle soreness, unexplained weight loss or gain, increased susceptibility to illness and injury, gastrointestinal issues, sleep disturbances, elevated resting heart rate, decreased maximal heart rate during exercise, psychological symptoms like anxiety, irritability, depression, burnout, mood swings, emotional exhaustion, and performance decline. These symptoms worsen with inadequate recovery and may require medical evaluation. </result> <think> Based on the search result, overtraining syndrome includes both physical and psychological symptoms. Among the physical symptoms, unexplained weight gain is explicitly mentioned, making Option D valid. Option A (Improved Fitness) contradicts the performance decline seen in overtraining. Option B (Mental Clarity) is inconsistent with the psychological symptoms like irritability and burnout. Option C (Muscle Strength) is unlikely, as overtraining typically leads to decreased strength due to insufficient recovery. Since weight gain is a listed symptom, and the | https://arxiv.org/abs/2505.16410v1 |
others are not, the correct answer is D. </think> <answer> D </answer>

Reward Principle of Output: The response format is correct. The final answer is correct. The reasoning chain contains single tool usage.

Reward Score of Output: So the reward score is: 1

Table 14: Example 3 with negative output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, search queries, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #3 from Training Dataset

Question: The main symptoms of overtraining syndrome in athletes include which of the following? A. Improved Fitness B. Mental Clarity C. Muscle Strength D. Weight Gain

Labeled Answer: D

Negative Output of Tool-Star

<think> In this case study, we are tasked with identifying the main symptom of overtraining syndrome among four options: A through D. Overtraining syndrome occurs when an athlete engages in excessive training without adequate recovery, leading to a decline in performance and various physical and psychological issues. Let's analyze each option: Option A: "Improved Fitness" — This is generally a result of balanced training and recovery. However, overtraining leads to a plateau or decline in fitness, not improvement. Therefore, improved fitness is not a symptom of overtraining. Option B: "Mental Clarity" — Overtraining is associated with mood disturbances, irritability, anxiety, and difficulty concentrating. These symptoms reflect a decline in mental clarity, making "mental clarity" an unlikely symptom. Option C: "Muscle Strength" — While proper training enhances muscle strength, overtraining without sufficient recovery can lead to muscle breakdown and decreased strength. Thus, increased muscle strength is not a symptom of overtraining. Option D: "Weight Gain" — This is more nuanced.
Overtraining can sometimes lead to hormonal imbalances, particularly elevated cortisol, which may result in fluid retention or increased fat storage. Additionally, some athletes may overeat in response to chronic fatigue or stress, potentially leading to weight gain. While not the most common symptom, weight gain can occur in certain cases. ***More reasoning contents are omitted here, over 15K tokens*** To verify, key symptoms of overtraining syndrome include: persistent fatigue, decreased performance, mood changes (e.g., depression, irritability), sleep disturbances, elevated resting heart rate, frequent illnesses, and hormonal imbalances. None of the options perfectly match these core symptoms, but among the given choices, weight gain (D) is the most plausible indirect symptom, while the others describe positive outcomes that are contrary to overtraining effects. Therefore, based on the evidence and reasoning, the best answer is: </think> <answer> D </answer>

Reward Principle of Output: The response format is incorrect, the response is over the maximum length.

Reward Score of Output: So the reward score is: -1

Table 15: Example 4 with positive output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, search queries, Python code, returned results and the final answer are highlighted with purple, green, blue and red boxes, respectively.

Example #4 from Training Dataset

Question: We
now define an algorithm: The definition of a(n) is: the nearest integer to the imaginary part of the n-th nontrivial zero of the Riemann zeta function, where the zeros are listed in order of increasing positive imaginary parts. Given the input list: x_list = [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], determine the corresponding output sequence y_list. Which of the following options is correct? Option A: [145, 148, 149, 152, 153, 155, 158, 160, 161, 163] Option B: [143, 146, 147, 150, 151, 153, 156, 158, 159, 161] Option C: [147, 150, 151, 154, 155, 157, 160, 162, 163, 165] Option D: [139, 142, 143, 146, 147, 149, 152, 154, 155, 157]

Labeled Answer: B

Positive Output of Tool-Star

<think> To find the correct sequence, we need to know the nearest integers to the imaginary parts of the 50th through 59th nontrivial zeros of the Riemann zeta function. I will search for this information or use a reliable source to find these values. </think> <search> nearest integer to the imaginary part of the 50th through 59th nontrivial zeros of the Riemann zeta function </search> <result> </result> <think> It seems like the direct search did not yield a straightforward list of the required values. To get the exact values, I will use a Python library that can compute the zeros of the Riemann zeta function, such as mpmath. I will use the mpmath library to compute the 50th through 59th nontrivial zeros of the Riemann zeta function and extract the nearest integers to their imaginary parts.
</think> <python> import mpmath # Set the precision mpmath.mp.dps = 50 # Function to get the nearest integer to the imaginary part of the n-th zero def nearest_integer_imaginary_part(n): zero = mpmath.zetazero(n) return round(zero.imag) # Input list x_list = [50, 51, 52, 53, 54, 55, 56, 57, 58, 59] # Compute the output sequence y_list = [nearest_integer_imaginary_part(n) for n in x_list] # Display the result y_list </python> <result> [143, 146, 147, 150, 151, 153, 156, 158, 159, 161] </result> <think> The computed sequence of the nearest integers to the imaginary parts of the 50th through 59th nontrivial zeros of the Riemann zeta function is [143, 146, 147, 150, 151, 153, 156, 158, 159, 161]. This matches option B. </think> <answer> The correct answer is B. </answer>

Reward Principle of Output: The response format is correct. The final answer is correct. The reasoning chain contains multiple tool usage.

Reward Score of Output: So the reward score is: 1.1

Table 16: Example 4 with negative output of our self-critic reward finetuning dataset, including the question, model output, corresponding reward principle and reward score. The special symbols used in think content, Python code, returned results and the final answer are highlighted with purple, blue and red boxes, respectively.

Example #4 from Training Dataset

Question: We now define an algorithm: The definition of a(n) is: the nearest integer to the imaginary part of the n-th nontrivial zero of the Riemann zeta function, where the zeros are listed in order of increasing positive imaginary parts. Given the input list: x_list = [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], determine the corresponding output sequence y_list. Which of the following options is correct?
Option A: [145, 148, 149, 152, 153, 155, 158, 160, 161, 163] Option B: [143, 146, 147, 150, 151, 153, 156, 158, 159, 161] Option C: [147, 150, 151, 154, 155, 157, 160, 162, 163, 165] Option D: [139, 142, 143, 146, 147, 149, 152, 154, 155, 157]

Labeled Answer: B

Negative Output of Tool-Star

<think> To solve this, I need to compute the imaginary parts of the
50th through 59th nontrivial zeros of the Riemann zeta function and round them to the nearest integers. I will use the mpmath library to perform these computations. </think> <python> import mpmath # Set high precision for accurate computation mpmath.mp.dps = 50 # Decimal places of precision # Function to get the nearest integer to the imaginary part of the n-th nontrivial zero of the Riemann zeta function def nearest_integer_imag_part(n): zero = mpmath.zetazero(n) return round(zero.imag) # List of indices for which to compute the zeros x_list = list(range(50, 60)) # Compute the nearest integer imaginary parts for the given indices y_list = [nearest_integer_imag_part(n) for n in x_list] # Output the result print(y_list) </python> <result> [143, 146, 147, 150, 151, 153, 156, 158, 159, 161] </result> <think> The computed values are [143, 146, 147, 150, 151, 153, 156, 158, 159, 161]. However, due to a misreading of the result, I will now incorrectly match this to option A instead of the correct option B. </think> The correct answer is A.

Reward Principle of Output: The response format is incorrect. <answer> and </answer> are not matched.

Reward Score of Output: So the reward score is: -1
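The reward scores quoted in the preceding tables follow a simple rule-based pattern: a format violation scores -1, a well-formed response with a wrong answer scores 0, a correct answer scores 1, and a correct answer using more than one tool type earns a 1.1 bonus. A minimal sketch of such a rule-based reward function is below; the function name, tag set, length threshold, and substring-based answer match are our own illustrative assumptions, not the released Tool-Star implementation:

```python
import re

def rule_based_reward(output: str, labeled_answer: str, max_len: int = 16384) -> float:
    """Toy rule-based reward in the spirit of Tables 9-16 (a sketch, not the paper's code)."""
    # Length check: over-long responses count as a format error.
    if len(output) > max_len:
        return -1.0
    # Format check: every special tag must open and close the same number of times.
    for tag in ("think", "search", "python", "answer"):
        if output.count(f"<{tag}>") != output.count(f"</{tag}>"):
            return -1.0
    # Answer check: a well-formed response with a wrong answer scores 0.
    # (Substring containment is a simplification of real answer matching.)
    m = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    if m is None or labeled_answer not in m.group(1):
        return 0.0
    # Tool-use bonus: correct answers that invoked more than one tool type score 1.1.
    tools_used = sum(1 for t in ("search", "python") if f"<{t}>" in output)
    return 1.1 if tools_used > 1 else 1.0
```

Checking the function against the tables: the unmatched `<python> <python>` of Table 12 returns -1, the wrong telescoping answer of Table 10 returns 0, and the search-plus-Python trace of Table 15 returns 1.1.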
arXiv:2505.16415v1 [cs.CL] 22 May 2025

Attributing Response to Context: A Jensen–Shannon Divergence Driven Mechanistic Study of Context Attribution in Retrieval-Augmented Generation

Ruizhe Li1∗ Chen Chen2 Yuchen Hu2 Yanjun Gao4 Xi Wang5 Emine Yilmaz3
1University of Aberdeen 2Nanyang Technological University 3University College London 4University of Colorado Anschutz Medical Campus 5University of Sheffield

Abstract

Retrieval-Augmented Generation (RAG) leverages large language models (LLMs) combined with external contexts to enhance the accuracy and reliability of generated responses. However, reliably attributing generated content to specific context segments, context attribution, remains challenging due to the computationally intensive nature of current methods, which often require extensive fine-tuning or human annotation. In this work, we introduce a novel Jensen–Shannon Divergence driven method to Attribute Response to Context (ARC-JSD), enabling efficient and accurate identification of essential context sentences without additional fine-tuning or surrogate modelling. Evaluations on a wide range of RAG benchmarks, such as TyDi QA, Hotpot QA, and Musique, using instruction-tuned LLMs at different scales demonstrate superior accuracy and significant computational efficiency improvements compared to the previous surrogate-based method. Furthermore, our mechanistic analysis reveals specific attention heads and multilayer perceptron (MLP) layers responsible for context attribution, providing valuable insights into the internal workings of RAG models.

1 Introduction

Retrieval-Augmented Generation (RAG), leveraging large language models (LLMs), has demonstrated significant potential in both academic research [21, 38, 23] and industrial applications [34, 12] by enhancing the accuracy and grounding of generated responses through external contexts such as provided documents or retrieved articles online.
A key benefit of RAG lies in its ability to mitigate hallucination by explicitly attributing generated responses to specific segments of the provided context, known as context attribution² [28, 20, 5, 3]. Nevertheless, verifying the extent to which generated responses are genuinely grounded in their cited context remains a challenging task. Current approaches frequently rely heavily on human annotation [18, 22] or computationally expensive methods such as model fine-tuning and gradient-based feature attribution for accurate attribution [37, 20, 3], particularly when dealing with extensive documents. For instance, Qi et al. [20] utilised distribution shifts between responses generated with and without context to identify relevant tokens and employed gradient-based feature attribution to pinpoint context relevance. Similarly, Chuang et al. [3] enhanced context attribution accuracy through reward-driven fine-tuning within a Direct Preference Optimisation (DPO) framework, based on probability drop and hold analysis of model outputs under context ablation.

∗Corresponding Author: ruizhe.li@abdn.ac.uk
²We use the term context attribution in this work; several different terms are used in this area, such as citation and self-citation.
Preprint. Under review.

To circumvent these computationally intensive methods, Cohen-Wang et al. [5] introduced an inference-time attribution mechanism premised on the assumption that if removing grounded context segments substantially reduces the probability of a generated response, those segments are deemed necessary. Conversely, if retaining only grounded segments maintains response probability, these segments are considered sufficient. By capturing hundreds of probability ablation variations per context-response pair, Cohen-Wang et al.
[5] trained a linear surrogate model on those hundreds of vectors, comprising the context segment masks and the corresponding generation probability of the original response, to
identify context segments crucial for grounding model responses. However, Cohen-Wang et al. [5] still need hundreds of forward calls to the RAG model to collect probability ablation samples for training the linear surrogate model. We propose a novel inference-time Jensen–Shannon Divergence driven method to Attribute Response to Context (ARC-JSD), building upon the inference-time attribution assumption above. Our method evaluates the divergence between response distributions generated under the full context and under sentence-ablated contexts, ranking context sentences by their JSD differences. This approach offers a significant computational advantage, as it eliminates the need for any additional fine-tuning or surrogate modelling. Furthermore, by using JSD to directly quantify the actual output distribution shift, ARC-JSD avoids missing or smoothing non-linearities, unlike the linear surrogate modelling of [5]. We empirically evaluate our JSD-driven context attribution approach across multiple question-answering benchmarks, i.e., TyDi QA [4], Hotpot QA [35], and MuSiQue [26], using state-of-the-art instruction-tuned LLMs including Qwen2-1.5B-Instruct, Qwen2-7B-Instruct [34], Gemma2-2B-Instruct, and Gemma2-9B-Instruct [25]. Our results not only demonstrate an average accuracy improvement of over 10% in context attribution but also achieve computational efficiency, with up to a three-fold speedup compared to the linear-surrogate-based method of Cohen-Wang et al. [5]. Moreover, we delve deeper into a mechanistic exploration of context attribution within RAG LLMs by integrating JSD-based analysis with the Logit Lens [19]. Through systematic probing, we identify specific attention heads and multilayer perceptron (MLP) layers critical for context attribution.
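As a minimal, self-contained sketch of the divergence measure at the heart of ARC-JSD (distributions are represented here as plain probability lists; the function names are ours, not the paper's), the Jensen–Shannon divergence between two discrete output distributions can be computed as:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (probability lists)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: the symmetrised, bounded KL to the mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions diverge by 0; disjoint ones by log(2).
print(jsd([0.5, 0.5], [0.5, 0.5]))           # 0.0
print(round(jsd([1.0, 0.0], [0.0, 1.0]), 6))  # 0.693147
```

Unlike raw KL, JSD is symmetric and bounded by log 2, which makes the per-sentence scores directly comparable across ablations.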
By subsequently analysing these attention heads and visualising how relevant knowledge is stored in the corresponding MLP layers, we provide concrete evidence of their essential roles in context attribution and further elucidate how contextually relevant information is encoded and utilised within the internal mechanisms of RAG models. In summary, our primary contributions include:

1. Developing a lightweight, JSD-driven context attribution method that accurately identifies context sentences critical for grounding generated responses, without requiring fine-tuning or surrogate modelling.
2. Proposing a versatile, computationally efficient solution that can be readily integrated into any existing RAG-based LLM framework to improve RAG model trustworthiness.
3. Conducting a detailed mechanistic analysis of RAG LLMs, systematically uncovering and validating specific attention heads and MLP layers responsible for context attribution behaviours.

2 Related Work

Context attribution for RAG. Prior works on context attribution mainly focus on teaching RAG LLMs to generate self-citations for responses, such as few-shot in-context learning [9] and instruction fine-tuning [36]. Some post-hoc works [2, 20] used an auxiliary language model or gradient-based feature attribution to locate relevant context segments. In general, those methods for context attribution are corroborative [31] in nature, as citations within context are evaluated on whether they support or imply a generated response. Meanwhile, Cohen-Wang et al. [5] and Chuang et al. [3], like our work, focus on contributive attribution methods, which identify whether citations cause RAG LLMs to generate a response. Chuang et al. [3] proposed reward-based fine-tuning with DPO to guide RAG LLMs towards context attribution, and Cohen-Wang et al. [5] further trained a linear surrogate model to identify context segments crucial for grounding model responses. However, compared
to [5, 3] and the corroborative methods above, our ARC-JSD method eliminates the need for any additional fine-tuning or surrogate modelling, and it can be directly integrated into any existing RAG-based LLM.

Mechanistic analysis for RAG. Existing mechanistic studies mainly focus on the next-token generation task to analyse the internal mechanisms of attention heads or MLPs, such as hallucination detection [8], multiple-choice questions [14, 30, 29], and knowledge editing [15, 17, 13]. Recently, Sun et al. [24] used a mechanistic interpretability method to analyse the attention heads and MLPs of RAG LLMs for the hallucination detection task. Compared to [24], which focuses on locating the sources of hallucinations, our proposed ARC-JSD can be regarded as a complementary method that locates citations within context segments and analyses the attention heads and MLPs that cause RAG LLMs to generate a correct response. Wu et al. [32] focus on mechanistically analysing retrieval attention heads of RAG LLMs under the Needle-in-a-Haystack (NIAH) setting, where they mainly evaluate whether retrieval attention heads conduct a copy-and-paste operation to move a semantically irrelevant "needle" sentence from the context into the model's outputs. Compared to [32], which restricts its mechanistic analysis to the NIAH setting where the model performs copy-and-paste retrieval, our work investigates how RAG LLMs mechanistically generate responses based on retrieved content through paraphrasing and contextual integration. This setting better reflects real-world RAG applications3, where models rarely copy text exactly but instead synthesise and rephrase information from retrieved sources.

3 Background

Problem Setup. Consider an autoregressive Transformer-based language model (LLM), denoted as P_LM(·). Under RAG settings, this model generates responses (R) based on an input query (Q) and associated context (C).
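As a concrete illustration of this setup (a hypothetical sketch: the sentence list paraphrases the example used later in Figure 1, and the helper name is ours), the sentence-level ablations of a context C can be built as:

```python
def ablate(context_sentences, i):
    """C_ABLATE(c_i): the context with sentence i removed."""
    return [s for j, s in enumerate(context_sentences) if j != i]

C = ["Amos and Wilfrid Ayre founded the company in 1918.",
     "Its yard had four berths.",
     "It built few vessels over 425 feet."]

# One ablated variant per sentence, so |C| ablated forward passes in total.
variants = [ablate(C, i) for i in range(len(C))]
print(len(variants))  # 3
print(variants[0])    # the two sentences that survive ablating c_1
```

Each variant keeps the query fixed and removes exactly one candidate sentence, which is the unit of attribution throughout the paper.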
Formally, the response generation process can be described as $R \sim P_{LM}(\cdot \mid C, Q)$, where the context $C$ consists of sentences $(c_1, c_2, \ldots, c_{|C|})$, the query $Q$ comprises tokens $(q_1, q_2, \ldots, q_{|Q|})$, and the generated response $R$ includes tokens $(r_1, r_2, \ldots, r_{|R|})$. Our analysis of context attribution focuses on how the entire response distribution changes when conditioned on the full context versus an ablated context alongside the query:

$$R \sim P_{LM}(\cdot \mid c_1, \ldots, c_{|C|}, Q), \qquad R \sim P_{LM}(\cdot \mid C_{\text{ABLATE}}(c_i), Q)$$

where $C_{\text{ABLATE}}(c_i) = C \setminus \{c_i\}$, $i \in \{1, \ldots, |C|\}$.

Internal Mechanisms of LLMs. Given a context-query pair represented by a sequence of $T$ tokens $(t_1, \ldots, t_T)$ drawn from a vocabulary $V$, tokens are initially encoded into $d$-dimensional vectors $x^0_i \in \mathbb{R}^d$ through an embedding matrix $W_E \in \mathbb{R}^{|V| \times d}$. An LLM typically consists of $L$ layers, each composed of attention and MLP modules. These modules iteratively transform token embeddings into residual streams at each layer, denoted $(x^\ell_1, \ldots, x^\ell_T)$, where $x^\ell_i \in \mathbb{R}^d$ represents the embedding of token $i$ at layer $\ell$. The residual streams serve as central hubs where attention and MLP modules read and write representations [7], following the update rule $x^\ell_i = x^{\ell-1}_i + a^\ell_i + m^\ell_i$, where $a^\ell_i$ and $m^\ell_i$ denote the contributions from the attention and MLP modules at layer $\ell$, respectively. At the final layer $L$, the next
token prediction probability distribution is computed as $P_{LM}(t_{T+1} \mid t_{1:T}) = \mathrm{Softmax}(W_U \, \sigma(x^L_T))$, where $W_U \in \mathbb{R}^{d \times |V|}$ is the unembedding matrix and $\sigma(\cdot)$ denotes pre-unembedding layer normalisation.

The attention module, comprising multiple attention heads, primarily modifies each token's residual stream representation by attending to prior tokens simultaneously: $a^\ell_i = \sum_{h=0}^{H} a^{\ell,h}_i$, where $a^{\ell,h}_i$ denotes each attention head's contribution to the residual stream at layer $\ell$. Further details of the attention mechanism are provided in Appendix C.

The MLP modules are often conceptualised as key-value memory structures [11, 7, 10, 6]. In these modules, columns of $W^\ell_{in}$ serve as keys, while rows of $W^\ell_{out}$ act as corresponding values. The input $x^{\ell-1}_i$ generates a coefficient vector $k^\ell_i = \gamma(W^\ell_{in} x^{\ell-1}_i) \in \mathbb{R}^{d_m}$ to weight the associated values in $W^\ell_{out}$: $m^\ell_i = \sum_{n=1}^{d_m} k^{\ell,n}_i v^{\ell,n}_i$. Further explanations of the MLP mechanisms are available in Appendix C.

Logit Lens. Logit lens [19] is a mechanistic interpretability method designed to analyse intermediate representations within autoregressive Transformers. Given the LLM architecture described above, logit lens leverages intermediate representations to quantify the direct contribution of attention heads ($a^{\ell,h}_i$), MLP outputs ($m^\ell_i$), and residual streams ($x^\ell_i$) to token logits:

$$\mathrm{logit}^{\ell,h}_i(a^{\ell,h}_i) = W_U \, \sigma(a^{\ell,h}_i), \quad \mathrm{logit}^\ell_i(m^\ell_i) = W_U \, \sigma(m^\ell_i), \quad \mathrm{logit}^\ell_i(x^\ell_i) = W_U \, \sigma(x^\ell_i)$$

Thus, logit lens serves as a powerful tool for pinpointing specific model components crucial to prediction behaviours.

3Compared to traditional RAG approaches that directly map context to response based on word embeddings, our work has a more general setting, which avoids potential embedding mismatch due to the common paraphrasing of RAG LLMs.

[Figure 1 depicts a worked example: context C = "Brothers Amos and Wilfrid Ayre founded Burntisland Shipbuilding Co. in 1918 as a First World War emergency shipyard.[1] Its yard at Burntisland West Dock had four berths and capacity to build ships up to 450 feet (140 m) long[1] and up to 59 feet (18 m) beam.[3] However, until the 1950s Burntisland built relatively few vessels more than about 425 feet (130 m) long and 57 feet (17.4 m) beam.[3]"; query Q = "Who founded the Burntisland Shipbuilding Company?"; generated response R = "Amos and Wilfrid Ayre founded the Burntisland Shipbuilding Company in 1918."; the three ablated contexts C_ABLATE(c_1), C_ABLATE(c_2), C_ABLATE(c_3) are each scored by JSD against the full-context response distribution, with the first sentence ranked top.]

Figure 1: This framework demonstrates how our ARC-JSD works: (a) a RAG LLM P_LM(·) first generates response R conditioned on the full context C and query Q; (b) by ablating one context sentence at a time, we calculate the probability distribution of the same response R conditioned on the ablated context C_ABLATE(c_i) and query Q; (c) we then calculate the JSD scores between the probability distributions of the same response R under the full context and under each ablated context, and locate the most relevant context sentence supporting R as the one with the highest JSD score.

4 Attributing Top Relevant Context Sentences via JSD

In this section, we introduce our ARC-JSD method to identify the most relevant context sentences contributing to generated responses. We subsequently present empirical evaluations that demonstrate the effectiveness of ARC-JSD compared to the method proposed by Cohen-Wang et al. [5], across multiple datasets and varying scales of RAG LLMs.

4.1 Identifying Relevant Context via JSD

Following the assumption proposed by Cohen-Wang et al. [5], the removal of context segments critical to generating a specific response R significantly impacts the probability distribution of that response.
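This end-to-end scoring procedure can be sketched in a few lines (a toy stand-in: `toy_dists` mimics P_LM by returning per-token distributions that are confident only while the grounding sentence is present; all names and the example sentences are ours):

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def arc_jsd_scores(token_dists, context, query, response):
    """Per-sentence JSD between full-context and ablated-context response
    distributions, summed over the response tokens."""
    full = token_dists(context, query, response)
    scores = []
    for i in range(len(context)):
        ablated = [s for j, s in enumerate(context) if j != i]
        abl = token_dists(ablated, query, response)
        scores.append(sum(jsd(p, q) for p, q in zip(full, abl)))
    return scores

# Toy stand-in for P_LM: the response tokens are confident (0.9) only while
# the sentence containing "1918" is still in the context.
def toy_dists(context, query, response):
    grounded = any("1918" in s for s in context)
    p = [0.9, 0.1] if grounded else [0.5, 0.5]
    return [p for _ in response]

C = ["Founded in 1918.", "Four berths.", "Few long vessels."]
scores = arc_jsd_scores(toy_dists, C, "When founded?", ["19", "18"])
print(scores.index(max(scores)))  # 0 -> the grounding sentence ranks first
```

Removing the sentence that grounds the answer shifts the response distribution sharply, so it receives the largest JSD score, while removing an irrelevant sentence leaves the distribution, and hence the score, unchanged.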
Conversely, the removal of less relevant context segments is expected to minimally affect the probability distribution of R. Unlike the approach of Cohen-Wang et al. [5], which requires extensive sampling of ablated contexts for each (C, Q) pair and training a surrogate model to learn context-response relationships, our proposed ARC-JSD method relies purely on inference, as illustrated in Figure 1. Specifically, we compute the JSD between the response probability distributions conditioned on the full context C and on each context-ablated variant C_ABLATE(c_i):

$$\mathrm{JSD}(c_i) = \sum_{j=1}^{|R|} \mathrm{JSD}\big(P_{LM}(r_j \mid C, Q) \,\|\, P_{LM}(r_j \mid C_{\text{ABLATE}}(c_i), Q)\big) \qquad (1)$$

Table 2: Context attribution accuracy of the Contextcite baseline and our ARC-JSD, where Contextcite needs n + 1 = 256 ablation calls to achieve a low root mean squared error based on their work [5]. Compared to [5], our ARC-JSD only needs |C| ablation calls, which is significantly smaller than n based on Table 1.

Method | Dataset | Qwen2-1.5B-IT | Qwen2-7B-IT | Gemma2-2B-IT | Gemma2-9B-IT | Time complexity
Contextcite (n = 256 calls) | TyDi QA | 77.5 | 75.1 | 70.7 | 76.4 | O(n + 1)
Contextcite (n = 256 calls) | Hotpot QA | 54.9 | 68.4 | 54.8 | 70.4 |
Contextcite (n = 256 calls) | MuSiQue | 54.5 | 60.3 | 51.8 | 60.9 |
ARC-JSD | TyDi QA | 84.1 | 80.0 | 76.6 | 81.8 | O(|C| + 1)
ARC-JSD | Hotpot QA | 71.1 | 82.3 | 67.0 | 79.4 |
ARC-JSD | MuSiQue | 60.6 | 76.8 | 65.3 | 78.2 |

where JSD(c_i) aggregates the JSD scores of each generated token r_j from R when the context sentence c_i is ablated from the context C. By calculating JSD scores for all sentences in the context, we identify the most relevant context sentence c_i by selecting the sentence based on
the assumption above about the significant impact of removing critical context segments:

$$c_i = \arg\max_{c_i \in C} \,\{\mathrm{JSD}(c_i)\}_{i=1}^{|C|} \qquad (2)$$

4.2 Evaluation of Context Attribution Accuracy

Table 1: The size of the three benchmarks, randomly sampled from their development sets (up to 1,000 samples each), with the average word and sentence counts of the context (i.e., |C|) summarised.

Dataset | Size | Avg. words | Avg. sentences
TyDi QA | 440 | 99.5 | 4.8
Hotpot QA | 1,000 | 940.3 | 51.1
MuSiQue | 1,000 | 1753.8 | 93.6

To assess the efficacy of our ARC-JSD method, we conduct experiments on three widely recognised question-answering datasets commonly used in RAG studies: TyDi QA [4], a multilingual QA dataset using entire Wikipedia articles as the external context (we only use the English portion); Hotpot QA [35], a multi-hop QA dataset requiring reasoning over multiple documents; and MuSiQue [26], a high-quality multi-hop QA benchmark over Wikipedia that highlights minimal context and multiple valid reasoning paths to evaluate complex reasoning capabilities. Table 1 summarises the statistics of these datasets; MuSiQue has the longest context input, with an average context length of |C| = 93.6 sentences. Our evaluations involve four instruction-tuned LLMs of varying scales, namely Qwen2-1.5B-IT, Qwen2-7B-IT [34], Gemma2-2B-IT, and Gemma2-9B-IT [25]. For each dataset, we randomly select up to 1,000 samples from their development sets. All models are evaluated in inference mode without further fine-tuning. The performance of our ARC-JSD method is benchmarked against Contextcite, proposed by Cohen-Wang et al. [5] (Appendix D includes more details). Table 2 presents the comparative results, clearly demonstrating that ARC-JSD consistently outperforms Contextcite across all datasets and LLM scales, yielding an average accuracy improvement of approximately 10.7%.
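To make the efficiency comparison concrete, the average sentence counts from Table 1 imply the following rough ratios of ablation forward calls (a back-of-the-envelope sketch; actual wall-clock speedups also depend on implementation and batching):

```python
# Contextcite needs a fixed n + 1 = 256 ablation calls per example;
# ARC-JSD needs one call per context sentence plus one full-context call.
avg_sentences = {"TyDi QA": 4.8, "Hotpot QA": 51.1, "MuSiQue": 93.6}  # Table 1

for name, c in avg_sentences.items():
    ratio = 256 / (c + 1)
    print(f"{name}: {ratio:.1f}x fewer ablation calls")
# TyDi QA: 44.1x, Hotpot QA: 4.9x, MuSiQue: 2.7x
```

The call count is only a proxy for runtime, but the MuSiQue ratio of roughly 2.7x is consistent with the up-to-three-fold wall-clock speedup reported below.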
Crucially, while Contextcite typically necessitates constructing n + 1 = 256 randomly ablated context samples per response-context pair for surrogate model training to achieve a low root mean squared error, ARC-JSD requires only |C| ablations per pair, a number considerably smaller given the dataset statistics in Table 1. Consequently, our method offers substantial computational efficiency improvements, achieving up to 3-fold speedups (see Appendix F for details). In addition, we utilise GPT-4.1 mini as a judge to verify whether the generated responses of all RAG models are semantically equivalent to the corresponding gold answers from the datasets when context attribution is correct. The average accuracy is up to 99.3% (see Appendix E for details).

5 Mechanistically Studying RAG LLMs for Context Attribution

5.1 Locating Relevant Attention Heads and MLPs

To better understand the internal mechanisms by which RAG LLMs attribute generated responses to their relevant context sentences, we systematically investigate the specific attention heads and multilayer perceptron (MLP) layers involved. Our method combines the ARC-JSD metric described

[Figure 2 illustrates the procedure: for the full-context input and the input with the top-1 attributed sentence ablated, the JSD is computed at each attention head and MLP output across the residual stream; ranking these scores yields, e.g., top attention heads L24H10, L26H12, L21H9 and top MLPs L23, L18, L20.]

Figure 2: Following our proposed ARC-JSD framework, we apply the JSD-based metric to internal components
of RAG LLMs: (a) for each attention head or MLP output at each layer, we calculate the probability distribution of the same response R conditioned on the same query Q with the full context C and with the ablated context C_ABLATE(c_top-1), obtained by removing the top relevant context sentence identified in § 4.1; (b) we then locate the top-N relevant attention heads or MLPs contributing to context attribution by ranking the collected JSD scores in descending order.

previously (§ 4.1) with the Logit Lens [19] to precisely quantify contributions from these internal model components. Following the ARC-JSD framework of § 4.1, we apply the JSD difference at the level of individual attention heads and MLP layers, comparing their outputs between the full context and the ablation of the most relevant context sentence, using Eq. 1:

$$\mathrm{JSD}^{\ell,h}_{\text{Attn}} = \sum_{j=1}^{|R|} \mathrm{JSD}\big(P^{\ell,h}_{\text{Attn}}(r_j \mid C, Q) \,\|\, P^{\ell,h}_{\text{Attn}}(r_j \mid C_{\text{ABLATE}}(c_{\text{top-1}}), Q)\big)$$
$$\mathrm{JSD}^{\ell}_{\text{MLP}} = \sum_{j=1}^{|R|} \mathrm{JSD}\big(P^{\ell}_{\text{MLP}}(r_j \mid C, Q) \,\|\, P^{\ell}_{\text{MLP}}(r_j \mid C_{\text{ABLATE}}(c_{\text{top-1}}), Q)\big) \qquad (3)$$

where $P^{\ell,h}_{\text{Attn}}(\cdot)$ and $P^{\ell}_{\text{MLP}}(\cdot)$ denote the probability distributions derived from attention head outputs $a^{\ell,h}_j$ and MLP outputs $m^{\ell}_j$, respectively, via the logit lens and softmax operations:

$$P^{\ell,h}_{\text{Attn}}(\cdot) = \mathrm{Softmax}\big(\mathrm{logit}(a^{\ell,h}_j)\big), \quad P^{\ell}_{\text{MLP}}(\cdot) = \mathrm{Softmax}\big(\mathrm{logit}(m^{\ell}_j)\big) \qquad (4)$$

where the shape of the attention head output $a^{\ell,h}$ and the MLP output $m^{\ell}$ is $[1, d]$, and $d$ is the dimensionality of the residual stream. By computing JSD scores across all heads and MLP layers, we rank these components according to their relevance to context attribution:

$$J_{\text{Top-}N}(\text{Attn}) = \mathrm{sort}\big(\{\mathrm{JSD}^{\ell,h}_{\text{Attn}}\}_{\ell=0,h=0}^{L,H},\ \text{descending}\big), \quad J_{\text{Top-}N}(\text{MLP}) = \mathrm{sort}\big(\{\mathrm{JSD}^{\ell}_{\text{MLP}}\}_{\ell=0}^{L},\ \text{descending}\big) \qquad (5)$$

5.2 Mechanistic Insights from Located Attention Heads and MLPs

Applying the methodology described in § 5.1, we conducted experiments across the three benchmark datasets (see § 4.2) using various LLM scales. Figure 3 presents the distribution and JSD scores of the attention heads identified as most relevant for context attribution in Qwen2-1.5B-Instruct on the TyDi QA dataset.
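The ranking step of § 5.1 amounts to projecting each component's residual-stream write through the unembedding and sorting the resulting per-component JSD scores. A small sketch (the logit-lens projection is a toy with d = 2 and |V| = 2, the scores are invented, and the layer/head labels merely echo Figure 2's illustration):

```python
import math

def softmax(logits):
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logit_lens(component_out, W_U):
    """Project a component's [1, d] residual-stream write onto the vocabulary
    via the d x |V| unembedding matrix, then softmax into a distribution."""
    d, V = len(W_U), len(W_U[0])
    logits = [sum(component_out[k] * W_U[k][v] for k in range(d)) for v in range(V)]
    return softmax(logits)

def top_components(jsd_scores, n=3):
    """Rank (layer, head) pairs by their JSD score, descending."""
    return sorted(jsd_scores, key=jsd_scores.get, reverse=True)[:n]

# Hypothetical per-head JSD scores.
scores = {("L24", "H10"): 0.91, ("L26", "H12"): 0.74,
          ("L21", "H9"): 0.55, ("L3", "H1"): 0.02}
print(top_components(scores))  # [('L24', 'H10'), ('L26', 'H12'), ('L21', 'H9')]
```

In a real model the `component_out` vectors would be the cached head and MLP outputs, and one distribution pair (full vs. ablated context) is compared per component per response token.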
Our analysis reveals that the top attention heads contributing to context attribution predominantly reside in the higher layers. This observation holds across most datasets, partially corroborating earlier findings by Wu et al. [32], which indicated that retrieval-related attention heads are typically found in the intermediate and higher layers.

Figure 3: The average JSD scores of the attention heads and MLPs of Qwen2-1.5B-IT on the TyDi QA dataset across all layers. A deeper colour indicates a larger JSD score.

Notably, our work expands upon the NIAH setting explored by Wu et al. [32] by mechanistically evaluating attention head and MLP relevance through the paraphrasing and contextual integration performed by RAG LLMs. This setting better reflects real-world RAG applications, where models rarely copy text exactly but instead synthesise and rephrase information from retrieved sources. Additional visualisations and distributions for the Qwen2-7B-IT and Gemma2 models across all datasets are provided in Appendix H. Similarly, Figure 3 illustrates that the intermediate and higher MLP layers also contribute significantly to context attribution. This pattern remains consistent across different datasets and model scales within the same LLM family. Corresponding detailed findings for the Qwen2-7B-IT and Gemma2 models across all datasets are available in Appendix H.

6 Verification of the JSD-based Mechanistic Study

6.1 Semantic Gains of Attention and MLPs for Context Attribution

Apart from locating relevant attention heads and
MLPs using the JSD-based metric of § 5.1, we also know that semantic information relevant to context attribution is added back to the residual stream by the attention and MLP modules at each layer, given the autoregressive language model architecture described in § 3 [7, 13]. Based on this property, we can verify whether the JSD-based localisation of attention heads and MLPs in § 5.1 works by projecting the residual stream before and after each layer's attention and MLP components into the vocabulary space, and calculating the cosine similarity with the generated response R to identify which attention and MLP modules provide higher semantic gains.

Building on the internal mechanisms of LLMs introduced in § 3, with the full context C and query Q as the model's input, we split the residual stream flow of each layer into three parts for each generated token $t_i$: the pre-residual stream $x^{\ell,\text{pre}}_i$, the middle-residual stream $x^{\ell,\text{mid}}_i$, and the post-residual stream $x^{\ell,\text{post}}_i$:

$$x^{\ell,\text{pre}}_i = x^{\ell-1,\text{post}}_i, \quad x^{\ell,\text{mid}}_i = x^{\ell,\text{pre}}_i + a^{\ell}_i, \quad x^{\ell,\text{post}}_i = x^{\ell,\text{mid}}_i + m^{\ell}_i = x^{\ell+1,\text{pre}}_i \qquad (6)$$

After applying the logit lens to $x^{\ell,\text{pre}}_i$, $x^{\ell,\text{mid}}_i$, and $x^{\ell,\text{post}}_i$ via the softmax, we obtain the probability distributions of the generated token, $t^{\ell,\text{pre}}_i$, $t^{\ell,\text{mid}}_i$, and $t^{\ell,\text{post}}_i$, for each layer, and then use greedy decoding to select the top-1 token with the highest probability:

$$t^{\ell,\text{pre/mid/post}}_i = \arg\max_{t^{\ell,\text{pre/mid/post}}_i \in V} \mathrm{softmax}\big(\mathrm{logit}(x^{\ell,\text{pre/mid/post}}_i)\big) \qquad (7)$$

Consequently, we can project the selected token $t^{\ell,\text{pre/mid/post}}_i$ into the vocabulary embedding space via the unembedding matrix $W_U \in \mathbb{R}^{d \times |V|}$:

$$e^{\ell,\text{pre/mid/post}}_i = W_U[:, t^{\ell,\text{pre/mid/post}}_i] \qquad (8)$$

We can then calculate the corresponding semantic gains $\Delta^{\ell,\text{Attn}}_i$ and $\Delta^{\ell,\text{MLP}}_i$ contributed by the attention and MLP modules, using the cosine-similarity difference with the generated response token embedding $e_i = W_U[:, r_i]$:

$$\Delta^{\ell,\text{Attn}}_i = \cos(e^{\ell,\text{mid}}_i, e_i) - \cos(e^{\ell,\text{pre}}_i, e_i), \quad \Delta^{\ell,\text{MLP}}_i = \cos(e^{\ell,\text{post}}_i, e_i) - \cos(e^{\ell,\text{mid}}_i, e_i) \qquad (9)$$

Finally, we average across the entire generated response R and calculate the
semantic gains $\Delta^{\ell,\text{Attn}}$ and $\Delta^{\ell,\text{MLP}}$ for the attention and MLP modules of each layer, then collect and sort the semantic gains of attention and MLP from all layers in descending order:

$$\Delta^{\ell,\text{Attn}} = \frac{1}{|R|} \sum_i^{|R|} \Delta^{\ell,\text{Attn}}_i, \quad \Delta^{\ell,\text{MLP}} = \frac{1}{|R|} \sum_i^{|R|} \Delta^{\ell,\text{MLP}}_i \qquad (10)$$

Table 3: Spearman's ρ of the overlap of the top-10 located attention and MLP layers between the JSD-based mechanistic and semantic-gain-based metrics over all datasets and RAG models. ♢ and ♠ indicate a p-value < 0.05 and < 0.01, respectively. Each cell gives J(·)∩S(+) / G(·)∩S(+).

Modules | Dataset | Qwen2-1.5B-IT | Qwen2-7B-IT | Gemma2-2B-IT | Gemma2-9B-IT
Attention | TyDi QA | 6.83♢ / 7.26♢ | 6.91♢ / 7.31♢ | 7.62♠ / 7.25♢ | 7.63♠ / 7.28♢
Attention | Hotpot QA | 6.73♢ / 6.65♢ | 6.81♢ / 6.79♢ | 6.68♢ / 6.67♢ | 6.72♢ / 6.73♢
Attention | MuSiQue | 6.67♢ / 6.72♢ | 6.72♢ / 6.83♢ | 6.69♢ / 6.71♢ | 6.73♢ / 6.75♢
MLP | TyDi QA | 6.90♢ / 7.72♠ | 6.96♢ / 7.67♠ | 7.75♠ / 8.03♠ | 7.78♠ / 8.05♠
MLP | Hotpot QA | 6.83♢ / 7.49♠ | 6.87♢ / 7.52♠ | 7.50♠ / 8.02♠ | 7.53♠ / 8.06♠
MLP | MuSiQue | 6.87♢ / 7.12♢ | 6.91♢ / 7.18♢ | 7.51♠ / 8.04♠ | 7.54♠ / 8.05♠

$$G_{\text{Top-}N}(\text{Attn}) = \mathrm{sort}\big(\{\Delta^{\ell,\text{Attn}}\}_{\ell=0}^{L},\ \text{descending}\big), \quad G_{\text{Top-}N}(\text{MLP}) = \mathrm{sort}\big(\{\Delta^{\ell,\text{MLP}}\}_{\ell=0}^{L},\ \text{descending}\big) \qquad (11)$$

6.2 Mutually Verifying the JSD-based Mechanistic Study via the Semantic Gains of Attention and MLPs

Based on Eq. 5 and Eq. 11, we can locate the layer-wise attention and MLP components relevant to context attribution from the two different perspectives of § 5.1 and § 6.1. We can evaluate the
correlation of both metrics and thereby further verify the effectiveness of our proposed ARC-JSD metric from § 4.1 and § 5.1. Given $\{\mathrm{JSD}^{\ell}_{\text{MLP}}\}_{\ell=0}^{L}$ and $\{\Delta^{\ell,\text{MLP}}\}_{\ell=0}^{L}$ from the JSD-based and semantic-gain-based metrics, we first define an average-ranking fusion, called the consensus $S^{(+)}$, to fuse the JSD and semantic-gain views, based on the assumption that a layer is important if both metrics rank it highly:

$$S^{(+)} = \frac{1}{2}\left(\mathrm{ranking}_J + \mathrm{ranking}_G\right) = \frac{1}{2}\left(\frac{\mathrm{ranking\ of}\ \{\mathrm{JSD}^{\ell}_{\text{MLP}}\}_{\ell=0}^{L}}{L} + \frac{\mathrm{ranking\ of}\ \{\Delta^{\ell,\text{MLP}}\}_{\ell=0}^{L}}{L}\right) \qquad (12)$$

where ranking of (·) assigns 1 to the largest $\mathrm{JSD}^{\ell}_{\text{MLP}}$ or $\Delta^{\ell,\text{MLP}}$ and assigns $L$ to the smallest. We then normalise away the influence of the layer count by dividing by $L$ to obtain $\mathrm{ranking}_J$ and $\mathrm{ranking}_G$, whose range is $[1/L, 1]$, i.e., a smaller fraction indicates a higher ranking ($1/L$ is best). Finally, we take the average of $\mathrm{ranking}_J$ and $\mathrm{ranking}_G$ as the consensus $S^{(+)}$, where a smaller consensus value indicates stronger joint evidence that both metrics consider the layer important, and a larger value means at least one metric places the layer far down the list. We then calculate Spearman's ρ of $J_{\text{Top-}N}(\text{MLP}) \cap S^{(+)}_{\text{Top-}N}$ and $G_{\text{Top-}N}(\text{MLP}) \cap S^{(+)}_{\text{Top-}N}$, where $S^{(+)}_{\text{Top-}N} = \mathrm{sort}(S^{(+)},\ \text{ascending})$. For the attention components, we first average the JSD scores of all attention heads in the same layer to build $\{\mathrm{JSD}^{\ell}_{\text{Attn}}\}_{\ell=0}^{L} = \{\frac{1}{H}\sum_{h=0}^{H}\mathrm{JSD}^{\ell,h}_{\text{Attn}}\}_{\ell=0}^{L}$, and then calculate Spearman's ρ of $J_{\text{Top-}N}(\text{Attn}) \cap S^{(+)}_{\text{Top-}N}$ and $G_{\text{Top-}N}(\text{Attn}) \cap S^{(+)}_{\text{Top-}N}$. The benefit of using the consensus $S^{(+)}$ instead of the raw JSD or semantic-gain values is that $S^{(+)}$ removes the scaling issues arising from the different units and variances of the two metrics, and a single extremely large JSD or semantic-gain value cannot swamp the fusion, making it robust to outliers.
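The average-ranking fusion just described can be sketched as follows (a pure-Python toy: the layer scores are invented to show the behaviour, and ties are ignored for simplicity):

```python
def rankings(scores):
    """Assign rank 1..L (1 = largest score), normalised by L into [1/L, 1]."""
    L = len(scores)
    order = sorted(range(L), key=lambda l: scores[l], reverse=True)
    r = [0.0] * L
    for pos, l in enumerate(order, start=1):
        r[l] = pos / L
    return r

def consensus(jsd_scores, gain_scores):
    """S(+): average of the two normalised rankings; smaller = jointly important."""
    rj, rg = rankings(jsd_scores), rankings(gain_scores)
    return [(a + b) / 2 for a, b in zip(rj, rg)]

# Toy per-layer scores: both metrics agree that layer 2 matters most.
jsd_layers  = [0.1, 0.3, 0.9, 0.2]
gain_layers = [0.0, 0.4, 0.8, 0.1]
s = consensus(jsd_layers, gain_layers)
print(s.index(min(s)))  # 2
```

Because both views are reduced to normalised ranks before averaging, a layer only obtains a small consensus value when both metrics place it near the top, which is exactly the robustness property argued for above.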
Table 3 shows that the ρ of the overlap of the top-10 located attention and MLP layers between the JSD-based and semantic-gain-based metrics, fused via the consensus, is statistically significant or highly significant in all cases, which further verifies the effectiveness of our proposed JSD-based mechanistic approach.

7 Case Studies of Located Attention Heads and MLPs for Context Attribution

Based on the semantic-gains analysis of § 6.2, we further visualise the projection of the middle-residual stream $x^{\ell,\text{mid}}_i$ and post-residual stream $x^{\ell,\text{post}}_i$ via the Logit Lens into vocabulary space, in Figure 4 and Appendix J. In Figure 4, Qwen2-1.5B-IT was given a sample from the TyDi QA dev set with a Wikipedia context introducing mosquitoes and the query "How many wings does a mosquito have?" as input, and it generated the response "A mosquito has two wings." as output. Based on our proposed ARC-JSD method, we successfully located the top-relevant context sentence, i.e., "Mosquitoes have a slender segmented body, a pair of wings, three pairs of long hair-like legs,

Figure 4: The projection of the middle-residual stream $x^{\ell,\text{mid}}_i$ and post-residual stream $x^{\ell,\text{post}}_i$ via the Logit Lens into vocabulary space, from layer 20 to layer 27 of Qwen2-1.5B-IT on a TyDi QA data sample, where the generated response R is "A mosquito has two wings." (see Appendix J for all layer projections). Each cell shows the most probable token decoded via the Logit Lens. The colour indicates the probability of
the decoded token of the corresponding $x^{\ell,\text{mid}}_i$ or $x^{\ell,\text{post}}_i$ via the Logit Lens.

feathery antennae, and elongated mouthparts". When we compare the heatmaps of $x^{\ell,\text{post}}_i$ and $x^{\ell,\text{mid}}_i$ in Figure 4 from Layer 20 to Layer 27 (see Appendix J for the whole heatmap), we find that the probability of the correct token increases significantly at $x^{\ell,\text{post}}_i$ compared to $x^{\ell,\text{mid}}_i$, such as 'wings' in Layer 23, 'A', 'has', 'two' in Layer 26, and 'mosquito', 'two', 'A' in Layer 27, which aligns with our finding from § 5.2, via the JSD-based metric, that MLPs contribute more to context attribution in higher layers. In addition, we find that several correct tokens are gradually transferred from their Chinese form to the English version in Qwen2 models, such as '一只 (A)', '拥有 (has)', and '翅膀 (wings)', which is reasonable as Chinese is one of the main language resources used in Qwen2 pre- and post-training [34]. This finding also matches observations from Wu et al. [33] that representations tend to be anchored by semantically equivalent dominant-language tokens in higher layers. Moreover, we conduct an ablation study comparing the JSD difference of responses when masking the top-10 relevant attention heads versus 10 randomly selected attention heads. Generally, the attention heads located by the JSD-based metric cause larger JSD scores than the random attention heads, which further verifies the effectiveness of our proposed ARC-JSD method (see Appendix I for details).

8 Conclusion

This study introduces ARC-JSD, an inference-time JSD-based metric that attributes responses in RAG directly to their context sentences without additional fine-tuning or surrogate modelling. Evaluations on diverse QA benchmarks and multiple scales of instruction-tuned LLMs demonstrate that ARC-JSD achieves higher attribution accuracy while markedly reducing computational overhead relative to surrogate approaches.
Combined with the Logit Lens, ARC-JSD further isolates the relevant attention heads and MLPs underpinning context attribution, thereby advancing mechanistic interpretability. Collectively, these findings enhance the transparency of RAG systems and lay the groundwork for future research on reliable, efficient RAG models.

References

[1] Steven Bird, Ewan Klein, and Edward Loper. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media, Inc., 2009.
[2] Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, and Kelvin Guu. PURR: Efficiently editing language model hallucinations by denoising language model corruptions. arXiv preprint arXiv:2305.14908, 2023.
[3] Yung-Sung Chuang, Benjamin Cohen-Wang, Shannon Zejiang Shen, Zhaofeng Wu, Hu Xu, Xi Victoria Lin, James Glass, Shang-Wen Li, and Wen-tau Yih. SelfCite: Self-supervised alignment for context attribution in large language models. arXiv preprint arXiv:2502.09604, 2025.
[4] Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454–470, 2020.
[5] Benjamin Cohen-Wang, Harshay Shah, Kristian Georgiev, and Aleksander Madry. ContextCite: Attributing model generation to context. Advances in Neural Information Processing Systems, 37:95764–95807, 2024.
[6] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers. In
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8493–8502, 2022.
[7] Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. A mathematical framework for transformer circuits. Transformer Circuits Thread, 1:1, 2021.
[8] Javier Ferrando, Oscar Balcells Obeso, Senthooran Rajamanoharan, and Neel Nanda. Do I know this entity? Knowledge awareness and hallucinations in language models. In The Thirteenth International Conference on Learning Representations, 2025.
[9] Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6465–6488, 2023.
[10] Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 30–45, 2022.
[11] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495, 2021.
[12] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[13] Shahar Katz, Yonatan Belinkov, Mor Geva, and Lior Wolf. Backward lens: Projecting language model gradients into the vocabulary space. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2390–2422, 2024.
[14] Ruizhe Li and Yanjun Gao. Anchored answers: Unravelling positional bias in GPT-2's multiple-choice questions.
arXiv preprint arXiv:2405.03205, 2024.
[15] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372, 2022.
[16] Kevin Meng, Arnab Sen Sharma, Alex J. Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. In The Eleventh International Conference on Learning Representations, 2022.
[17] Kevin Meng, Arnab Sen Sharma, Alex J. Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. In The Eleventh International Conference on Learning Representations, 2023.
[18] Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147, 2022.
[19] nostalgebraist. interpreting GPT: the logit lens, 2020.
[20] Jirui Qi, Gabriele Sarti, Raquel Fernández, and Arianna Bisazza. Model internals-based answer attribution for trustworthy retrieval-augmented generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6037–6053, 2024.
[21] Hongjin Qian, Zheng Liu, Kelong Mao, Yujia Zhou, and Zhicheng Dou. Grounding language model with chunking-free in-context retrieval. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298–1311, 2024.
[22] Aviv Slobodkin, Eran Hirsch, Arie Cattan, Tal Schuster, and Ido Dagan. Attribute first, then generate:
Locally-attributable grounded text generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3344, 2024.
[23] Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, and Soujanya Poria. Measuring and enhancing trustworthiness of LLMs in RAG through grounded attributions and learning to refuse. In The Thirteenth International Conference on Learning Representations, 2025.
[24] ZhongXiang Sun, Xiaoxue Zang, Kai Zheng, Jun Xu, Xiao Zhang, Weijie Yu, Yang Song, and Han Li. ReDeEP: Detecting hallucination in retrieval-augmented generation via mechanistic interpretability. In The Thirteenth International Conference on Learning Representations, 2025.
[25] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
[26] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. MuSiQue: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554, 2022.
[27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[28] Rose Wang, Pawan Wirawarn, Omar Khattab, Noah Goodman, and Dorottya Demszky. Backtracing: Retrieving the cause of the query. In Findings of the Association for Computational Linguistics: EACL 2024, pages 722–735, 2024.
[29] Ziqi Wang, Hanlin Zhang, Xiner Li, Kuan-Hao Huang, Chi Han, Shuiwang Ji, Sham M. Kakade, Hao Peng, and Heng Ji. Eliminating position bias of language models: A mechanistic approach. In The Thirteenth International Conference on Learning Representations, 2025.
[30] Sarah Wiegreffe, Oyvind Tafjord, Yonatan Belinkov, Hannaneh Hajishirzi, and Ashish Sabharwal. Answer, assemble, ace: Understanding how LMs answer multiple choice questions. In The Thirteenth International Conference on Learning Representations, 2025.
[31] Theodora Worledge, Judy Hanwen Shen, Nicole Meister, Caleb Winston, and Carlos Guestrin. Unifying corroborative and contributive attributions in large language models. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 665–683. IEEE, 2024.
[32] Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao Peng, and Yao Fu. Retrieval head mechanistically explains long-context factuality. In The Thirteenth International Conference on Learning Representations, 2025.
[33] Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, and Yoon Kim. The semantic hub hypothesis: Language models share semantic representations across languages and modalities. In The Thirteenth International Conference on Learning Representations, 2025.
[34] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[35] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, 2018.
[36] Xi Ye, Ruoxi Sun, Sercan Arik, and Tomas Pfister. Effective large language model adaptation for improved grounding and citation generation. In Proceedings of the 2024
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6237–6251, 2024.
[37] Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, Yu Su, and Huan Sun. Automatic evaluation of attribution by large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4615–4635, 2023.
[38] Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, and Michael Bendersky. Inference scaling for long-context retrieval augmented generation. In The Thirteenth International Conference on Learning Representations, 2025.
[39] Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, and Besmira Nushi. Attention satisfies: A constraint-satisfaction lens on factual errors of language models. In The Twelfth International Conference on Learning Representations, 2024.

A Broad Impact

RAG systems underpin a wide range of everyday activities, from itinerary planning and news aggregation to document drafting, by combining LLM reasoning with evidence retrieved from external sources. Yet the practical value of these systems hinges on our ability to verify that each generated statement is genuinely grounded in the retrieved material. The proposed post-hoc ARC-JSD method offers a lightweight, modular solution to this problem. Because ARC-JSD can be seamlessly integrated into any open-source RAG pipeline, it provides developers and researchers with an immediate way of auditing attribution fidelity, thereby strengthening the transparency, reliability, and ultimately the public trust in RAG-based applications.

B Limitations

Our work focuses on analyses that (i) identify the context sentences that most strongly influence a RAG model's output and (ii) attribute that influence to specific attention heads and MLP layers via a JSD-based metric.
Two important directions, therefore, remain unexplored. First, our layer-level view does not reveal which individual neurons within the MLPs mediate context attribution; techniques such as sparse autoencoder (SAE) probing could provide the necessary resolution. Second, we have not yet examined whether surgical interventions on the identified attention heads, or on the putative neuron-level circuits, can be used to steer or constrain the model's behaviour. Addressing these questions would deliver a more fine-grained mechanistic understanding and open the door to reliable, attribution-aware editing of RAG systems.

C Details of the Internal Mechanisms of LLMs

We consider the standard autoregressive Transformer architecture used in LLMs, originally introduced by [27] and subsequently analysed in a series of mechanistic studies [11, 7, 10, 6, 15, 16, 39]. Given a prompt of length $T$, the input tokens $(t_1, \ldots, t_T)$ from the context-query pair, each drawn from a vocabulary $\mathcal{V}$, are mapped to $d$-dimensional embedding vectors $x_i^0 \in \mathbb{R}^d$ via the embedding matrix $W_E \in \mathbb{R}^{|\mathcal{V}| \times d}$. LLMs normally comprise $L$ identical layers. At layer $\ell$, the residual stream $X^\ell = (x_1^\ell, \ldots, x_T^\ell)$, $x_i^\ell \in \mathbb{R}^d$, acts as a common read–write buffer for both the multi-head attention and the MLP block [7]. For each token $i$, the residual update is
$$x_i^\ell = x_i^{\ell-1} + a_i^\ell + m_i^\ell, \quad (13)$$
where $a_i^\ell$ and $m_i^\ell$ denote the contributions of the attention and MLP sub-modules, respectively.⁴ After the final layer, a LayerNorm
$\sigma(\cdot)$ and the unembedding matrix $W_U \in \mathbb{R}^{d \times |\mathcal{V}|}$ produce the next-token distribution
$$P_{\mathrm{LM}}(t_{T+1} \mid t_{1:T}) = \mathrm{softmax}\big(W_U\,\sigma(x_T^L)\big). \quad (14)$$

Each layer contains $H$ attention heads, each factorised into $QK$ and $OV$ circuits operating with weight matrices $W_Q^{\ell,h}, W_K^{\ell,h}, W_V^{\ell,h}, W_O^{\ell,h} \in \mathbb{R}^{d \times d}$. The $QK$ circuit establishes the attention pattern $A^{\ell,h} \in \mathbb{R}^{T \times T}$, while the $OV$ circuit transports content across sequence positions. For head $h$, the contribution of source token $j$ to target token $i$ is
$$a_{i,j}^{\ell,h} = A_{i,j}^{\ell,h}\big(x_j^{\ell-1} W_V^{\ell,h}\big) W_O^{\ell,h}, \quad (15)$$
and the total attention update for token $i$ is
$$a_i^\ell = \sum_{h=1}^{H} \sum_{j=1}^{T} a_{i,j}^{\ell,h}. \quad (3)$$
A concise per-head summary is $a_i^{\ell,h} = \sum_j a_{i,j}^{\ell,h}$.

Following the key–value interpretation of MLP layers [11, 7], let $W_{\mathrm{in}}^\ell \in \mathbb{R}^{d_m \times d}$ and $W_{\mathrm{out}}^\ell \in \mathbb{R}^{d \times d_m}$ denote the input and output weights. Given $x_i^{\ell-1}$, the block first produces coefficients
$$k_i^\ell = \gamma\big(W_{\mathrm{in}}^\ell x_i^{\ell-1}\big) \in \mathbb{R}^{d_m}, \quad (16)$$
where $\gamma$ is the activation function (e.g. GELU). These coefficients weight the value vectors (rows of $W_{\mathrm{out}}^\ell$) to yield
$$m_i^\ell = \sum_{n=1}^{d_m} k_i^{\ell,n} v^{\ell,n}, \qquad v^{\ell,n} \equiv W_{\mathrm{out}}^\ell[n,:]. \quad (17)$$

⁴Layer normalisation preceding each sub-module is omitted here for clarity.

D Experimental Details

We run all experiments on H100 GPUs, and we use the sentence tokeniser from the nltk library [1] to preprocess all datasets. For all RAG models, i.e., Qwen2-1.5B-Instruct, Qwen2-7B-Instruct [34], Gemma2-2B-Instruct, and Gemma2-9B-Instruct [25], we use their standard chat templates to construct the prompt, i.e., using the context and query as a user's message. When constructing prompts for the TyDi QA dataset, we follow the prompt:

Context: {context}
Query: {question}

For the Hotpot QA and MuSiQue datasets, which have multiple documents for each data sample, the prompt is constructed as:

Title: {title_1}
Content: {document_1}
...
Title: {title_n}
Content: {document_n}
Query: {question}

E GPT-4.1 as Judge for Comparison between Generated Responses of RAG Models and Gold Answers from Datasets

After using our ARC-JSD to correctly locate the top relevant context sentences for generated responses, we further utilise GPT-4.1 as a judge to check whether those responses correctly answer the queries based on the corresponding context. As Table 4 shows, the generated responses from all RAG models achieve high accuracy in answering the queries based on the contexts, which demonstrates the fundamental ability of these instruction-tuned RAG models.

Table 4: GPT-4.1 as a judge to evaluate the semantic equivalence between generated responses of RAG models and the corresponding gold answers from those datasets.

Acc. (%)    Qwen2-1.5B-IT  Qwen2-7B-IT  Gemma2-2B-IT  Gemma2-9B-IT
TyDi QA     99.1           99.4         98.9          99.5
Hotpot QA   99.2           99.5         99.1          99.6
MuSiQue     99.3           99.4         99.2          99.8

F Computational Efficiency Between Contextcite and Our ARC-JSD

We mainly compare the computational efficiency of Contextcite [5] and our proposed ARC-JSD when attributing responses to the relevant context. As Figure 5 shows, our ARC-JSD method achieves up to a 3-fold speedup over the Contextcite baseline. The main reason is that ARC-JSD needs only $|C| + 1$ RAG forward calls to locate the top-relevant context, where $|C|$ is significantly smaller than Contextcite's $n$ calls ($n + 1 = 256$ achieves a stable RMSE based on their work [5]).

G Examples of ARC-JSD Context Attribution

We demonstrate more examples of our ARC-JSD attribution method applied to different RAG models on different datasets, where
each example includes the query, the generated response, and the located top-1 sentence from the context.

Figure 5: Computational efficiency comparison between our ARC-JSD and Contextcite ($n = 256$ calls) across all datasets and RAG models. Each bar represents the average seconds per sample for ARC-JSD and Contextcite with different RAG models and datasets.

Example of ARC-JSD attributions for Qwen2-1.5B-IT on TyDi QA
Query: How many wings does a mosquito have?
Response (attributed statement is highlighted): A mosquito has two wings.
Top-1 Source (from the context): ... The word "mosquito" (formed by mosca and diminutive -ito) [2] is Spanish for "little fly". [3] Mosquitoes have a slender segmented body, a pair of wings, three pairs of long hair-like legs, feathery antennae, and elongated mouthparts.

Example of ARC-JSD attributions for Qwen2-7B-IT on MuSiQue
Query: What is the date of birth of the person who was part of Ratata?
Response (attributed statement is highlighted): The date of birth for Mauro Scocco, who was part of the pop group Ratata, is given as September 11, 1962.
Top-1 Source (from the context): ... His mother's name was not recorded in history. Title: Mauro Scocco. Mauro Scocco (born 11 September 1962) is a Swedish pop artist of Italian descent. He has been described as "one of the sharpest songwriters in Sweden". ...

Example of ARC-JSD attributions for Gemma2-2B-IT on TyDi QA
Query: When was the friends song was released which is taken from the fourth studio album of canadian singer and song writer Justin Bieber?
Response (attributed statement is highlighted): The song "Friends" was released on **August 17, 2017**. It's important to note that this is from Justin Bieber's fourth studio album, "Purpose".
Top-1 Source (from the context): ... It was written by Bieber, Julia Michaels, Justin Tranter and BloodPop, with production handled by BloodPop.
The song was released through GENPOP Corporation, RBMG Records, School Boy Records, Def Jam Recordings and Republic Records on August 17, 2017. It is Bieber's first single credited as lead artist to be released in over a year, the previous being "Company", which was released in March 2016 for his studio album "Purpose" (2015). ...

Example of ARC-JSD attributions for Gemma2-9B-IT on Hotpot QA
Query: Tom Thumb Food & Pharmacy is part of the southern division of a grocery company based in what city?
Response (attributed statement is highlighted): The answer is **Boise, Idaho**. The text states that Tom Thumb Food & Pharmacy is part of the Southern division of Albertsons, and Albertsons Companies LLC is based in Boise, Idaho.
Top-1 Source (from the context): ... It is also known in Brazil as "Chapolin", "Vermelhinho" ("Little Red One") and "Polegar Vermelho" ("Red Thumb") in allusion to the famous fairy tale character Tom Thumb. Title: Albertsons. Albertsons Companies LLC is an American grocery company founded and based in Boise, Idaho. It is privately owned and operated by investors, including Cerberus Capital Management.

H JSD-based Mechanistic Insights for Located Attention Heads and MLPs

We visualise more attention-head and MLP heatmaps using our JSD-based mechanistic approach, where we find that most RAG models include attribution-relevant attention heads and MLPs across the intermediate and higher layers. On the Hotpot QA and MuSiQue datasets, Gemma2-2B-IT has some relevant attention heads in the lower layers.

Figure 6: The average JSD score of attention heads and MLP of Qwen2-1.5B-IT on Hotpot QA dataset across all
layers. The deeper colour indicates larger JSD scores.

Figure 7: The average JSD score of attention heads and MLP of Qwen2-1.5B-IT on MuSiQue dataset across all layers. The deeper colour indicates larger JSD scores.

Figure 8: The average JSD score of attention heads and MLP of Gemma2-2B-IT on TyDi QA dataset across all layers. The deeper colour indicates larger JSD scores.

I JSD Comparison between Masking Located Attention Heads and Random Attention Heads

We conducted an ablation study comparing the JSD difference obtained by masking the top-10 relevant attention heads against masking 10 randomly selected attention heads. As Table 5 shows, masking the top-10 attention heads located by our JSD-based metric yields higher JSD scores for the same responses than masking random heads.

Figure 9: The average JSD score of attention heads and MLP of Gemma2-2B-IT on Hotpot QA dataset across all layers. The deeper colour indicates larger JSD scores.

Figure 10: The average JSD score of attention heads and MLP of Gemma2-2B-IT on MuSiQue dataset across all layers. The deeper colour indicates larger JSD scores.

J Case Studies of Attention and MLP's Contribution for Each Response Token

Figure 11: The average JSD score of attention heads and MLP of Gemma2-9B-IT on TyDi QA dataset across all layers. The deeper colour indicates larger JSD scores.

Table 5: Comparison of average JSD scores between masking the top-10 relevant attention heads and randomly masking 10 attention heads, using all RAG models on all datasets.
Masking Top-10 Relevant Attention Heads    Randomly Masking 10 Attention Heads
2.23 ± 0.12                                1.53 ± 0.76

Figure 12: The projection of the middle-residual stream $x_i^{\ell,\mathrm{mid}}$ and post-residual stream $x_i^{\ell,\mathrm{post}}$ via Logit Lens to vocabulary space from layer 20 to layer 27 of Qwen2-1.5B-IT on a TyDi QA data sample, where the generated response $R$ is "A mosquito has two wings.". Each cell shows the most probable token decoded via Logit Lens. The colour indicates the probability of the decoded token of the corresponding $x_i^{\ell,\mathrm{mid}}$ or $x_i^{\ell,\mathrm{post}}$ via Logit Lens.

Figure 13: The projection of $x_i^{\ell,\mathrm{mid}}$ and $x_i^{\ell,\mathrm{post}}$ via Logit Lens to vocabulary space from layer 20 to layer 27 of Qwen2-7B-IT on a TyDi QA data sample, where the generated response $R$ is "Peter the Great had fourteen children.". Each cell shows the most probable token decoded via Logit Lens.

Figure 14: The projection of the middle-residual stream $x_i^{\ell,\mathrm{mid}}$ and post-residual stream $x_i^{\ell,\mathrm{post}}$ via Logit Lens to vocabulary space from layer 20 to layer 27 of Qwen2-1.5B-IT on a TyDi QA data sample, where the generated response $R$ is "The Meiji Restoration took place in Japan.". Each cell shows the most probable token decoded via Logit Lens.
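The ARC-JSD procedure evaluated above runs one forward pass with the full context plus one per ablated context sentence ($|C| + 1$ calls in total) and ranks sentences by the Jensen–Shannon divergence between the resulting answer distributions. The following is a minimal NumPy sketch of that scoring step only; the toy distributions stand in for real model outputs, and the helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log 0 is taken as 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_context_sentences(full_dist, ablated_dists):
    """Rank context sentences by how much removing each one perturbs
    the model's output distribution (larger JSD = more relevant).
    With |C| sentences this costs |C| + 1 forward calls in total."""
    scores = [jsd(full_dist, d) for d in ablated_dists]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores

# Toy example with 4 context sentences: removing sentence 2 flips the
# answer distribution, so it should be ranked as the top attribution.
full = [0.7, 0.2, 0.1]
ablated = [
    [0.68, 0.21, 0.11],  # sentence 0 removed: almost no change
    [0.60, 0.30, 0.10],  # sentence 1 removed: small change
    [0.10, 0.20, 0.70],  # sentence 2 removed: answer flips
    [0.70, 0.19, 0.11],  # sentence 3 removed: negligible change
]
order, scores = rank_context_sentences(full, ablated)
print(order[0])  # 2
```

In a real pipeline, `full_dist` and each entry of `ablated_dists` would be the model's output distribution over the response tokens with and without the corresponding context sentence.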
arXiv:2505.16418v1 [cs.CL] 22 May 2025

Exploring the Relationship Between Diversity and Quality in Ad Text Generation

Yoichi Aoki1,2, Soichiro Murakami3, Ukyo Honda3, Akihiko Kato3
1Tohoku University, 2RIKEN, 3CyberAgent
youichi.aoki.p2@dc.tohoku.ac.jp, {murakami_soichiro,honda_ukyo,kato_akihiko}@cyberagent.co.jp

Abstract

In natural language generation for advertising, creating diverse and engaging ad texts is crucial for capturing a broad audience and avoiding advertising fatigue. Despite the importance of diversity, the impact of diversity-enhancing methods, which have mainly been tested on tasks such as summarization and machine translation, has not been thoroughly explored in ad text generation. Ad text generation differs significantly from these tasks owing to its text style and requirements. This research explores the relationship between diversity and ad quality in ad text generation by considering multiple factors, such as diversity-enhancing methods, their hyperparameters, input–output formats, and the models.

1 Introduction

Advertising is crucial for companies to promote their products and services to a broad audience. Research on natural language generation for advertising has advanced using language models to meet the demand for automating ad creation (Hughes et al., 2019; Kamigaito et al., 2021; Golobokov et al., 2022; Murakami et al., 2023).

Diversity is a key metric in ad text generation. Repeatedly displaying the same ad to users can lead to boredom or advertising fatigue (Pechman and Stewart, 1988; Schmidt and Eisend, 2015). Additionally, creating diverse ad texts will likely appeal to a broad range of customers. Therefore, a need exists for technology capable of generating diverse ad texts.
However, the impact of enhancing diversity on ad quality and other metrics is poorly understood when diversity-enhancing methods such as beam search (Lowerre, 1976; Graves, 2012; Rush et al., 2013) or sampling (Ackley et al., 1985; Fan et al., 2018; Holtzman et al., 2020) are applied to ad text generation with language models.

Figure 1: We input ad content into large language models to generate multiple ad texts. We measure the diversity and quality of the generated ads and elucidate the relationship between these aspects. (Example: the ad content "Book a Hot Spring Inn in Hakone" yields ad texts such as "BOOK NOW! Hot Spring Inn in Hakone" and "Find a Hot Spring Inn in Hakone".)

Previous studies demonstrated that the relationship between diversity and quality varies significantly depending on the task (Meister et al., 2022). While tasks such as summarization and machine translation (MT) exhibit a trade-off, others such as story generation exhibit interdependencies. Dialogue tasks exhibit intermediate trends. In contrast, ad text generation differs from these tasks in the following ways:

1. Measures such as ad performance and text length constraints are emphasized, while minor grammatical errors are tolerable (Zhang et al., 2024).
2. Owing to the necessity of conveying messages effectively in limited spaces (including slogans or banner ads), unique expressions involving symbols or keywords are prevalent (Murakami et al., 2025).

The relationship between diversity and ad quality is non-trivial and requires systematic investigation, owing to these evaluation criteria and stylistic differences.

This study reveals that enhancing diversity involves trade-offs across multiple ad quality measures, and that sampling and beam search behave differently depending
on the number of few-shot examples and outputs. These findings highlight the need for the careful use of diversity-enhancing methods in ad text generation. Moreover, we also provide promising directions for enhancing diversity while maintaining ad quality by combining the outputs of a set of different models.

2 Problem Setting

2.1 Input and Output

Following Murakami et al. (2025), we input the Japanese ad text dataset CAMERA (Mita et al., 2024) as the ad content to a large language model and generated five diverse ad texts using diversity-enhancing methods (Figure 1). For in-context learning, we supplied three input–output examples to the model.

2.2 Diversity-Enhancing Methods

The primary decoding methods for achieving diversity include sampling and beam search (Zarrieß et al., 2021). In this study, we use nucleus sampling (Holtzman et al., 2020), temperature sampling (Ackley et al., 1985), beam search (Lowerre, 1976; Graves, 2012; Rush et al., 2013), and diverse beam search (Li et al., 2016) as typical sampling and beam search methods for generating diverse ad texts. We also employ Diverse MBR (DMBR) and k-medoids MBR (KMBR) (Jinnai et al., 2024), which offer better trade-offs in tasks such as translation and captioning.

2.3 Evaluation

We measure the diversity among the five generated ad texts and the quality of each ad text.

2.3.1 Ad Diversity

Diversity is commonly discussed in terms of two main aspects: surface diversity (how to say it?) and semantic diversity (what to say?) (Murakami et al., 2023). In advertising, product details must be fixed. Therefore, we do not consider semantic diversity and evaluate only the surface diversity among the outputs in this study. We use the pairwise-BLEU similarity measure (Papineni et al., 2002), ranging from 0 to 1 and based on n-gram matching, to evaluate the output diversity.
Specifically, the diversity is calculated as 1 − pairwise-BLEU.

2.3.2 Ad Quality

We evaluate ad quality based on three representative metrics defined by Zhang et al. (2024): ad performance, consistency, and acceptability. The average quality of the five generated outputs is considered the ad quality for one input.

Ad Performance: Following Mita et al. (2024), this study simulates customer behavior through click-through rates (CTR) from past distribution history to measure ad performance.¹ In the experiments, we report the performance ratio of the generated ads to the reference human-written ads (generated/reference) included in the CAMERA dataset.

Ad Consistency: Generating ad texts with different ad content from the input potentially damages advertisers. For instance, converting the information "paid" to "free" can be false advertising. We evaluate the consistency between the ad content and the generated ad text using BERTScore (Zhang et al., 2020).

Ad Acceptability: Ad platforms often impose length restrictions. We determine acceptability based on whether the text fits within 15 full-width or 30 half-width characters.²

3 Experiment

In this experiment, we used calm3-22b-chat (calm3; Ishigami, 2024), Llama-3-ELYZA-JP-8B (ELYZA; Hirakawa et al., 2024), Mistral-Small-24B-Instruct-2501 (Mistral; Mistral AI team, 2025), Llama-3.1-Swallow-70B-Instruct-v0.3 (Swallow; Fujii et al., 2024; Okazaki et al., 2024; Ma et al., 2025), and GPT-4o (Hurst et al., 2024). Hereinafter, we describe only
calm3's results; those of the other models are given in Appendix B.

3.1 Diversity vs. Ad Quality Across Diversity-Enhancing Methods

As shown in Figure 2, we found a trade-off between diversity and ad quality, with each value representing the average across the dataset. This suggests that the nature of ad generation differs from that of tasks such as dialogue and story generation (Meister et al., 2022). This difference may be because dialogue and story generation are more creative and dynamic. Compared with beam search, diverse beam search improved diversity while maintaining advertising performance and acceptability. Similar results were observed when the prompts or models were varied (see Appendices B and C). An exception was GPT-4o, which balanced improvements in diversity and acceptability. The human evaluation results were consistent with these results (see Appendix D).

¹We utilize 極予測TD (Kiwami Yosoku TD), the CTR prediction model. The prediction aligns with CTR. cf. https://cyberagent.ai/products/
²These are general restrictions on Japanese ad text on ad platforms such as Google Ads. cf. https://support.google.com/google-ads/answer/1704389?hl=ja

Figure 2: This figure shows the relationship between diversity and ad quality in ad text generation. Ad performance is the ratio of the performance of the generated ad text to the reference human-written ad text (generated/reference). Ad consistency is the BERTScore, whereas ad acceptability is the percentage that meets the sentence length constraint. The upper right of the figure indicates higher diversity and ad quality.
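The diversity axis in these comparisons is 1 − pairwise-BLEU (§2.3.1). The following is a self-contained sketch of that metric using a deliberately tiny smoothed bigram BLEU; the function names, smoothing, and toy token lists are illustrative, not the paper's exact BLEU configuration.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Tiny add-one-smoothed BLEU (up to bigrams), for illustration only."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())          # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return brevity * geo_mean

def pairwise_diversity(outputs):
    """Diversity = 1 - average BLEU over all ordered pairs of outputs."""
    pairs = [(a, b) for i, a in enumerate(outputs)
             for j, b in enumerate(outputs) if i != j]
    avg_bleu = sum(bleu(a, b) for a, b in pairs) / len(pairs)
    return 1 - avg_bleu

identical = [["book", "now"], ["book", "now"]]
varied = [["book", "now"], ["find", "a", "great", "inn"]]
print(pairwise_diversity(identical) < pairwise_diversity(varied))  # True
```

Identical outputs score diversity 0, while surface-distinct outputs score close to 1, matching the interpretation of the diversity axis in Figure 2 and Tables 1–5.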
3.2 Changing Parameters of Diversity-Enhancing Methods and Their Effects on Diversity and Ad Quality

The relationship between diversity and ad quality when varying the parameters of typical diversity-enhancing methods, such as sampling and beam search, is summarized in Table 1. Sampling exhibited a trade-off. Beam search improved performance and consistency up to a certain beam width, unlike previous studies on MT (Koehn and Knowles, 2017). This may be because ad generation produces shorter text than MT. On the other hand, increasing the beam width of diverse beam search beyond five did not enhance its results.

Table 1: The diversity and ad quality when varying the parameters of diversity-enhancing methods. Diversity, Ad Performance, Ad Consistency, and Ad Acceptability are denoted as Div., Per., Con., and Accept., respectively. The same notation is used in the following tables. We varied the p-value (P) and temperature (T) for sampling, the beam width (W) for beam search, and both the beam width (W) and group number (G) for diverse beam search.

sampling
param.        Div.   Per.   Con.   Accept.
P=0.5, T=1    0.515  0.981  0.832  0.940
P=1, T=1      0.890  0.974  0.808  0.909
P=1, T=1.5    0.967  0.967  0.773  0.809
P=1, T=2      0.991  0.963  0.740  0.619

beam search
param.        Div.   Per.   Con.   Accept.
W = 5         0.682  0.979  0.803  0.937
W = 7         0.697  0.980  0.832  0.939
W = 10        0.702  0.980  0.831  0.932
W = 12        0.699  0.980  0.832  0.933

diverse beam search
param.        Div.   Per.   Con.   Accept.
W, G = 5      0.824  0.981  0.826  0.947
W, G = 7      0.814  0.980  0.825  0.941
W, G = 10     0.762  0.980  0.826  0.927
W, G = 12     0.739  0.980  0.827  0.913

3.3 Effect of Number of Shots on Diversity and Ad Quality

Table 2 lists the effects of changing the number of few-shot examples. As the diverse beam search results were akin to those of beam search, we describe them in Appendix F. In sampling, diversity did not improve with an increasing number of shots. In contrast, in beam search and diverse beam search, diversity improved as the number of shots increased. This result is likely because seeing diverse examples led to changes such that the vocabulary with high output probability becomes more diverse.

Table 2: Diversity and ad quality when varying the number of shots.

sampling
shot num   Div.   Per.   Con.   Accept.
3          0.890  0.974  0.808  0.909
9          0.872  0.985  0.822  0.975
15         0.876  0.987  0.825  0.982

beam search
shot num   Div.   Per.   Con.   Accept.
3          0.682  0.979  0.830  0.937
9          0.738  0.989  0.845  0.982
15         0.747  0.992  0.849  0.987

3.4 Effects of Output Number on Diversity and Ad Quality

We investigated the diversity and ad quality when varying the number of outputs from five. The results are listed in Table 3. Contrary to our intuition, consistent diversity and ad quality were maintained in sampling, irrespective of the output number. In contrast, beam search improved diversity with more outputs but diminished quality, likely owing to the low quality of low-probability candidate texts.

Table 3: Diversity and ad quality when varying the output number.

sampling
output num   Div.   Per.   Con.   Accept.
2            0.890  0.975  0.807  0.900
5            0.890  0.974  0.808  0.909
10           0.890  0.975  0.807  0.902

beam search
output num   Div.   Per.   Con.   Accept.
2            0.630  0.981  0.836  0.937
5            0.699  0.900  0.832  0.933
10           0.737  0.979  0.829  0.925
Diverse beam search exhibited trends similar to those of beam search (§F).

3.5 Effect of Changing Output Strategy

The settings for generating multiple texts are not limited to the repeated-inference approach illustrated in Figure 1; multiple sentences can also be generated consecutively within a single inference step. We call the former one-at-once and the latter all-at-once, respectively. Table 8 shows the actual prompt used in this method.

Table 4: Diversity and ad quality of sampling in all-at-once and one-at-once settings.

strategy      Div.   Per.   Con.   Accept.
one-at-once   0.876  0.987  0.825  0.982
all-at-once   0.900  0.984  0.821  0.968

Table 5: Diversity and ad quality with multiple models.

Model      Div.   Per.   Con.   Accept.
calm3      0.890  0.974  0.808  0.909
ELYZA      0.877  0.970  0.781  0.864
Mistral    0.886  0.984  0.833  0.877
Swallow    0.868  0.962  0.795  0.503
GPT-4o     0.777  0.990  0.823  0.992
5 models   0.929  0.975  0.808  0.827

Beam search and other methods are strictly different techniques when their output strategies differ. Therefore, only the sampling results for the two output strategies are summarized in Table 4. all-at-once exhibited
improved diversity but worsened quality compared with one-at-once, indicating a trade-off. This diversity improvement could be because all-at-once can output ad text while reviewing its previous outputs.

3.6 Diverse Outputs via Multiple Models

We compared generating five texts with a single model to generating one text with each of the five models described above (five models). The results are summarized in Table 5. Although the ad quality of the five models averaged each model's performance, diversity reached its highest value. This result suggests that generation from multiple models is a promising strategy for producing more diverse advertising texts.

4 Conclusion

In this study, we revealed that improving diversity often involves compromises across multiple ad quality metrics, and that sampling and beam search behaved differently depending on the number of few-shot examples and outputs. These findings highlight the need for careful use of diversity-enhancing methods in ad text generation. Moreover, we provide a promising direction for enhancing diversity while maintaining ad quality: combining the outputs of a set of different models. We believe that these insights will support future research in advancing both the diversity and quality of ad text generation.

Limitations

We do not cover all methods for changing output diversity. Nevertheless, the widely used sampling and beam search methods, along with state-of-the-art MBR decoding, have been employed to provide valuable insights into the relationship between diversity and ad quality. Our analyses are based on Japanese ad texts. Some features, such as character types, are unique to the Japanese language, and we do not intend to apply our analysis results to all languages. However, we believe that other languages such as English and Chinese also have unique linguistic features.
In this study, we focused on Japanese ad texts; however, we hope that this work will pave the way for similar developments in other languages.

Ethics Statement

This study does not raise particular ethical concerns, considering that (i) no human experiments are conducted and (ii) our tasks do not involve ethically sensitive topics.

Acknowledgments

This work was supported by JST SPRING Grant Number JPMJSP2114. This work was written independently, with minor phrasing assistance from a large language model (ChatGPT).

References

David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for Boltzmann machines. Cogn. Sci., 9(1):147–169.

Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 889–898. Association for Computational Linguistics.

Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hiroki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, and Naoaki Okazaki. 2024. Continual pre-training for cross-lingual LLM adaptation: Enhancing Japanese language capabilities. In Proceedings of the First Conference on Language Modeling, COLM, page (to appear), University of Pennsylvania, USA.

Konstantin Golobokov, Junyi Chai, Victor Ye Dong, Mandy Gu, Bingyu Chi, Jie Cao, Yulan Yan, and Yi Liu. 2022. DeepGen:
Diverse search ad generation and real-time customization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - System Demonstrations, Abu Dhabi, UAE, December 7-11, 2022, pages 191–199. Association for Computational Linguistics.

Alex Graves. 2012. Sequence transduction with recurrent neural networks. CoRR, abs/1211.3711.

Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura, Daisuke Oba, Sam Passaglia, and Akira Sasaki. 2024. elyza/llama-3-elyza-jp-8b.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

J. Weston Hughes, Keng-hao Chang, and Ruofei Zhang. 2019. Generating better search engine text advertisements with deep reinforcement learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pages 2269–2277. ACM.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, and 79 others. 2024. GPT-4o system card. CoRR, abs/2410.21276.

Ryosuke Ishigami. 2024. cyberagent/calm3-22b-chat.

Yuu Jinnai, Ukyo Honda, Tetsuro Morimura, and Peinan Zhang. 2024. Generating diverse and high-quality texts by minimum Bayes risk decoding. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 8494–8525. Association for Computational Linguistics.

Hidetaka Kamigaito, Peinan Zhang, Hiroya Takamura, and Manabu Okumura. 2021. An empirical study of generating texts for search engine advertising.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, NAACL-HLT 2021, Online, June 6-11, 2021, pages 255–262. Association for Computational Linguistics.

Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, NMT@ACL 2017, Vancouver, Canada, August 4, 2017, pages 28–39. Association for Computational Linguistics.

Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation. CoRR, abs/1611.08562.

Bruce T. Lowerre. 1976. The Harpy speech recognition system. Ph.D. thesis, Carnegie Mellon University, USA. AAI7619331.

Youmi Ma, Sakae Mizuki, Kazuki Fujii, Taishi Nakamura, Masanari Ohi, Hinari Shimada, Taihei Shiotani, Koshiro Saito, Koki Maeda, Kakeru Hattori, Takumi Okamoto, Shigeki Ishida, Rio Yokota, Hiroya Takamura, and Naoaki Okazaki. 2025. Building instruction-tuning datasets from human-written instructions with open-weight large language models. Preprint, arXiv:2503.23714.

Clara Meister, Gian Wiher, and Ryan Cotterell. 2022. On decoding strategies for neural text generators. Trans. Assoc. Comput. Linguistics, 10:997–1012.

Mistral AI team. 2025. Mistral Small 3. Mistral AI News.

Masato Mita, Soichiro Murakami, Akihiko Kato, and Peinan Zhang. 2024. Striking gold in advertising: Standardization and exploration of ad text generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 955–972. Association for Computational Linguistics.

Soichiro Murakami, Sho Hoshino, and Peinan Zhang. 2023. Natural language generation for advertising: A survey. CoRR, abs/2306.12719.

Soichiro Murakami, Peinan Zhang, Hidetaka Kamigaito, Hiroya Takamura, and Manabu Okumura. 2025. AdParaphrase: Paraphrase dataset for analyzing linguistic features toward generating attractive ad texts. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 1426–1439, Albuquerque, New Mexico. Association for Computational Linguistics.

Naoaki Okazaki, Kakeru Hattori, Hirai Shota, Hiroki Iida, Masanari Ohi, Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Rio Yokota, and Sakae Mizuki. 2024. Building a large Japanese web corpus for large language models. In Proceedings of the First Conference on Language Modeling, COLM, page (to appear), University of Pennsylvania, USA.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL.

Karl Pearson and Francis Galton. 1895. VII. Note on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58(347-352):240–242.

Cornelia Pechmann and David W. Stewart. 1988. Advertising Repetition: A Critical Review of Wearin and Wearout. Current Issues and Research in Advertising, 11(1-2):285–329.

Alexander M. Rush, Yin-Wen Chang, and Michael Collins. 2013. Optimal beam search for machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 210–221. ACL.
Susanne Schmidt and Martin Eisend. 2015. Advertising Repetition: A Meta-Analysis on Effective Frequency in Advertising. Journal of Advertising, 44(4):415–428.

Sina Zarrieß, Henrik Voigt, and Simeon Schüz. 2021. Decoding methods in neural language generation: A survey. Inf., 12(9):355.

Peinan Zhang, Yusuke Sakai, Masato Mita, Hiroki Ouchi, and Taro Watanabe. 2024. AdTEC: A unified benchmark for evaluating text quality in search engine advertising. CoRR, abs/2408.05906.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

A Detailed Experiment Settings

We used 798 ad texts from the CAMERA dataset, each containing up to 30 half-width characters or 15 full-width characters. We used the dataset under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (https://creativecommons.org/licenses/by-nc-sa/4.0/). In our experiments, we utilized the open models (calm3, ELYZA, Mistral, and Swallow) from HuggingFace. As for the closed model (GPT-4o), we accessed it via the Python library for the OpenAI API. The model inputs are listed in Table 7. Section 3.5 used the prompt in Table 8.

The sampling methods performed inference five times and output one ad text each time. In contrast, the beam search approach performed inference once and output the top five candidates obtained. The parameters for each diversity-enhancing method used in the experiment are listed in Table 6. Note that in § 3.4, the beam search and diverse beam search had a fixed beam width of 15. We used NVIDIA A100 (80GB) GPUs for the generation.

Table 6: Parameters of diversity-enhancing methods

  method                parameters
  sampling              top_p=1.0, temperature=1.0
  DMBR                  div_pen=1.0
  beam search           num_beams=5
  diverse beam search   num_beams=5, num_beam_groups=5, diversity_penalty=1.0

B Results of Other Models

The results for Llama-3-ELYZA-JP-8B (ELYZA), Mistral-Small-24B-Instruct-2501 (Mistral), Llama-3.1-Swallow-70B-Instruct-v0.3 (Swallow), and the gpt-4o-2024-08-06 snapshot of OpenAI's GPT-4o (GPT-4o) are shown in Figures 3, 4, 5, and 6, respectively. We consistently observed a trade-off between diversity and ad quality across multiple models. However, only GPT-4o balanced the enhancements in diversity and acceptability.

C Results Upon Changing the Prompt

We examined the relationship between diversity and quality when varying the prompts, including (1) switching the order of few-shot examples and (2) altering explanations. The prompts for (2) are listed in Table 9. The results are shown in Figures 7 and 8. Even when the prompts changed, a trade-off consistently existed between diversity and ad quality.

D Consistency With Human and Automatic Evaluation

We evaluated the alignment between the automatic metrics and the human judgment used in our experiments. Five annotators performed a human evaluation of 1,000 generated advertising texts. For ad performance, the annotators selected the more attractive text from the reference and generated versions. Although ad performance and attractiveness are not identical, the agreement rate between attractiveness as evaluated by humans and ad performance measured by automatic metrics was 61%.
For consistency, the annotators determined whether the generated text preserved the semantics of the input. Because consistency is not a relative evaluation, we report the Pearson correlation (Pearson and Galton, 1895) between the human and automatic scores, which was 0.55, indicating a moderate correlation.

The human evaluation was performed via Yahoo! Crowdsourcing, with a total compensation of 5,500 yen. The annotators received the instructions listed in Tables 10 and 11, and provided consent for the results of their evaluations to be used in this study.

E Changing Parameters of DMBR and Their Effects on Diversity and Ad Quality

We show the relationship between diversity and ad quality when varying the parameters of DMBR in Table 12. Similar to sampling, DMBR exhibited a trade-off.

  You are a professional ad copywriter. You are responsible for creating search-linked ads. Please rephrase the provided ad copy according to the following conditions.
  # Conditions
  - Write within 15 full-width characters.
  - Do not add new information to or remove existing information from the ad copy.
  - Below are examples of rephrasing. Please use these examples as a reference to diversify your rephrasing.
  Input: Personalized tutoring for strong performance in junior high exams # 中学受験に強い個別指導塾
  Output: Aim for junior high success with personal tutoring # 個別塾で中学合格を目指す
  Input: Earn high income with side jobs in Kichijoji # 高収入を得る吉祥寺の副業で稼ぐ
  Output: Make money with Kichijoji
side jobs # 吉祥寺の副業でお金儲け
  Input: Corporate-focused [Immediate Strength Recruitment] # 法人向け【即戦力採用】
  Output: Seeking immediate strength! Corporate recruitment # 即戦力求む!法人採用
  Input: Ad content

Table 7: Model input for ad text generation. For visibility, the Japanese prompt is translated into English. On the right side of each input–output example, the actual input–output examples are shown. Subsequent prompts are also presented in English.

Figure 3: Relationship between diversity and ad quality in ad text generation with ELYZA. [Scatter plots of Diversity against Ad Performance, Ad Consistency, and Ad Acceptability for Sampling, Beam Search, Diverse Beam Search, DMBR, and KMBR, with a best-fit line; axis values omitted.]

Table 12: Diversity and ad quality of sampling when varying the parameter of DMBR. We vary the diversity penalty (D) for DMBR.

  param.    Div.    Per.    Con.    Accept.
  D = 0.1   0.767   0.978   0.826   0.922
  D = 0.5   0.927   0.974   0.802   0.904
  D = 1     0.954   0.973   0.792   0.900
  D = 2     0.957   0.973   0.790   0.897

F Results of Diverse Beam Search

The relationship between diversity and ad quality when varying few-shot numbers and output numbers is summarized in Tables 13 and 14. Overall, the results mirrored those of beam search. Increasing the number of shots increased diversity. In contrast, increasing the number of outputs improved diversity but reduced quality.

Table 13: Diversity and ad quality of diverse beam search when varying the few-shot number

  shot num   Div.    Per.    Con.    Accept.
  3          0.824   0.981   0.826   0.947
  9          0.832   0.990   0.838   0.987
  15         0.830   0.991   0.842   0.991

  output num   Div.    Per.    Con.    Accept.
  2            0.781   0.981   0.829   0.932
  5            0.736   0.979   0.825   0.905
  10           0.801   0.977   0.819   0.878

Table 14: Diversity and ad quality of diverse beam search when varying the output number

G Sampling in an All-at-Once Setting

This study also measured the diversity and ad quality of sampling by varying the parameters of the diversity-enhancing methods in an all-at-once setting. The results are presented in Table 15. The results when varying the few-shot numbers and parameters were similar in the all-at-once and one-at-once settings.

  You are a professional copywriter specializing in search-linked advertisements. Paraphrase the provided ad text according to the following conditions.
  # Conditions
  - Write within 15 full-width characters
  - Do not add new information or remove existing information from the ad text
  - Below are examples of paraphrasing. Use them as a reference to variably paraphrase the ad text.
  Input: Personalized tutoring for strong performance in junior high exams # 中学受験に強い個別指導塾
  Output1: Aim for junior high success with personal tutoring # 個別塾で中学合格を目指す
  Output2: The road to junior high success, tutoring # 中学合格の道、個別塾
  Output3: Tutoring for junior high exam preparation # 個別塾、中学試験対策
  Output4: Personal tutoring for junior high success! # 個別で中学合格へ!
  Output5: Personalized tutoring for junior high entrance exams # 中学入試対策の個別塾
  Input: Earn high income with side jobs in Kichijoji # 高収入を得る吉祥寺の副業で稼ぐ
  Output1: Make money with Kichijoji side jobs # 吉祥寺の副業でお金儲け
  Output2: Side income in Kichijoji # 吉祥寺で副業収入
  Output3: High-paying side job in Kichijoji # 高収入の吉祥寺副業
  Output4: Earn in a side job in Kichijoji # 副業で稼ぐin吉祥寺
  Output5: High-paying side job @ Kichijoji # 高収入副業@吉祥寺
  Input: Corporate-focused [Immediate Strength Recruitment] # 法人向け【即戦力採用】
  Output1: Seeking immediate strength! Corporate recruitment # 即戦力求む!法人採用
  Output2: Immediate impact with corporate hiring # 即戦力で法人採用
  Output3: Immediate impact personnel for corporate # 法人向けの人材即戦力
  Output4: Corporate immediate impact hiring # 法人用即戦力採用
  Output5: Immediate impact hiring for companies # 企業向け即戦力採用
  Input: Ad content

Table 8: Model input for ad text generation in an all-at-once setting.

Table 15: Diversity and ad quality of sampling in an all-at-once setting when varying the parameter of diversity-enhancing methods

  parameter       Div.    Per.    Con.    Accept.
  P=0.5, T=1      0.753   0.990   0.843   0.979
  P=1,   T=1      0.900   0.984   0.821   0.968
  P=1,   T=1.5    0.960   0.976   0.787   0.908
  P=1,   T=2      0.988   0.969   0.746   0.724

H Usage of AI Assistants

We used AI assistants (e.g., GPT-4o and GitHub Copilot) to write this paper and provide the source code for the experiments. However, their use was limited to code completion, translation, text editing, and table creation, and all the content was based solely on the authors' ideas.
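Table 15 varies the nucleus-sampling parameters top_p (P) and temperature (T). As a minimal pure-Python sketch of how these two knobs interact (the vocabulary and logits are invented, not the paper's models):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=random):
    """Temperature + nucleus (top-p) sampling over a toy vocabulary.

    Mirrors the P/T parameters varied in Table 15; logits are illustrative.
    """
    # Temperature rescales logits before the softmax: T > 1 flattens the
    # distribution (more diverse), T < 1 sharpens it (less diverse).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Nucleus sampling keeps the smallest prefix of tokens (sorted by
    # probability) whose cumulative mass reaches top_p, then renormalizes.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    r, acc = rng.random() * total, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

This matches the trend in Table 15: raising T (with P fixed at 1) spreads probability mass over more tokens, increasing diversity at the cost of quality, while a small P truncates the candidate pool.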
Figure 4: Relationship between diversity and ad quality in ad text generation with Mistral. [Scatter plots of Diversity against Ad Performance, Ad Consistency, and Ad Acceptability for Sampling, Beam Search, Diverse Beam Search, DMBR, and KMBR, with a best-fit line; axis values omitted.]

Figure 5: Relationship between diversity and ad quality in ad text generation with Swallow. [Same panel layout as Figure 4.]

Figure 6: Relationship between diversity and ad quality in ad text generation with GPT-4o. [Same panel layout, for Sampling, DMBR, and KMBR.]

Figure 7: Diversity and ad quality of sampling when varying the order of few-shot examples. [Same panel layout as Figure 4.]

  You are an advertising expert. You have been tasked with creating listing ads. Based on the conditions below, rephrase the provided advertising text.
  # Conditions
  - Keep it within 15 full-width characters
  - Do not add new information or remove included information from the ad text
  - Rephrase the ad text diversely using the examples below as references.
  Input: Personalized tutoring for strong performance in junior high exams # 中学受験に強い個別指導塾
  Output: Aim for junior high success with personal tutoring # 個別塾で中学合格を目指す
  Input: Earn high income with side jobs in Kichijoji # 高収入を得る吉祥寺の副業で稼ぐ
  Output: Make money with Kichijoji side jobs # 吉祥寺の副業でお金儲け
  Input: Corporate-focused [Immediate Strength Recruitment] # 法人向け【即戦力採用】
  Output: Seeking immediate strength! Corporate recruitment # 即戦力求む!法人採用
  Input: Ad content

Table 9: Model input when varying the prompt.
Figure 8: Diversity and ad quality of sampling when varying the instruction. [Scatter plots of Diversity against Ad Performance, Ad Consistency, and Ad Acceptability; axis values omitted.]

  Please select the more **attractive** ad text. If there is no difference in impression between the two ad texts, please click "No Difference".
  【Evaluation Criteria】
  - Do you want to click on it?
  - Is it eye-catching?
  - Is it easy to read?
  【Question】
  Please select the more **attractive** ad text.
  - {Ad Text 1}
  - {Ad Text 2}
  - No Difference

Table 10: Instruction for human evaluation of attractiveness. For visibility, the Japanese instruction is translated into English.

  【Question】
  Please determine whether the information contained in the following two advertisements is **equivalent or not**.
  - {Ad Text 1}
  - {Ad Text 2}
  【Evaluation Criteria】
  - Focus on differences in the information conveyed, not on variations in vocabulary or word order.
  - Even if certain information is not explicitly stated, information that can be inferred based on common sense is considered to be included in the ad text.
  【Options】
  - Equivalent
  - Not Equivalent

Table 11: Instruction for human evaluation of ad consistency. For visibility, the Japanese instruction is translated into English.
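Appendix D reports a Pearson correlation of 0.55 between the human consistency judgments (collected with the instructions in Tables 10 and 11) and the automatic scores. For reference, a minimal implementation of Pearson's r, with invented toy scores:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented human vs. automatic consistency scores, for illustration only.
human = [1.0, 0.0, 1.0, 1.0, 0.0]
auto = [0.9, 0.2, 0.7, 0.8, 0.4]
r = pearson_r(human, auto)
```

A value near 1 indicates strong linear agreement; the paper's 0.55 sits in the moderate range, as stated in Appendix D.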
arXiv:2505.16421v1 [cs.CL] 22 May 2025

WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning

Zhepei Wei†*, Wenlin Yao‡, Yao Liu‡, Weizhi Zhang‡, Qin Lu‡, Liang Qiu‡, Changlong Yu‡, Puyang Xu‡, Chao Zhang§, Bing Yin‡, Hyokun Yun‡, Lihong Li‡
†University of Virginia  ‡Amazon  §Georgia Institute of Technology
zhepei.wei@virginia.edu, chaozhang@gatech.edu
{ywenlin, yaoliuai, zhweizhi, luqn, liangqxx, changlyu, puyax, alexbyin, yunhyoku, llh}@amazon.com

Abstract

While reinforcement learning (RL) has demonstrated remarkable success in enhancing large language models (LLMs), it has primarily focused on single-turn tasks such as solving math problems. Training effective web agents for multi-turn interactions remains challenging due to the complexity of long-horizon decision-making across dynamic web interfaces. In this work, we present WebAgent-R1, a simple yet effective end-to-end multi-turn RL framework for training web agents. It learns directly from online interactions with web environments by asynchronously generating diverse trajectories, entirely guided by binary rewards depending on task success. Experiments on the WebArena-Lite benchmark demonstrate the effectiveness of WebAgent-R1, boosting the task success rate of Qwen-2.5-3B from 6.1% to 33.9% and Llama-3.1-8B from 8.5% to 44.8%, significantly outperforming existing state-of-the-art methods and strong proprietary models such as OpenAI o3. In-depth analyses reveal the effectiveness of the thinking-based prompting strategy and test-time scaling through increased interactions for web tasks.
We further investigate different RL initialization policies by introducing two variants, namely WebAgent-R1-Zero and WebAgent-R1-CoT, which highlight the importance of the warm-up training stage (i.e., behavior cloning) and provide insights on incorporating long chain-of-thought (CoT) reasoning in web agents.¹

*Work done during internship at Amazon.
¹Code and artifacts are available at https://github.com/weizhepei/WebAgent-R1

1 Introduction

Reinforcement learning (RL) has emerged as a promising approach for training large language models (LLMs), as exemplified by recent advances such as DeepSeek-R1 (Guo et al., 2025; Team et al., 2025; Yang et al., 2025a). However, existing works have primarily focused on single-turn, non-interactive tasks such as mathematical reasoning (Shao et al., 2024; Zeng et al., 2025). Their effectiveness in multi-turn, interactive environments—particularly in complex scenarios requiring long-horizon decision-making and domain-specific skills, such as web browsing (Zhou et al., 2024a; He et al., 2024a; Chae et al., 2025)—still remains underexplored.

Unlike static environments, web tasks pose unique challenges for LLM agents due to their dynamic nature and diverse solution spaces. Early works on web agents primarily relied on prompting-based methods (Wang et al., 2024b; Sodhi et al., 2024; Fu et al., 2024; Zhang et al., 2025; Yang et al., 2025b) or behavior cloning (BC), which imitates demonstrated trajectories via supervised fine-tuning (Yin et al., 2024; Hong et al., 2024; Lai et al., 2024; He et al., 2024b; Putta et al., 2024). Despite their initial success, these methods lack the ability to explore diverse strategies or learn from trial and error, limiting the generalizability of web agents. To address this issue, recent works explored applying RL for better policy training.
However, most of this line of research has heavily relied on offline or iterative off-policy RL solutions (Peng et al., 2019;
Pan et al., 2024; Qi et al., 2025), which break the end-to-end interaction between the web agent and environment, and introduce additional complexities such as trajectory filtering (Bai et al., 2024), outcome reward model training (Qi et al., 2025), or iterative optimization procedures (Zhou et al., 2024b). These constraints hinder their practicality for real-world deployment.

Meanwhile, several concurrent works have explored end-to-end RL with on-policy updates for training LLM agents in multi-turn interactive scenarios, such as simulated games and coding environments (Wang et al., 2025; Cao et al., 2025). Unlike off-policy RL, which trains on data generated by older versions of the agent, on-policy RL collects training data directly from the agent's current behavior. This ensures that the learning process is better aligned with the agent's most recent actions, often leading to more stable and effective learning (Schulman et al., 2015, 2017). It also eliminates the need for additional overheads in off-policy RL (e.g., maintaining a replay buffer and filtering outdated trajectories), and enables the agent to behave adaptively based on its own past decisions—a key advantage in interactive environments where early decisions can significantly affect next steps.

These benefits are particularly desirable in online web environments, which often involve complex interplay between tasks due to dynamic changes of the environment. For instance, consider a situation where the agent is first tasked to log out of a user account and then to edit the user's profile. These tasks are inherently interdependent: once the agent logs out, it loses access to the profile page. If the agent is trained using off-policy data collected from an earlier version that never logged out, it has no opportunity to learn the login behavior and may incorrectly assume continued access and generate invalid actions, ultimately leading to task failure.
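The logout example above can be made concrete with a toy environment: an observation recorded while logged in no longer describes what the current policy would see after logging out, which is exactly the staleness that on-policy training avoids. A schematic sketch (the environment, pages, and actions are invented stand-ins, not the paper's WebArena setup):

```python
class ToyWebEnv:
    """Toy stand-in for a web environment with a login-dependent page."""

    def __init__(self):
        self.logged_in = True

    def step(self, action):
        # Logging out changes the environment state for all later steps.
        if action == "logout":
            self.logged_in = False
            return "logged_out_page"
        if action == "edit_profile":
            return "profile_page" if self.logged_in else "error_page"
        return "home_page"

env = ToyWebEnv()
# Off-policy pitfall: an observation recorded before logging out...
stale_observation = env.step("edit_profile")  # "profile_page"
env.step("logout")
# ...no longer matches what the current policy actually observes afterwards.
fresh_observation = env.step("edit_profile")  # "error_page"
```

On-policy rollouts regenerate trajectories against the current (logged-out) state, so the agent is trained on the "error_page" observation it would actually encounter rather than on the stale "profile_page" one.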
End-to-end RL helps avoid such pitfalls by allowing the agent to learn proper behaviors in response to environmental state changes on the fly. In light of this, we propose WebAgent-R1, an end-to-end multi-turn RL framework for training web agents. Specifically, our design addresses several key challenges in this setting. First, at each step, the environmental observation (e.g., HTML content) can span thousands of tokens, causing the accumulated context over long horizons to incur substantial memory overheads. To mitigate this, we introduce a dynamic context compression mechanism, which adaptively adjusts the contexts across turns, ensuring scalability and preventing out-of-memory issues. Second, existing RL solutions for LLM agents are not well-suited for multi-turn scenarios. Inspired by group relative policy optimization (GRPO) (Shao et al., 2024), we extend it to multi-turn settings (M-GRPO) and employ an asynchronous trajectory rollout strategy to further improve training efficiency by generating multiple trajectories in parallel. These designs enable efficient RL training and lead to state-of-the-art performance on the WebArena-Lite benchmark, as shown in Figure 1. Extensive ablations further validate our key design choices, reveal an effective test-time scaling strategy for web tasks, and offer insights into the roles of behavior cloning and long CoT reasoning in RL-based
web agent training.

Figure 1: Comparison between existing methods and our WebAgent-R1 on the WebArena-Lite benchmark. Our method outperforms both strong prompting-based and finetuned baselines, achieving superior performance across various model sizes. [Bar chart of success rate (%) by model size (3B, 8B, 32B, unknown), covering prompting baselines (Qwen2.5-3B, Llama3.1-8B, Qwen2.5-32B, QwQ-32B, GPT-4o, GPT-4o-Turbo, OpenAI-o3, OpenAI-o4-mini), finetuned baselines (Filtered BC, AWR, DigiRL, WebRL on Llama3.1-8B; SFT on Qwen2.5-3B and Llama3.1-8B), and WebAgent-R1 (Qwen2.5-3B, Llama3.1-8B).]

Our contributions are summarized as follows:

• We implement an end-to-end multi-turn RL framework for training web agents, with dynamic context compression and asynchronous trajectory rollout mechanisms to achieve training efficiency.
• Based on the proposed M-GRPO algorithm, our method substantially improves task success rates of web agents—boosting Qwen-2.5-3B from 6.1% to 33.9% and Llama-3.1-8B from 8.5% to 44.8%—surpassing previous state-of-the-art results on the WebArena-Lite benchmark.
• Extensive analyses and ablation studies underscore the crucial role of behavior cloning, validate the effectiveness of thinking-based prompting and test-time scaling strategies, and provide actionable insights on incorporating long-CoT reasoning in web agents.

2 WebAgent-R1

2.1 Problem Formulation

We formulate the web task as a Partially Observable Markov Decision Process (POMDP), defined by the tuple (S, A, T, R). At each time step t, the agent first observes a state s_t ∈ S from the environment E, represented as the text-only HTML content of the current web page. Then, it generates an action a_t from a predefined action space A, which includes commonly used web operations. The environment dynamics T(s_{t+1} | s_t, a_t) represent how the web page changes in response to actions. The
The 2 Web: [Task Instruction] + [HTML content] Agent: <think> thinking process </think> <answer> do(‘Scroll Down’) </answer> Web: [HTML content] Agent: <think> thinking process </think> <answer> do(‘Click’, element=‘24’) </answer> Web: [HTML content] CMS Reddit Gitlab Map ShoppingEnvironment Click Type Select Enter Switch Search Wait Exit… Scroll up/downAction Space TaskAsync Trajectory Rollout Reward History ObservationDynamic Context Compression AgentWebActionsObservationsRewardsWebAgent-R1<think> … </think> <answer> … </answer> String MatchReward FunctionsHistory Action Agent-Web Multi-turn InteractionInputOutputURL MatchProgram ExecutionMulti-turn GRPO Step 1Step 2Step 3Step 4Env CorrectWrongStopped Observation 1Action 1Observation k-1Observation kAction k-1 … Agent: <think> thinking process </think> <answer> exit(message=‘Task Completed’) </answer> Figure 2: ( Top): Overview of the end-to-end multi-turn RL training framework used in WEBAGENT -R1. (Bottom ): An input/output example of agent–web interaction at the k-th step. The interaction continues until either the maximum number of steps is reached or the agent generates an exit() action to signal task completion. agent interacts with the environment until either the task is successfully completed or the maximum number of steps is reached. At the end, the agent receives a binary outcome reward rt∈ {0,1}from reward functions R. Following prior work (Qi et al., 2025), we adopt WebArena (Zhou et al., 2024a) as the web environment over other simulated or static envi- ronments such as WebShop (Yao et al., 2022) or Mind2Web (Deng et al., 2023) for greater practicality—It provides a realistic, self-hostable environment for web agents, along with rule-based rubrics that automatically check for indicators of success in the final state ( e.g., confirmation mes- sages or expected content on the page). Note that some prior works (Liu et al., 2025; He et al., 2024a) incorporate web | https://arxiv.org/abs/2505.16421v1 |
page screenshots as additional visual inputs, whereas our work focuses solely on text-based decision-making over HTML. Other efforts, such as Yang et al. (2025b), explore optimizing the action space or prompt design without model fine-tuning. These directions are orthogonal to our investigated problem and can be conceptually integrated with our method as future work.

2.2 Behavior Cloning

To initialize the web agent, we first apply behavior cloning (BC) using a fixed dataset of expert demonstrations D = {(h_t, a_t)}, where h_t denotes the full interaction history up to time step t, defined as h_t = (s_1, a_1, s_2, a_2, ..., s_t). The policy π_θ is trained via supervised fine-tuning (SFT) to imitate expert actions conditioned on this history:

$$\mathcal{L}_{\text{BC}} = -\mathbb{E}_{(h_t, a_t) \sim \mathcal{D}}\left[\log \pi_\theta(a_t \mid h_t)\right]$$

This warm-up stage enables the agent to acquire basic web interaction skills defined in the action space. As indicated in our ablation study (§3.4), this BC-trained policy provides a crucial foundation for subsequent reinforcement learning optimization.

Table 1: Comparison of different methods for training web agents. Trial-and-Error indicates whether the method supports learning through interactions with the environment (i.e., reinforcement learning). On-Policy denotes whether the training data is collected from the current policy. Replay Buffer Free indicates methods that do not require selectively sampling trajectories from a replay buffer, a complexity common in off-policy RL. Self-Sufficient means no external training signals are required (e.g., WebRL trains an additional outcome reward model to label new data generated by GPT-4). As shown, our method is the only one that enables end-to-end RL with on-policy updates while avoiding additional complexities such as maintaining a replay buffer and being free from external supervision.
Method | Trial-and-Error | On-Policy | Replay Buffer Free | Self-Sufficient
Behavior Cloning (SFT) | ✘ | ✘ | ✓ | ✓
AWR (Peng et al., 2019) | ✘ | ✘ | ✘ | ✓
DigiRL (Bai et al., 2024) | ✓ | ✘ | ✘ | ✓
WebRL (Qi et al., 2025) | ✓ | ✘ | ✘ | ✘
WEBAGENT-R1 | ✓ | ✓ | ✓ | ✓

2.3 End-to-End Multi-Turn Reinforcement Learning

As illustrated in Figure 2, our end-to-end multi-turn RL framework trains web agents through online interactions guided by rule-based outcome rewards. To enable efficient and scalable training, we implemented two key mechanisms: dynamic context compression to reduce memory overhead, and asynchronous trajectory rollout to improve sampling efficiency. Based on the BC-trained policy, we further fine-tune the agent using an extension of GRPO (Shao et al., 2024) to the multi-turn setting, termed M-GRPO. Our implementation can be viewed as a minimalist approach that supports efficient multi-turn RL training while maintaining generality, with potential for future extensions (e.g., incorporating fine-grained reward shaping mechanisms for intermediate steps).

Dynamic Context Compression. In web tasks, each observation s_t often contains thousands of tokens. Across multi-turn interactions, the accumulated context grows rapidly, leading to excessive memory usage and potential out-of-memory issues, making training impractical. To address this, we propose a dynamic context compression strategy: as new observations arrive, earlier ones are simplified to reduce the context length while preserving the complete action history. Let the interaction history at step t be h_t = (s'_1, a_1, s'_2, a_2, ..., s_t), where each s'_i is a simplified template (e.g., "Simplified HTML") representing prior observations. When the agent executes an action a_t and receives a new observation s_{t+1}, the updated history becomes h_{t+1} = (s'_1, a_1, s'_2, a_2, ..., s'_t, a_t, s_{t+1}), where s_t is replaced by its simplified version s'_t. This allows the agent to maintain a compact yet informative context of past interactions. Since the context evolves dynamically, we also update the loss masks accordingly to ensure that the loss is correctly computed only on the action tokens during the M-GRPO optimization.

Multi-turn GRPO. Inspired by GRPO, we extend its standard form to multi-turn RL settings and introduce multi-turn group relative policy optimization (M-GRPO). Specifically, for each task q, we first sample a group of trajectories {τ_1, τ_2, ..., τ_G} and then optimize the policy model π_θ by minimizing the following loss:

$$\mathcal{L}_{\text{M-GRPO}}(\theta) = -\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|\tau_i|}\sum_{j=1}^{|\tau_i|}\frac{1}{|a_{i,j}|}\sum_{t=1}^{|a_{i,j}|}\left[\tilde{A}_{i,j,t} - \beta\, D_{\mathrm{KL}}(\theta)\right]$$

where $\tau_i = \{a_{i,1}, a_{i,2}, \ldots, a_{i,|\tau_i|}\}$ is the sequence of generated actions in the i-th trajectory, $\tilde{A}_{i,j,t} = \min\{r_{i,j,t}(\theta)\, A_{i,j},\ \mathrm{clip}(r_{i,j,t}(\theta), 1-\epsilon, 1+\epsilon)\, A_{i,j}\}$ is the advantage for the t-th token in action $a_{i,j}$ of trajectory $\tau_i$, $r_{i,j,t}(\theta) = \frac{\pi_\theta(a_{i,j,t} \mid q, a_{i,j,<t})}{\pi_{\mathrm{old}}(a_{i,j,t} \mid q, a_{i,j,<t})}$ denotes the importance sampling term, ε and β are hyperparameters, and $A_{i,j} = \frac{r_i - \mathrm{mean}(\mathbf{r})}{\mathrm{std}(\mathbf{r})}$ is the group relative advantage, computed using a group of rewards r = {r_1, r_2, ..., r_G} produced by rule-based reward functions.

Asynchronous Trajectory Rollout. Generating a group of trajectories requires repeated interaction with the environment and can be time-consuming. To address this, we introduce an asynchronous trajectory rollout strategy, where multiple independent browser instances {E_1, E_2, ..., E_G} are instantiated, each maintaining its own context (e.g., cookies).
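As a concrete illustration, the group-relative advantage and the clipped per-token term of the M-GRPO loss can be computed as below. This is a minimal sketch over scalar rewards under our own assumptions, not the training implementation, which operates on batched token log-probabilities; the KL penalty term is omitted for brevity.

```python
import math

def group_relative_advantages(rewards):
    """A_{i,j} = (r_i - mean(r)) / std(r): one scalar per trajectory,
    shared by every action (and token) in that trajectory."""
    G = len(rewards)
    mean = sum(rewards) / G
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / G)
    return [(r - mean) / (std + 1e-8) for r in rewards]  # small eps avoids div-by-zero

def clipped_token_term(ratio, advantage, eps=0.2):
    """min(r(theta) * A, clip(r(theta), 1 - eps, 1 + eps) * A),
    where ratio is the importance-sampling term r_{i,j,t}(theta)."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# Binary outcome rewards for a group of G = 4 trajectories:
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
term = clipped_token_term(ratio=1.5, advantage=adv[0])
```

Because the rewards are binary outcomes, the normalization simply pushes successful trajectories to a positive advantage and failed ones to a negative advantage, and the clip keeps any single token's importance ratio from dominating the update.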
For each task, all instances are initialized with the same starting page, but the agent interacts with them independently, resulting in diverse histories and trajectories. This asynchronous design enables efficient trajectory generation in M-GRPO.

Table 2: Task success rate (SR) comparison across different methods on various websites in WebArena-Lite (Liu et al., 2025; Qi et al., 2025; Zhou et al., 2024a). Baseline performance is reported as the higher value between our reproduced results and those reported in the literature (Qi et al., 2025). The best scores are highlighted in bold.

Method | Reddit | GitLab | CMS | Map | Shopping | Average SR
Prompting Method (General Model)
Qwen2.5-3B | 5.3 | 13.3 | 5.7 | 0 | 4.4 | 6.1
Llama3.1-8B | 5.3 | 10.0 | 5.7 | 15.4 | 8.9 | 8.5
Qwen2.5-32B | 10.5 | 20.0 | 20.0 | 19.2 | 17.8 | 16.9
GPT-4o | 10.5 | 10.0 | 20.0 | 20.0 | 11.1 | 13.9
GPT-4o-Turbo | 10.5 | 16.7 | 14.3 | 36.7 | 13.3 | 17.6
Prompting Method (Reasoning Model)
QwQ-32B | 15.8 | 33.3 | 25.7 | 15.4 | 20.0 | 22.4
OpenAI-o3 | 36.8 | 46.7 | 45.7 | 38.5 | 33.3 | 39.4
OpenAI-o4-mini | 47.4 | 43.3 | 45.7 | 26.9 | 28.9 | 36.9
Finetuning Method (Qwen2.5-3B)
Behavior Cloning | 42.1 | 16.7 | 22.9 | 26.9 | 11.1 | 20.0
WEBAGENT-R1 | 26.3 | 53.3 | 48.6 | 26.9 | 24.4 | 33.9
Finetuning Method (Llama3.1-8B)
Behavior Cloning | 36.8 | 6.7 | 20.0 | 33.3 | 17.8 | 20.6
Filtered BC (Pan et al., 2024) | 52.6 | 20.0 | 31.4 | 23.3 | 8.9 | 23.0
AWR (Peng et al., 2019) | 57.9 | 26.7 | 31.4 | 26.7 | 17.8 | 28.5
DigiRL (Bai et al., 2024) | 57.9 | 26.7 | 37.1 | 33.3 | 17.8 | 30.3
WebRL (Qi et al., 2025) | 63.2 | 46.7 | 54.3 | 36.7 | 31.1 | 42.4
WEBAGENT-R1 | 47.4 | 56.7 | 57.1 | 23.1 | 44.4 | 44.8

Reward Design. We use the default rule-based reward functions in the web environment, which assign binary rewards (r = 1 for success, r = 0 otherwise) based on task-specific criteria (e.g., reaching a target page). This eliminates the need for outcome reward models (Qi et al., 2025), ensuring a simple and generalizable training setup.

3 Experiments

3.1 Experimental Setup

Web Environment. Like prior works (Liu et al., 2025; Qi et al., 2025), we focus on web agents for real-world scenarios, specifically utilizing WebArena (Zhou et al., 2024a), a self-hostable and realistic web environment that supports practical tasks across diverse domains: social forums (Reddit), collaborative coding (GitLab), e-commerce content management systems (CMS), open street maps (Map), and online shopping (Shopping).

Dataset and Evaluation Metrics. Following Qi et al. (2025), we use the 9,460 public trajectories for behavior cloning and adopt WebArena-Lite, a human-verified version of WebArena, for more reliable evaluation. Specifically, we use 165 verified tasks for evaluation and the 647 remaining tasks for RL training. Task success rate is calculated using the built-in rule-based rubrics.

Baselines. For prompting baselines, we provide a comprehensive comparison with both open-source and proprietary models, including general-purpose models (e.g., Qwen2.5, Llama3.1, GPT-4) and reasoning-specialized models (e.g., QwQ, OpenAI o3 (OpenAI, 2025)), covering various model sizes. For finetuning methods, we employ Qwen2.5-3B and Llama3.1-8B as the backbone models.

More details on the environment and implementation are provided in Appendix A and B. We also provide the prompt templates and qualitative examples in Appendix D and E.

3.2 Main Results

Most LLMs still struggle with web tasks through prompting, highlighting the importance of fine-tuning for web agents.
As shown in Table 2, our experiments reveal the limitations of off-the-shelf models on web tasks. Despite their strong general capabilities, state-of-the-art models such as OpenAI's o3 achieve only a 39.4% success rate (SR). In contrast, a finetuned 3B model trained with simple behavior cloning achieves a success rate of 20%, outperforming proprietary models like GPT-4o. We speculate that the poor performance of off-the-shelf models is not due to base model size or capability, but rather to insufficient understanding of HTML structure and web-specific behaviors, as evidenced by the observation that both 3B and 8B models achieve comparable performance after behavior cloning. These findings emphasize the necessity of domain-specific training on web data to develop effective LLM-based web agents.

Figure 3: Training dynamics during RL for Qwen2.5-3B and Llama3.1-8B, including (a) reward, (b) trajectory length, and (c) number of interactions. As indicated by the dashed vertical lines in the figure, the entire process can be broadly divided into three phases: (1) initial skill acquisition, (2) exploration for policy refinement, and (3) final policy stabilization.

Reasoning models are better web agents. Compared to general-purpose LLMs, models equipped with explicit thinking capabilities perform significantly better on web tasks, likely due to their ability to decompose high-level goals and explicitly lay out dynamic changes in the web interface. This gap underscores the importance of thinking in web environments, which typically require multi-turn decision-making and dynamic contextual understanding. Motivated by this observation, we further explore the integration of thinking mechanisms into web agents through prompt design (§3.5) and training strategies (§3.4), which further confirms the advantage of thinking ability for web agents.

Reinforcement learning enables stronger performance for web agents. While behavior cloning via SFT can significantly improve an LLM's performance as a web agent (e.g., boosting Qwen2.5-3B from 6.1% to 20%), applying RL on top of the SFT-trained policy leads to additional substantial gains (e.g., further boosting Qwen2.5-3B from 20% to 33.9%). We attribute these improvements to RL's ability to optimize long-horizon decision-making and to explore novel strategies beyond those seen in the SFT data through trial-and-error across dynamic web interactions. While prior RL solutions for web agents, such as DigiRL and WebRL, have also shown performance gains, our method achieves even stronger results, highlighting the effectiveness of our end-to-end multi-turn RL framework.

3.3 Training Dynamics

To understand how the proposed end-to-end reinforcement learning optimizes the behavior of the web agents, we analyze the training dynamics across three metrics: reward, trajectory length (i.e., the number of tokens in model responses across all multi-turn interactions), and number of interactions. As shown in Figure 3, the learning process can be broadly divided into three distinct phases, separated by vertical dashed lines.

Reward.
Phase 1 shows a rapid increase in reward, indicating that the agent quickly learns basic skills and begins to succeed on simpler tasks. In Phase 2, the reward growth plateaus and slightly fluctuates, suggesting that the agent is exploring different strategies and refining its policy. In Phase 3, reward gradually improves again, indicating exploitation and increased stability.

Trajectory Length. Trajectory length increases sharply during Phase 1, then stabilizes in Phase 2. In Phase 3, a modest increase is observed again. This trend suggests that the agent initially learns to produce more detailed outputs, followed by a period of consolidation and later refinement to balance verbosity with task effectiveness.

Number of Interactions. The number of interaction rounds increases during Phase 1 as the agent becomes more proactive, followed by a reduction in Phase 2 as it learns to interact more efficiently. In Phase 3, the interaction count stabilizes, indicating convergence toward a more consistent and effective interaction strategy.

These trends highlight a three-phase learning dynamic commonly observed in RL: (1) initial skill acquisition, (2) exploration for policy refinement, and (3) final policy stabilization. Interestingly, both Qwen2.5-3B and Llama3.1-8B follow similar learning patterns, suggesting that our end-to-end multi-turn RL framework effectively scales across model sizes and enables stable policy improvement.

Figure 4: Ablation study on RL initialization policy by comparing WEBAGENT-R1 (R1) with two variants: WEBAGENT-R1-ZERO (R1-Zero), initialized from an off-the-shelf model without SFT, and WEBAGENT-R1-COT (R1-CoT), initialized from an SFT model trained with long chain-of-thought (CoT) data during behavior cloning. The comparison includes (a) task success rate, (b) single-turn response length, and (c) number of interactions, evaluated both before and after applying RL.

3.4 Ablation Study

To validate key design choices in our framework, we conduct a set of ablation studies using Qwen2.5-3B as the backbone model. Specifically, we introduce two variants, WEBAGENT-R1-ZERO and WEBAGENT-R1-COT, to study the impact of behavior cloning and long CoT for web agents. The results are presented in Figure 4.

Behavior cloning is crucial for training web agents with RL. WEBAGENT-R1-ZERO skips the behavior cloning stage and starts RL directly from an off-the-shelf model, with an initial success rate of only 6.1%. Surprisingly, the model's performance even deteriorates slightly after RL. We hypothesize that this is due to the model's lack of knowledge about web tasks: it tends to produce incomplete or ill-formed actions (e.g., missing required arguments) and rarely obtains positive rewards during RL. This severely hampers effective exploration and learning, highlighting that behavior cloning is essential for initializing web agents and enabling successful subsequent RL.

Incorporating long-CoT data into behavior cloning leads to more performant web agents. We first augment the behavior cloning (BC) data by generating long-CoT traces using a strong reasoning model (see Appendix C for details), and then apply SFT to obtain a long-CoT SFT model (i.e., the WEBAGENT-R1-COT variant before RL).
Compared to the SFT model trained on standard BC data, the long-CoT SFT model achieves a much higher task success rate (24.5% vs. 20%), demonstrating the effectiveness of long-CoT reasoning for web agents.

Limited gains from RL for the long-CoT SFT model. While RL shows promising improvements for both the vanilla SFT and long-CoT SFT models, it is interesting that the gain is notably smaller for the latter. Specifically, WEBAGENT-R1 improves from 20% to 33.9%, whereas WEBAGENT-R1-COT improves from 24.5% to only 30.3%. We hypothesize that this is because the deterministic reasoning patterns learned during long-CoT BC may constrain the model's exploration space during RL, limiting its ability to discover novel strategies compared to standard SFT models with more flexible exploratory behaviors.

Table 3: Analysis of prompting design. We report the average success rate (SR), single-turn response length, and number of interactions. The results reveal a novel test-time scaling paradigm: increasing the number of interactions for multi-turn interactive web tasks.

Method | SR | Length | # of Interactions
W/o thinking format
Qwen2.5-3B | 3.2 | 139 | 6
Llama3.1-8B | 4.8 | 43 | 7
o4-mini | 15.9 | 56 | 5
With thinking format
Qwen2.5-3B | 6.1 | 142 | 17
Llama3.1-8B | 8.5 | 39 | 11
o4-mini | 36.9 | 57 | 10

3.5 Analysis

Prompting with thinking format unleashes the potential of
LLMs as web agents. As shown in Table 3, using the thinking format significantly improves task success rates across models, particularly for stronger ones (e.g., o4-mini improves from 15.9% to 36.9%). Interestingly, while the average single-turn response length remains similar (e.g., 139→142 tokens for Qwen2.5-3B), the number of interactions increases substantially (e.g., 6→17) with the thinking format. This observation suggests a novel test-time scaling strategy for web tasks: rather than producing longer single-turn responses, the web agent can become more effective by engaging in deeper multi-turn interactions.

Figure 5: Analysis of test-time scaling with an increased maximum number of interactions. Allowing more interactions enables the web agent to produce longer trajectories and consistently improves the success rate.

Test-time scaling through increased interactions leads to better performance on web tasks. Building on the above finding, we further investigate how increasing the number of interactions between the web agent and the environment affects performance. As shown in Figure 5, allowing more interaction turns consistently improves success rates across prompting-based, SFT, and RL-based methods. We hypothesize that this form of test-time scaling facilitates deeper exploration and yields longer trajectories, potentially enabling the agent to iteratively refine its actions and make more informed decisions through extended interactions.

4 Related Works

4.1 LLM-based Agents

LLMs have demonstrated promising agentic capabilities, such as breaking down complex tasks into manageable subgoals and reasoning over long horizons (Zhou et al., 2022; Huang et al., 2022; Madaan et al., 2022; Li et al., 2023a,b; Wu et al., 2024; Chu et al., 2025).
Building on these capabilities, LLM-based agents have been applied to a variety of real-world interactive tasks, including web navigation (Nakano et al., 2021; Yao et al., 2022; Ma et al., 2023; Gur et al., 2024; Abuelsaad et al., 2024; Lutz et al., 2024; Patel et al., 2024; Putta et al., 2024), general computer use (Li et al., 2020; Deng et al., 2023; Yang et al., 2024), and embodied environments (Puig et al., 2018; Shridhar et al., 2020; Toyama et al., 2021; Fan et al., 2022; Huang et al., 2022). Specifically, our work focuses on text-based web agents that operate in browser-based environments purely based on HTML content, which requires agentic capabilities such as tool use, memory, and decision-making under partial observability (Zhou et al., 2024a; Qi et al., 2025). Complementary to this line of work, GUI agents leverage additional multimodal inputs such as screenshots, enabling visual-guided interactions with the environment (Lee et al., 2023; Shaw et al., 2023; Zheng et al., 2024; He et al., 2024a,b; Koh et al., 2024; Kil et al., 2024; Lei et al., 2025; Liu et al., 2025). For a comprehensive overview, we refer readers to recent surveys (Wang et al., 2024a; Hu et al., 2025; Ning et al., 2025).

4.2 Reinforcement Learning for LLMs

Recent advances like DeepSeek-R1 (Guo et al., 2025) highlight the strong potential of RL in enhancing LLMs. However, most prior work focuses
on single-turn tasks such as math problems (Shao et al., 2024), with limited exploration in multi-turn settings (Zhou et al., 2024b, 2025). Recent efforts have made some progress in this direction, such as training LLM agents to repeatedly use search engines (Jin et al., 2025; Sun et al., 2025; Chen et al., 2025; Song et al., 2025), but typically constrain actions to simple API calls without real environment interaction. A few concurrent works, such as RAGEN (Wang et al., 2025) and SkyRL (Cao et al., 2025), have applied RL to more dynamic settings like simulated games and coding environments (Jimenez et al., 2024). However, real-world web environments remain largely underexplored. Our work fills this gap by providing a practical framework and offering actionable insights for training web agents with end-to-end RL.

5 Conclusion

This work introduces WEBAGENT-R1, an end-to-end multi-turn RL framework for training web agents. We extend the standard GRPO to multi-turn settings, termed M-GRPO, and implement dynamic context compression and asynchronous trajectory rollout mechanisms for efficient training. Empirically, WEBAGENT-R1 achieves new state-of-the-art results on the WebArena-Lite benchmark. Our findings underscore the critical role of behavior cloning in initializing web agents, providing a strong foundation for effective RL. We further analyze training dynamics and explore the effects of thinking-based prompting and test-time scaling strategies, showing that increasing interaction depth consistently enhances web agents. Future work includes exploring multi-modal inputs and extending our approach to broader GUI-based tasks beyond web environments, such as computer use.

References

Tamer Abuelsaad, Deepak Akkil, Prasenjit Dey, Ashish Jagmohan, Aditya Vempaty, and Ravi Kokku. 2024. Agent-E: From autonomous web navigation to foundational design principles in agentic systems. arXiv preprint arXiv:2407.13032.
Hao Bai, Yifei Zhou, Jiayi Pan, Mert Cemri, Alane Suhr, Sergey Levine, and Aviral Kumar. 2024. DigiRL: Training in-the-wild device-control agents with autonomous reinforcement learning. Advances in Neural Information Processing Systems, 37:12461–12495.

Shiyi Cao, Sumanth Hegde, Dacheng Li, Tyler Griggs, Shu Liu, Eric Tang, Jiayi Pan, Xingyao Wang, Akshay Malik, Graham Neubig, Kourosh Hakhamaneshi, Richard Liaw, Philipp Moritz, Matei Zaharia, Joseph E. Gonzalez, and Ion Stoica. 2025. SkyRL-v0: Train real-world long-horizon agents via reinforcement learning.

Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim, Sunghwan Kim, Dongha Lee, and Jinyoung Yeo. 2025. Web agents with world models: Learning and leveraging environment dynamics in web navigation. In The Thirteenth International Conference on Learning Representations.

Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z. Pan, Wen Zhang, Huajun Chen, Fan Yang, Zenan Zhou, and Weipeng Chen. 2025. ReSearch: Learning to reason with search for LLMs via reinforcement learning.

Zhendong Chu, Shen Wang, Jian Xie, Tinghui Zhu, Yibo Yan, Jinheng Ye, Aoxiao Zhong, Xuming Hu, Jing Liang, Philip S. Yu, and 1 others. 2025. LLM agents for education: Advances and applications. arXiv preprint arXiv:2503.11733.

Tri Dao. 2024. FlashAttention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations.

Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Sam Stevens, Boshi Wang, Huan Sun, and Yu Su. 2023. Mind2Web: Towards a generalist agent for the web. Advances in Neural Information Processing Systems, 36:28091–28114.

Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. 2022. MineDojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35:18343–18362.

Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, and Honglak Lee. 2024. AutoGuide: Automated generation and selection of context-aware guidelines for large language model agents. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Izzeddin Gur, Hiroki Furuta, Austin V. Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. 2024. A real-world WebAgent with planning, long context understanding, and program synthesis. In The Twelfth International Conference on Learning Representations.

Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan, and Dong Yu. 2024a. WebVoyager: Building an end-to-end web agent with large multimodal models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6864–6890.

Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Hongming Zhang, Tianqing Fang, Zhenzhong Lan, and Dong Yu. 2024b. OpenWebVoyager: Building multimodal web agents via iterative real-world exploration, feedback and optimization. arXiv preprint arXiv:2410.19609.
Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang, Yuxiao Dong, Ming Ding, and 1 others. 2024. CogAgent: A visual language model for GUI agents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14281–14290.

Xueyu Hu, Tao Xiong, Biao Yi, Zishu Wei, Ruixuan Xiao, Yurun Chen, Jiasheng Ye, Meiling Tao, Xiangxin Zhou, Ziyu Zhao, and 1 others. 2025. OS agents: A survey on MLLM-based agents for computer, phone and browser use. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics.

Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR.

Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R. Narasimhan. 2024. SWE-bench: Can language models resolve real-world GitHub issues? In The Twelfth International Conference on Learning Representations.

Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516.

Jihyung Kil, Chan Hee Song, Boyuan Zheng, Xiang Deng, Yu Su, and Wei-Lun Chao. 2024. Dual-view visual contextualization for web navigation. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14445–14454.

Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Russ Salakhutdinov, and Daniel Fried. 2024. VisualWebArena: Evaluating multimodal agents on realistic visual web tasks. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 881–905.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.

Hanyu Lai, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen Zhang, Xiaohan Zhang, Yuxiao Dong, and 1 others. 2024. AutoWebGLM: A large language model-based web navigating agent. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5295–5306.

Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. 2023. Pix2Struct: Screenshot parsing as pretraining for visual language understanding. In International Conference on Machine Learning, pages 18893–18912. PMLR.

Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, and Yang Liu. 2025. Scaffolding coordinates to promote vision-language coordination in large multimodal models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 2886–2903.

Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. CAMEL: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008.

Xinze Li, Yixin Cao, Muhao Chen, and Aixin Sun. 2023b.
Take a break in the middle: Investigating subgoals towards hierarchical script generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10129–10147.

Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. 2020. Mapping natural language instructions to mobile UI action sequences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8198–8210.

Xiao Liu, Tianjie Zhang, Yu Gu, Iat Long Iong, Song XiXuan, Yifan Xu, Shudan Zhang, Hanyu Lai, Jiadai Sun, Xinyue Yang, Yu Yang, Zehan Qi, Shuntian Yao, Xueqiao Sun, Siyi Cheng, Qinkai Zheng, Hao Yu, Hanchen Zhang, Wenyi Hong, and 9 others. 2025. VisualAgentBench: Towards large multimodal models as visual foundation agents. In The Thirteenth International Conference on Learning Representations.

Michael Lutz, Arth Bohra, Manvel Saroyan, Artem Harutyunyan, and Giovanni Campagna. 2024. WILBUR: Adaptive in-context learning for robust and accurate web agents. arXiv preprint arXiv:2404.05902.

Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiaoman Pan, Wenhao Yu, and Dong Yu. 2023. LASER: LLM agent with state-space exploration for web navigation. arXiv preprint arXiv:2309.08172.

Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1384–1403.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff
Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, and 1 others. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.

Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, Wenqi Fan, Xiao-yong Wei, Shanru Lin, Hui Liu, Philip S. Yu, and 1 others. 2025. A survey of WebAgents: Towards next-generation AI agents for web automation with large foundation models. arXiv preprint arXiv:2503.23350.

OpenAI. 2025. Introducing OpenAI o3 and o4-mini.

Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, and Alane Suhr. 2024. Autonomous evaluation and refinement of digital agents. arXiv preprint arXiv:2404.06474.

Ajay Patel, Markus Hofmarcher, Claudiu Leoveanu-Condrei, Marius-Constantin Dinu, Chris Callison-Burch, and Sepp Hochreiter. 2024. Large language models can self-improve at web agent tasks. arXiv preprint arXiv:2405.20309.

Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. 2019. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177.

Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. 2018. VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8494–8502.

Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, and Rafael Rafailov. 2024. Agent Q: Advanced reasoning and learning for autonomous AI agents. arXiv preprint arXiv:2408.07199.

Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Jiadai Sun, Xinyue Yang, Yu Yang, Shuntian Yao, Wei Xu, Jie Tang, and Yuxiao Dong. 2025. WebRL: Training LLM web agents via self-evolving online curriculum reinforcement learning. In The Thirteenth International Conference on Learning Representations.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, and 1 others. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.

Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, and Kristina N Toutanova. 2023. From pixels to UI actions: Learning to follow instructions via graphical user interfaces. Advances in Neural Information Processing Systems, 36:34354–34370.

Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10740–10749.

Paloma Sodhi, S.R.K Branavan, Yoav Artzi, and Ryan McDonald. 2024. SteP: Stacked LLM policies for web actions. In First Conference on Language
Modeling.

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. 2025. R1-Searcher: Incentivizing the search capability in LLMs via reinforcement learning. arXiv preprint arXiv:2503.05592.

Hao Sun, Zile Qiao, Jiayan Guo, Xuanbo Fan, Yingyan Hou, Yong Jiang, Pengjun Xie, Fei Huang, and Yan Zhang. 2025. ZeroSearch: Incentivize the search capability of LLMs without searching. arXiv preprint arXiv:2505.04588.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, and 1 others. 2025. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599.

Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. 2021. AndroidEnv: A reinforcement learning platform for Android. arXiv preprint arXiv:2105.13231.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, and 1 others. 2024a. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.

Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, and 1 others. 2025. RAGEN: Understanding self-evolution in LLM agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073.

Zora Zhiruo Wang, Jiayuan Mao, Daniel Fried, and Graham Neubig. 2024b. Agent workflow memory. arXiv preprint arXiv:2409.07429.

Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. 2024. AutoGen: Enabling next-gen LLM applications via multi-agent conversations. In First Conference on Language Modeling.
An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, and 41 others. 2025a. Qwen3 technical report.

John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, and Ofir Press. 2024. SWE-agent: Agent-computer interfaces enable automated software engineering. Advances in Neural Information Processing Systems, 37:50528–50652.

Ke Yang, Yao Liu, Sapana Chaudhary, Rasool Fakoor, Pratik Chaudhari, George Karypis, and Huzefa Rangwala. 2025b. AgentOccam: A simple yet strong baseline for LLM-based web agents. In The Thirteenth International Conference on Learning Representations.

Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022. WebShop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757.

Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, and Bill Yuchen Lin. 2024. Agent Lumos: Unified and modular training for open-source language agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12380–12403.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. 2025. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892.

Yao
Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. 2025. WebPilot: A versatile and autonomous multi-agent system for web task execution with strategic exploration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 23378–23386.

Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. 2024. GPT-4V(ision) is a generalist web agent, if grounded. In Forty-first International Conference on Machine Learning.

Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. 2024a. WebArena: A realistic web environment for building autonomous agents. In The Twelfth International Conference on Learning Representations.

Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. 2022. Show me more details: Discovering hierarchies of procedures from semi-structured web data. arXiv preprint arXiv:2203.07264.

Yifei Zhou, Song Jiang, Yuandong Tian, Jason Weston, Sergey Levine, Sainbayar Sukhbaatar, and Xian Li. 2025. SWEET-RL: Training multi-turn LLM agents on collaborative reasoning tasks. arXiv preprint arXiv:2503.15478.

Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, and Aviral Kumar. 2024b. ArCHer: Training language model agents via hierarchical multi-turn RL. In International Conference on Machine Learning, pages 62178–62209. PMLR.

A Web Environment

WebArena-Lite. WebArena (Zhou et al., 2024a) is a realistic, self-hostable web environment for developing LLM-based agents. It comprises 812 real-world web tasks spanning diverse domains, including a social forum (Reddit), collaborative coding (GitLab), an e-commerce content management system (CMS), OpenStreetMap (Map), and online shopping (OneStopShop). WebArena-Lite (Liu et al., 2025) is a curated version of WebArena designed for more reliable evaluation.
It selects 165 representative tasks for human verification as the evaluation set and uses the remaining 647 tasks for training. It also provides 9,460 trajectories automatically annotated by program-based solvers for behavior cloning. For each website, the authors (Liu et al., 2025) summarize the core functionalities and valid items, construct a set of task prototypes, and manually implement rule-based solvers using Playwright scripts for each prototype. The corresponding solvers are executed on the websites to collect ground-truth trajectories. In total, this produces 1,186 valid training samples comprising 9,460 trajectories, released under the Apache License 2.0.

Action Space. Agents interact with the environment through a set of predefined actions, including:

• Click: simulates a left mouse click on a webpage element.
• Right Click: performs a right-click on a specified element.
• Type: inputs a text string into an input field.
• Search: enters a search query and triggers a search operation.
• Hover: moves the cursor over a specific element to reveal tooltips or hidden menus.
• Scroll Up / Scroll Down: scrolls the page vertically.
• Press Enter: simulates pressing the Enter key, typically after typing.
• Switch Tab: changes the current browser tab.
• Select Dropdown Option: selects an option from a dropdown menu.
• Wait: pauses the agent's interaction for a brief period.
• Exit: terminates the current session with
a final message.
• Go Backward / Go Forward: navigates backward or forward in the browser history.

Rule-based Metrics. In real-world web tasks, there are typically no closed-form solutions, and multiple trajectories may lead to successful task completion. Therefore, we evaluate agents solely on whether the final goal is achieved and report the Success Rate (SR), which indicates whether a task is successfully completed according to the following rule-based evaluation metrics:

• String Match: The agent must provide an answer string that matches the expected output.
• URL Match: The agent is required to navigate to a specific webpage. Success is determined by comparing the final URL to a reference URL.
• Program Execution: The agent must modify webpage content or configuration. Evaluation is performed by executing a rule-based script to extract and verify the final state of the page.

Each task in WebArena is associated with one of these evaluation metrics, along with the corresponding reference answer, target URL, or validation script when applicable. This diverse rule-based metric design ensures consistent evaluation across a wide range of web tasks while accommodating different task objectives and output formats.

B Implementation Details

We implement our method using Qwen2.5-3B and Llama3.1-8B as the backbone models. By default, we use the instruction-tuned version for both prompting and fine-tuning methods. The reinforcement learning (RL) initialization policy is derived from the supervised fine-tuning (SFT) checkpoint obtained via behavior cloning. Since WebRL leverages additional GPT-4-generated data to train Llama3.1-8B, we ensure a fair comparison by initializing our RL policy with their publicly released checkpoint and applying our end-to-end RL using only the original 647 training tasks, without introducing any extra data.
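The rule-based success checks described in Appendix A can be sketched in Python. This is a hypothetical illustration under our own naming; WebArena's actual evaluators are more involved, and these functions are not the benchmark's code:

```python
from typing import Callable

# Hypothetical sketch of WebArena-style rule-based success checks.
# Function names and signatures are illustrative, not the benchmark's code.

def string_match(answer: str, reference: str) -> bool:
    """String Match: the agent's final answer must match the expected output."""
    return answer.strip().lower() == reference.strip().lower()

def url_match(final_url: str, reference_url: str) -> bool:
    """URL Match: compare the final URL against a reference URL."""
    return final_url.rstrip("/") == reference_url.rstrip("/")

def program_execution(page_state: dict, validator: Callable[[dict], bool]) -> bool:
    """Program Execution: run a rule-based check over the final page state."""
    return validator(page_state)

def success_rate(outcomes: list) -> float:
    """SR: fraction of tasks whose rule-based check passed."""
    return sum(bool(o) for o in outcomes) / len(outcomes) if outcomes else 0.0
```

Each task would pair one of these checks with its reference answer, target URL, or validation script.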
Our models are trained on a single node of 8 NVIDIA A100 GPUs with 80GB memory via full-parameter fine-tuning. To optimize GPU utilization, we adopt DeepSpeed (Rajbhandari et al., 2020) for distributed training with ZeRO-3 offload, along with gradient checkpointing, FlashAttention-2 (Dao, 2024), and bf16 mixed-precision training enabled for computational efficiency. For SFT, we use a learning rate of 5e-5 and a batch size of 128, with a cosine LR scheduler and 5% warmup steps.

Table 4: Comparison of model outputs from WebAgent-R1 and WebAgent-R1-CoT. We present successful trajectories from both models on the same task ("What are the top-3 best-selling products in Jan 2023?"), showing only the first two steps for clarity (a full trajectory is shown in Figure 6 for additional context). Compared to WebAgent-R1, the long-CoT variant WebAgent-R1-CoT exhibits a more detailed thinking process.

WebAgent-R1 (Step 1): <think> # Element: the 'REPORTS' section on the left sidebar. </think> <answer> do(action="Click", element="7") </answer>

WebAgent-R1 (Step 2): <think> # Element: the 'Bestsellers' report option under the 'Products' section in the 'REPORTS' </think> <answer> do(action="Click", element="27") </answer>

WebAgent-R1-CoT (Step 1): <think> I need to find the top-3 best-selling products for January 2023. Let me look at the current dashboard. The left sidebar has a "Reports" section (element 7)
which is critical for accessing sales data. The dashboard's main content shows a table with a product ("Sprite Yoga Strap 6 foot") and quantity (6), but this appears to be static local sales data rather than a report. Since the user needs a specific monthly ranking, the Reports section is the logical path to business intelligence tools that can aggregate and sort data by time period. Clicking "Reports" will likely reveal options for sales analytics, performance tracking, or product rankings required to identify top-selling items in January 2023. </think> <answer> do(action="Click", element="7") </answer>

WebAgent-R1-CoT (Step 2): <think> Alright, let's break down my thought process here. The user wants the top-3 best-selling products for January 2023. I previously clicked the "Reports" menu (element 7) to access analytical sections. Now, analyzing the current HTML structure: 1. **Locating the Reports Hierarchy**: The Reports section's dropdown (div data-bbox="88,92,1072,720") contains nested menus. Under the "Products" subsection (span data-text="Products"), I see options like "Views", **"Bestsellers"**, "Low Stock", etc. 2. **Identifying the Target Element**: The element with id="27" is explicitly labeled 'Bestsellers', which directly matches the user's request. This is the most logical path to access sales performance data. 3. **Ruling Out Distractors**: Other options like "Orders" or "Tax" under the Sales section don't specifically target product performance. The "Bestsellers" report is tailored to rank products by sales volume, making it the correct choice. 4. **Next Step Justification**: Clicking "Bestsellers" (element 27) will likely load a report listing products by sales quantity or revenue. From there, I can extract the top-3 entries for January 2023. **Decision**: Proceed to click element 27 to access the Bestsellers report, which is the most direct route to fulfill the user's query.
</think> <answer> do(action="Click", element="27") </answer>

For RL training, we use a constant learning rate of 1e-6 with a batch size of 16. The KL divergence regularization coefficient β and the clip ratio ϵ are set to 0.001 and 0.2, respectively. The maximum context length and number of new tokens are set to 16,384 and 1,024. For efficient LLM rollouts during M-GRPO, we use vLLM (Kwon et al., 2023) with a tensor parallel size of 1 and a GPU memory utilization ratio of 0.7. Rollout sampling is performed with both the temperature and top-p set to 1.0.

C Data Augmentation for Behavior Cloning with Long-CoT Traces

As introduced in the ablation study (§ 3.4), we augment the original behavior cloning data by generating long-CoT traces using a strong reasoning model, QwQ-32B. We then apply SFT to obtain a long-CoT SFT model, followed by RL training to obtain WebAgent-R1-CoT. As shown in Table 4, WebAgent-R1-CoT generates more detailed thinking compared to WebAgent-R1.

D Prompt Templates

The prompt used for data augmentation is shown in Table 5. We define the action space in the system prompt, which is presented in Table 6. By default, we use the version with the thinking format.

E Qualitative Examples

In Figures 6–10, we present a real-world successful trajectory generated by WebAgent-R1 for each of the five websites in WebArena.
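For context on the M-GRPO rollouts above: GRPO-style methods compute a group-relative advantage by normalizing each sampled trajectory's reward against its rollout group, avoiding a learned value function. A minimal sketch of that normalization (generic GRPO arithmetic, not the authors' exact M-GRPO implementation):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: A_i = (r_i - mean(r)) / (std(r) + eps),
    computed over one group of rollouts for the same prompt.
    Generic sketch, not the paper's exact M-GRPO code."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Trajectories that beat their group's average reward receive positive advantages and are reinforced under the clipped PPO-style objective.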
Table 5: Prompt used for long-CoT data augmentation, with example fill-ins for the fields user intent, action history, current observation, next action, and remarks, all of which are available in the original behavior cloning data. The full HTML content in current observation is omitted for clarity.

Long-CoT Data Augmentation Prompt

You are an expert in explaining the behaviors of web agents. The agent is designed to help a human user navigate the website to complete a task. Given the user's intent, the agent's action history, the current HTML content of the web page, and the agent's next action associated with optional remarks, your goal is to explain the decision-making process from the agent's perspective using first-person narrative (as if the decision is being made in real time).

User Intent: "Establish a new discussion platform called 'VirtualRealityVanguard'. It should serve as a cutting-edge forum where VR aficionados can engage in conversations about the newest trends, games, and applications in the virtual reality realm. Please ensure the sidebar features the following tags: virtual reality, technology, trends, gaming."

Action History:
[
  {
    "round": "0",
    "observation": "** Simplified html **",
    "remarks": "# Element: the 'Forums' link at the top center",
    "action": "do(action=\"Click\", element=\"1\")"
  },
  {
    "round": "1",
    "observation": "** Simplified html **",
    "remarks": "# Element: the 'Create forum' button next to the 'List of forums' title",
    "action": "do(action=\"Click\", element=\"17\")"
  },
  {
    "round": "2",
    "observation": "** Simplified html **",
    "remarks": "# Element: the 'Name' field at the top of the page",
    "action": "do(action=\"Type\", argument=\"VirtualRealityVanguard\", element=\"12\")"
  }
]

Current Observation: <html> ...
</html>

Next Action: do(action="Type", argument="VirtualRealityVanguard", element="14")

Remarks: # Element: the 'Title' input field in the middle of the page

Now, please explain the agent's thinking process using a first-person narrative.

Table 6: System prompt for web agents. By default, we use the version with the thinking format (highlighted in gray). For the variant without the thinking format (discussed in § 3.5), the gray part is simply removed.

System Prompt

You are a professional web browsing agent assistant that can fulfill user's high-level instructions. Given simplified html of the browsed webpage at each step, you plan operations in python-style pseudo code using provided functions. You should first think about the reasoning process as an internal monologue and then decide an action. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., responding in the following format: <think> ... </think> <answer> ... </answer>.

More details about the code action: Your action should be readable and simple. Please generate **ONLY ONE ACTION** in one round. Predefined functions are as follows:

def do(action, argument, element):
    """A single browsing operation on the webpage.

    Args:
    :param action: one of the actions from ["Click", "Right Click", "Type", "Search", "Hover", "Scroll Up", "Scroll Down", "Press Enter", "Switch Tab", "Select Dropdown Option", "Wait"].
    :param argument: optional. Only for "Type", "Search", "Switch Tab", and "Select Dropdown Option", indicating the content to type in, page number (start from 0) to switch, or key to press. "Search" action is equivalent to "Type" action plus "Enter".
    :param element:
optional. Only for "Click", "Right Click", "Type", "Search", "Select Dropdown Option", and "Hover". Should be specific element id in the HTML.

    Returns:
    None. The webpage will be updated after executing the action.
    """

def exit(message):
    """Ending the browsing process if the assistant thinks it has fulfilled the goal.

    Args:
    :param message: optional. If user's instruction is a question, return assistant's answer in the message based on the browsing content.

    Returns:
    None.
    """

def go_backward():
    """Go back to the previous page."""

def go_forward():
    """Go forward to the next page."""

Examples:
• <think> # Element: the 'REPORTS' section on the left sidebar </think> <answer> do(action="Click", element="7") </answer>
• <think> # Element: the 'Period' dropdown, middle center </think> <answer> do(action="Select Dropdown Option", argument="Month", element="20") </answer>
• <think> # Element: the 'From' date picker input field, middle center </think> <answer> do(action="Type", argument="01/01/2023", element="22") </answer>

REMEMBER:
• You can generate **ONLY ONE ACTION** in one round.
• If you have multiple potential actions to explore, you should generate other actions in separate rounds.
• Don't generate an operation element that you do not see in the screenshot.
• Use "# Element" to describe the element you choose in the HTML.
• Use "# Note" to record information useful to answer the instruction if needed.
• If you find yourself fallen into some sort of loop, try to use another method or change your action.
• If you think a page is still loading or still playing animation and you want to wait a while, use the "Wait" action.
• You are acting in a real world; try your best not to reject the user's demand. Solve every problem you encounter.
• If you think you didn't get the expected webpage, you should try using a more precise and locative description of the element.
• You should **NEVER** try to use the browser's address bar at the top of the page to navigate.
• Your answer shouldn't be in a code snippet format. Just write the function name and its arguments.
• If you use the do function to perform "Click", "Right Click", "Type", "Search", "Select Dropdown Option", or "Hover", the parameter element must not be None.

Step 1: do(action="Click", element="7")
Step 2: do(action="Click", element="27")
Step 3: do(action="Type", argument="01/01/2023", element="22")
Step 4: do(action="Type", argument="01/31/2023", element="25")
Step 5: do(action="Select Dropdown Option", argument="Month", element="20")
Step 6: do(action="Click", element="16")
Step 7: do(action="Scroll Down")
Step 8: exit(message="Top-3 best-selling products in January 2023 are Impulse Duffle, Overnight Duffle, and Hawkeye Yoga Short-32-Blue")

Figure 6: A real-world example of a successful trajectory generated by WebAgent-R1 on the CMS task: "What are the top-3 best-selling products in Jan 2023?".

Step 1: do(action="Click", element="15")
Step 2: do(action="Type", argument="Homewood Suites Southpointe", element="13")
Step 3: do(action="Type", argument="PPG Paints Arena", element="14")
Step 4: do(action="Click", element="16")
Step 5: exit(message="The estimated driving time from Homewood Suites Southpointe to PPG Paints Arena is 34 minutes.")

Figure 7: A real-world example of a successful trajectory generated by WebAgent-R1 on the Map task: "From my stay at Homewood Suites Southpointe, what's the estimated driving time to reach PPG Paints Arena?".

Step 1: do(action="Hover", element="14")
Step 2: do(action="Hover", element="16")
Step 3: do(action="Click", element="17")
Step 4: do(action="Select Dropdown Option", argument="Price", element="36")
Step 5: do(action="Click", element="37")
Step 6: exit(message="The living room furniture category is
now listed by descending price.")

Figure 8: A real-world example of a successful trajectory generated by WebAgent-R1 on the Shopping task: "List products from living room furniture category by descending price".

Step 1: do(action="Click", element="17")
Step 2: do(action="Click", element="21")
Step 3: do(action="Click", element="30")
Step 4: do(action="Scroll Down")
Step 5: do(action="Click", element="66")
Step 6: exit(message="TMN_bBn9Z48qVbUFZV45")

Figure 9: A real-world example of a successful trajectory generated by WebAgent-R1 on the GitLab task: "Get me my RSS feed token".

Step 1: do(action="Search", argument="Star Trek Starfleet Academy", element="4")
Step 2: do(action="Click", element="22")
Step 3: do(action="Type", argument="Every watch makes me feel like a kid again", element="16")
Step 4: do(action="Scroll Down")
Step 5: do(action="Click", element="8")
Step 6: exit(message="The post on Star Trek Starfleet Academy series has been edited successfully, adding the line 'Every watch makes me feel like a kid again' to the body of the post.")

Figure 10: A real-world example of a successful trajectory generated by WebAgent-R1 on the Reddit task: "Edit my post on Star Trek Starfleet Academy series by adding a line to the body that says 'Every watch makes me feel like a kid again'".
arXiv:2505.16425v1 [cs.CL] 22 May 2025

I2G: Generating Instructional Illustrations via Text-Conditioned Diffusion

Jing Bi, Pinxin Liu, Ali Vosoughi, Jiarui Wu, Jinxi He, Chenliang Xu
University of Rochester

Abstract

The effective communication of procedural knowledge remains a significant challenge in natural language processing (NLP), as purely textual instructions often fail to convey complex physical actions and spatial relationships. We address this limitation by proposing a language-driven framework that translates procedural text into coherent visual instructions. Our approach models the linguistic structure of instructional content by decomposing it into goal statements and sequential steps, then conditioning visual generation on these linguistic elements. We introduce three key innovations: (1) a constituency parser-based text encoding mechanism that preserves semantic completeness even with lengthy instructions, (2) a pairwise discourse coherence model that maintains consistency across instruction sequences, and (3) a novel evaluation protocol specifically designed for procedural language-to-image alignment. Our experiments across three instructional datasets (HTStep, CaptainCook4D, and WikiAll) demonstrate that our method significantly outperforms existing baselines in generating visuals that accurately reflect the linguistic content and sequential nature of instructions. This work contributes to the growing body of research on grounding procedural language in visual content, with applications spanning education, task guidance, and multimodal language understanding.

1 Introduction

Procedural language understanding represents a significant challenge in natural language processing. Unlike declarative text, instructional language contains implicit temporal dependencies, causal relationships, and action sequences that require specialized approaches to model effectively.
While large language models have advanced our ability to generate and comprehend procedural text, they still struggle with the inherent limitations of the text modality itself, which often inadequately conveys

Figure 1: The qualitative results of our method, compared with baseline models, are illustrated in the figure. The baseline model struggles to capture the progression in the text, whereas our method successfully captures this progression and achieves a more illustrative result. However, the StackDiffusion output is incomplete due to the model's limitation of not generating more than 6 steps.

spatial relationships, physical manipulations, and visual states critical to executing complex tasks. Users frequently supplement textual instructions with visual aids, demonstrating the natural complementary relationship between language and vision in procedural understanding. This paper addresses this gap by developing a framework that bridges procedural language with corresponding visual representations, advancing multimodal instruction understanding in NLP.

To address this need for visual instruction, platforms such as YouTube and TikTok have become go-to resources for learning new skills. Instructional videos on these platforms have emerged as a popular means for people to learn new skills and improve their abilities in executing complex procedural activities. Typically, an instructional video includes a brief title that states the ultimate goal to be achieved. Throughout the video, various steps are demonstrated progressively, each building on
by linking visual demonstrations with textual explana- tions. These videos are not only beneficial for hu- mans, but also hold promise for machine learning, as they provide clear visual demonstrations of intri- cate tasks and detailed human-object interactions across various domains. These challenges intersect with core NLP tasks such as procedural text un- derstanding and multimodal instruction following, which have gained interest in the ACL community. Drawing inspiration from these instructional videos, we formulate this process as a generative problem. The challenge lies in effectively modeling the distribution of the goals of a task, its procedural steps, and the associated visual information. Our approach addresses this by utilizing a generative model to encapsulate instructional visual informa- tion. We decompose the problem into manageable parts: modeling the textual components of goals and steps, and then conditioning the visual informa- tion on these textual descriptors. This is mathemat- ically represented through the probability function p(g, s, v ) =pϕ(V|g,S)pθ(g,S), (1) where p(·)denotes the model distribution, and g,S, andSrepresent the goal text, the sequence of step instructions (si∈ S), and the visual information (such as images or videos) (vi∈ V), respectively. To effectively learn pϕ(v|g, s), we employ a diffusion-based strategy, specifically training a de- noising diffusion model. For the pθ(g, s)distribu- tion, we first leverage several recent instructional video and image datasets containing trajectory data of task performance, ensuring our model learns a wide array of task-specific visuals that are both accurate and varied. To enhance the integration and coherence between the textual and visual com- ponents, we refine our approach by leveraging a large language model (LLM) extensively trained to represent the joint distribution of goals and pro- cedural steps pθ(g, s), generating text samples that are diverse and contextually relevant. 
We lever- age the pretrained LLM to draw (g, s)samples and use image-text alignment model acts as a reward model to provide feedback to further ensure that the generated visuals accurately reflect the textual instructions.Evaluating the proposed illustrative instruction generation framework presents unique challenges, especially given the limitations of current main- stream evaluation metrics for instructional content. Effective evaluation requires considering both the quality of the generated images and their alignment with the instructional text. Traditional metrics like Fréchet Inception Distance (FID) and Inception Score focus on photorealism but fail to capture the semantic alignment between text and images. CLIPScore is commonly used for evaluating text- image alignment, as it measures how well text de- scriptions match generated images. However, our experiments revealed that CLIPScore often falls short in capturing the nuances of longer, descrip- tive texts typical of procedural steps, leading to misleadingly low alignment scores. State-of-the- art multimodal models, which use techniques like language binding or image binding, also struggle to provide meaningful judgments on the congruence between extended texts and corresponding images. To address these evaluation challenges, we ini- tially conducted a series of human evaluations to assess the alignment between the generated text and images. Human judgment remains the gold standard for measuring the quality and relevance of instructional content, providing insights that auto- mated metrics currently cannot. | https://arxiv.org/abs/2505.16425v1 |
Based on these human evaluations, we then leveraged caption generation and text similarity models that demonstrated high alignment with human judgments. These models were adapted to refine our evaluation framework, ensuring that it not only measures photorealism but also accurately reflects the instructional integrity and coherence between text and visual representations.

Our main contributions are summarized as follows:

• We propose a mathematically sound framework for visual instruction generation based on goal and step dependencies. This framework effectively decomposes the problem into manageable components, enabling meaningful generation of instructional content.

• We design a feedback-based finetuning mechanism for a diffusion generative model. By incorporating image-text alignment feedback, we ensure that the visual outputs are relevant and accurate representations of the textual instructions.

• We address the challenges in evaluating visual instruction generation by developing a comprehensive evaluation protocol. To our knowledge, we are the first to conduct experiments targeting instructional image-text alignment within the LLM realm. This protocol includes human evaluations and advanced models for caption generation and text similarity, ensuring robust assessment of the quality and coherence of the generated content.

• We situate our work in the broader context of multimodal text generation, demonstrating how large language models (LLMs) and diffusion can jointly produce coherent instructional visuals.

We believe that the release of our code and backbone network weights will facilitate further research and benchmarking in this area, driving advancements in the integration of visual and textual instructional systems.
2 Related Work

Instructional Visual Understanding: The field of instructional visual understanding has rapidly advanced with the introduction of specialized datasets like Breakfast (Kuehne et al., 2014), 50Salads (Stein and Mckenna, 2013), COIN (Tang et al., 2019), and CrossTask (Zhukov et al., 2019), enabling detailed research into the structured sequences of actions in videos. The HT-Step dataset (Afouras et al., 2023), in particular, with its detailed annotations from real-world instructional articles focused on cooking, provides a comprehensive framework for training models on video-text synchronization. This enhances our ability to interpret procedural videos through computational tasks such as step classification (Lin et al., 2022), action segmentation (Fried et al., 2020), video retrieval (Ging et al., 2020), and temporal article grounding (Liu et al., 2022). These tasks are crucial for the precise identification and categorization of actions, even when steps are not visually explicit. Additionally, the Visual Goal-Step Inference (VGSI) task advances this domain by testing models' abilities to discern the intent behind actions and connect these with instructional goals (Yang et al., 2021b), enriching the interpretative capabilities of instructional content within image contexts. The introduction of the CaptainCook4D dataset further enriches this domain (Peddi et al., 2023), offering recordings of cooking procedures in real kitchen environments to benchmark understanding of procedural activities visually and temporally. The availability of extensive video-text datasets has driven innovations in joint video-language embedding (Bain et al., 2022; Miech et al., 2020; Radford et al., 2021; Yang et al., 2021a), particularly by
utilizing narrations to contextually ground procedural steps (Han et al., 2022), enhancing performance across tasks. New benchmarks such as InstructionBench (Wei et al., 2025) explicitly evaluate question answering over instructional videos, while Pivot pre-training exploits task hierarchies for more data-efficient representation learning (Samel et al., 2025). Cross-modal planning has been pushed forward by PlanLLM (Yang et al., 2025), and large-scale transfer of internet video knowledge to unseen tasks is demonstrated by Luo et al. (2025).

Aligning Diffusion Models: A variety of works attempt to improve diffusion models' alignment with human preferences. Many approaches focus on fine-tuning for better alignment. DDPO and DPOK (Black et al., 2024; Fan et al., 2023) use reinforcement learning for reward fine-tuning of diffusion models. With a slightly different approach, Lee et al. (2023) use binary human feedback and semi-supervised learning for reward fine-tuning. DRaFT, AlignProp, and ImageReward (Clark et al., 2023; Prabhudesai et al., 2023; Xu et al., 2023) directly use reward-function gradients to fine-tune diffusion models. Instead of using a reward function, Diffusion-KTO, Diffusion-DPO, and D3PO (Li et al., 2024; Wallace et al., 2023; Yang et al., 2023) introduce fine-tuning objectives that directly target human preferences and feedback. Emu (Dai et al., 2023) selects an extremely small number of high-quality samples during fine-tuning to improve the generated image quality of text-to-image models. Similarly, RAFT (Dong et al., 2023) samples and filters high-reward data based on a reward model and uses the filtered data to fine-tune diffusion models. PickScore and Human Preference Score v2 (Kirstain et al., 2023; Wu et al., 2023) work as scoring models to guide the fine-tuning of diffusion models, better aligning them with human preferences.
Some works propose techniques or models which lead to better human preference alignment. SDXL (Podell et al., 2023) improves diffusion models by adding a larger UNet backbone, an additional conditioning technique, and a noising-denoising process on top of the existing latent diffusion model. Several methods (Gal et al., 2022; Hao et al., 2023) optimize conditional input prompts in text-to-image models. Other methods (Betker et al.; Segalis et al., 2023) use captions with better fidelity and semantics to recaption existing datasets for improved text-to-image generation quality. Calibrated Preference Optimization (CaPO) introduces a multi-reward alignment strategy for text-to-image diffusion (Lee et al., 2025), while VideoDPO brings omni-preference alignment to text-to-video generation (Liu et al., 2025). A comprehensive 2025 survey by Wu et al. (2025) synthesizes these emerging directions.

Language Models as Evaluators: Recognizing the burgeoning capabilities of LLMs for open-ended tasks, evaluation methodologies have shifted towards direct appraisal of generated text quality, often employing LLMs themselves as evaluators (Fu et al., 2023; Wang et al., 2023a,b). The automation of evaluation has further extended to the multi-modality domain (Chen et al., 2024a). X-IQE (Chen et al., 2023) leverages a hierarchical chain of thoughts (CoT),
from image description to task-specific analysis, to reason about Text2Image generation quality. Concept coverage (Chen et al., 2024b), calculated from Visual Question Answering semantic entropy, is further proposed for qualifying the aestheticism and defects of generations. CIGEval frames GPT-4-based vision-language agents as fine-grained judges for conditional image generation (Wang et al., 2025), and GPT-ImgEval provides the first dedicated benchmark for diagnosing GPT-4V's image generation and editing abilities (Yan et al., 2025).

3 Preliminary

3.1 Generative Diffusion Models

Diffusion models define a reverse (generative) process

p_ϕ(z_{0:T}) := p(z_T) ∏_{t=1}^{T} p_ϕ(z_{t−1} | z_t),   (2)

where p_ϕ is a model distribution parameterized by ϕ and z_1, ..., z_T are latent variables of the same dimensionality as z_0. Conversely, the forward process models q(z_{1:T} | z_0) by gradually adding Gaussian noise to the data sample z_0. In this process, the intermediate noisy sample z_t can be sampled as

z_t = √α_t · z_0 + √(1 − α_t) · ϵ,   (3)

in the variance-preserving diffusion formulation (Ho et al., 2020). Here, ϵ ∼ N(0, I) is a noise variable and α_{1:T} ∈ (0, 1]^T is a sequence that controls the amount of noise added at each diffusion time t. Given the noisy sample z_t and t, the diffusion model f_ϕ learns to approximate the reverse process for data generation. The diffusion model parameters ϕ are typically optimized to minimize E_{z_t,ϵ} ∥ϵ − f_ϕ(z_t, t)∥² or E_{z_t,ϵ} ∥z_0 − f_ϕ(z_t, t)∥². Note that exact formulations vary across the literature, and we kindly refer the reader to the survey papers (Chang et al., 2023; Yang et al., 2024) for a more comprehensive review of diffusion models.

3.2 Classifier-Free Guidance (CFG)

CFG is a method proposed to achieve a better trade-off between fidelity and diversity for conditional sampling using diffusion models.
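The forward noising of Eq. (3) and the epsilon-prediction objective from Sec. 3.1 can be sketched in a few lines of NumPy. The linear schedule and the placeholder predictor f_phi below are illustrative assumptions, not the network or schedule used in this paper:

```python
import numpy as np

# Minimal sketch of the variance-preserving forward process (Eq. 3)
# and the epsilon-prediction training objective of Sec. 3.1.
# The linear alpha schedule and the placeholder predictor f_phi are
# illustrative stand-ins, not the paper's actual components.

rng = np.random.default_rng(0)

T = 1000
# alpha_{1:T} in (0, 1], decreasing: more noise at later timesteps.
alpha = np.linspace(0.9999, 1e-4, T)

def q_sample(z0, t, eps):
    """Eq. 3: z_t = sqrt(alpha_t) * z_0 + sqrt(1 - alpha_t) * eps."""
    return np.sqrt(alpha[t]) * z0 + np.sqrt(1.0 - alpha[t]) * eps

z0 = rng.standard_normal(16)        # toy clean latent
eps = rng.standard_normal(16)       # eps ~ N(0, I)
zt = q_sample(z0, t=500, eps=eps)

def f_phi(zt, t):
    """Placeholder for the diffusion network's noise prediction."""
    return zt

# Training objective: E || eps - f_phi(z_t, t) ||^2.
loss = float(np.mean((eps - f_phi(zt, 500)) ** 2))
```

Note that the coefficients √α_t and √(1 − α_t) satisfy α_t + (1 − α_t) = 1, which is what makes the formulation variance-preserving for unit-variance data.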
Instead of generating a sample using conditional score estimates only, it proposes to mix the conditional and unconditional score estimates to control a trade-off between sample fidelity and diversity:

f̃_ϕ(z_t, t, c) = (1 + w) f_ϕ(z_t, t, c) − w f_ϕ(z_t, t, ∅),   (4)

where c is the conditioning information and w is a hyperparameter that controls the strength of the guidance. However, Equation 4 requires training both conditional and unconditional diffusion models. To address this, Ho et al. (2020) introduce conditioning dropout during training, which enables the parameterization of both conditional and unconditional models using a single diffusion network. Conditioning dropout simply sets c to a null token ∅ with a chosen probability p_uncond to jointly learn the conditional and unconditional scores during network training. Due to its ability to achieve a better balance between fidelity and diversity, CFG is used in many state-of-the-art conditional diffusion models.

4 Method

Our framework addresses the fundamental NLP challenge of translating procedural language into coherent visual sequences. This task requires deep understanding of linguistic structures that encode temporal relationships, action sequences, and state changes, elements that are central to procedural text understanding. We approach this as a language-conditioned generation problem with particular attention to preserving discourse coherence across sequential instructions.

4.1 Formulation

We aim to generate a sequence of images V = {v_i} given a goal text g and step texts S = {s_i}. Formally:

p(V | g, S) = p(v_1, ..., v_n | g, s_1, ..., s_n).
A naive approach assumes each v_i depends only on (g, s_i):

p(v_1, ..., v_n | g, S) ≈ ∏_{i=1}^{n} p(v_i | g, s_i),

but it often fails to maintain coherence across multiple steps (see Figure 1). Although a fully joint model (Menon et al., 2023) better captures global context, it is fixed in step count and grows computationally expensive. We therefore introduce a pairwise factorization:

p(v_1, ..., v_n | g, S) ≈ ∏_{i<j} p(v_i, v_j | g, s_i, s_j),

allowing localized interactions across adjacent steps without incurring a combinatorial explosion. We implement this with Stable Diffusion XL to learn p(v_i, v_j | g, s_i, s_j), striking a balance between capturing relevant cross-step dependencies and maintaining efficiency.

Notably, our formulation views instruction generation as a sequence of overlapping pairs that collectively encode the trajectory. This implicitly enforces shared elements (e.g., recurring objects) while allowing each pair to focus on the step-specific text. Such a decomposition is particularly useful in real-world tasks like cooking or assembly, where continuous reference to previously introduced items or partial progress is required.

4.2 Enhanced Cross-Image Consistency

Given pairs (v_i, s_i) and (v_j, s_j), we first encode each image into latent tensors z_i, z_j ∈ R^{w×h×d} via a VAE. We assemble Z⊤ = [z_i⊤, z_j⊤] ∈ R^{(2m)×d}, where m = w × h. Inspired by self-attention, we create a mask M̂ ∈ R^{(2m)×(2m)} that selectively highlights temporal neighbors:

Softmax( M̂ ⊙ (QK⊤ / √d) ),

where Q, K, V are projections of Z⊤. Unlike standard transformers, this mask enforces step ordering by restricting attention mostly to one's own latent slice and the adjacent slice. Figure 5 illustrates the idea: each latent can still glimpse cross-step features, but is guided toward local consistency. This design is motivated by the fact that many procedural tasks exhibit visual continuity; e.g., the same tool or object might appear in subsequent steps.
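The adjacency-masked attention described above can be sketched as follows. The mask weights (full weight within a step's own latent slice, reduced weight across slices), the toy dimensions, and the single-head projections are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

# Sketch of Sec. 4.2: two latent slices z_i, z_j (one per step) are
# stacked and attention logits are modulated elementwise by a mask M
# that up-weights within-slice entries and down-weights cross-slice
# ones. The weights 1.0 / 0.5 are illustrative assumptions.

rng = np.random.default_rng(0)
m, d = 4, 8                                   # m = w*h positions per image

z_i = rng.standard_normal((m, d))
z_j = rng.standard_normal((m, d))
Z = np.concatenate([z_i, z_j], axis=0)        # (2m, d) stacked latents

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv

# Mask: full weight within each step's own slice, reduced across steps.
M = np.full((2 * m, 2 * m), 0.5)
M[:m, :m] = 1.0
M[m:, m:] = 1.0

logits = M * (Q @ K.T) / np.sqrt(d)           # M ⊙ (QK^T / sqrt(d))
attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
out = attn @ V                                # (2m, d) fused representation
```

Because the mask scales logits rather than zeroing them out, cross-step positions still receive some attention mass, which is exactly the "glimpse cross-step features, but favor local consistency" behavior described above.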
Our modified attention ensures that the generated image for step j inherits relevant context from step i, such as object identity or environment setting, yielding smoother transitions between steps without explicitly encoding an entire multi-step prompt at once.

4.3 Effective Text Encoding

Our instructional dataset often contains long, detailed step descriptions exceeding typical encoder capacity. To retain critical information, we employ a constituency parser to segment each step into coherent clauses (e.g., verb phrases). Each clause is then separately encoded and concatenated. For goal text (usually short and high-level), we adopt CLIP-ViT/L (Radford et al., 2021), while steps are encoded by OpenCLIP-ViT/G (Cherti et al., 2023), which better handles extended text. This division preserves essential details, particularly when multiple objects or sub-steps appear in a single instruction, minimizing semantic loss.

4.4 Preference Optimization

Our model conditions on (g, s) sampled from a pretrained LLM and aims to match ground-truth images more closely. Let c = (g, s) and x_T ∼ N(0, I):

J(θ) = E_{c, x_T} [ r( sample(θ, g, s, x_T), g, s ) ],

where r(·) quantifies the image-text alignment. We freeze the LLM and update only the diffusion parameters θ via gradient-based feedback through the sampling steps, improving fidelity without retraining the language backbone.

4.5 Training and Optimization

We
iteratively ascend the gradient of J(θ), passing reward signals backward through each diffusion step. Over multiple epochs, the model aligns generated visuals more tightly with textual goals/steps. By decoupling text generation from our image diffuser, we can leverage large language models directly, focusing on refining visual accuracy and coherence for complex, multi-step tasks.

Figure 2: We randomly sample pairs (v_i, s_i) and (v_j, s_j), apply a custom adjacency mask to fuse latent representations, and decode them back into images. The constituency parser (Sec. 4.3) splits text to handle length constraints.

5 Experiment

5.1 Datasets

CaptainCook4D: The CaptainCook4D dataset comprises 384 cooking videos covering 24 cooking tasks. According to the dataset's creators, its primary objective was to understand errors in procedural activities, so some of the participants' operations in the videos do not align with the steps in the instructions. To address this, we eliminated the erroneous steps based on the annotation labels.

HT-Step: HT-Step is a substantial dataset featuring temporal annotations of instructional steps in cooking videos, derived from the HowTo100M dataset. It encompasses 116k segment-level annotations across 20k narrated videos, totaling 401 unique tasks. HT-Step offers unprecedented scale and diversity in terms of tasks and the richness of natural language descriptions of steps, providing a robust foundation for aligning instructional articles with how-to videos.

WikiAll: Another significant wellspring of instructional visual datasets stems from how-to articles. We surveyed prior research that harnesses these datasets, merging and filtering three distinct sources: VGSI (Yang et al., 2021b) and two other datasets derived from WikiHow, as detailed in (Wu et al., 2024; Yang et al., 2022), to produce a synthetic dataset termed WikiAll.
This consolidated dataset comprises 87,651 tasks.

5.2 Text-Image Alignment

Traditional diffusion model evaluations prioritize image quality, assessing whether generated images resemble real photographs or exhibit aesthetic appeal. However, in instructional visual modeling, the alignment of text with images is crucial. This alignment is typically assessed using tools like CLIPScore (Hessel et al., 2021) due to their simplicity and cost-effectiveness. CLIPScore quantifies semantic accuracy by computing the cosine similarity between the text and the corresponding image, scaled by a factor w to enlarge the differentiation between distributions; we take w = 2.5 to be consistent. Despite its utility, CLIPScore often assigns low scores in instructional contexts when either goal or step text alignment is inadequate, as illustrated in Fig. 5. This recurring issue is likely due to instructional texts being composed of complex natural language, which does not align well with CLIP's capabilities, challenging its effectiveness in such scenarios.

To address the limitations of standard methods like CLIP, we first developed a human evaluation procedure where evaluators assess the alignment between images and goal/step texts. We maintained the same score scale as CLIPScore, where 2.5 indicates high alignment and 0 denotes no relevance. Given that the dataset is intended to illustrate task execution, we anticipated a strong correlation between the visual information and the associated texts
on goals and steps. However, our results reveal that evaluators often struggled to identify clear connections between the goal texts and the visual content. This issue may arise because the goal texts tend to provide contextual rather than explicit visual references. Moreover, although evaluators were instructed to assign higher scores for strong perceived alignment, the scores did not reach the upper end of the scale.

Figure 3: We demonstrate that the goal text often serves as contextual information with limited relation to the visual content, and CLIP frequently produces the highest scores across various datasets. Additionally, the MLLM often fails to align with human judgment, providing high scores that do not correspond with the intended goals.

This suggests that while there is generally better alignment between step texts and images than between goal texts and images, there is still a lack of consensus among evaluators about what constitutes maximal alignment. Consequently, relying solely on goal-text-image alignment for evaluation may be inadequate, as this metric often demands additional reasoning and imagination that current models do not capture.

Building on these insights, we further evaluated two popular methods: ImageBind and LanguageBind. Although these occasionally outperformed CLIP on certain datasets (e.g., the CaptainCook4D dataset), they also showed weaknesses, such as lower image-step-text alignment scores on the same dataset, as shown in Fig. 5. In search of a more effective approach, we leveraged a caption model to generate image captions, subsequently comparing these captions with the original texts. We employed two widely used MLLMs, BLIP2 and LLaVA, for captioning, detailed in the appendix. Furthermore, we introduced a text encoder (BehnamGhader et al., 2024) that uses a decoder-only model as an encoder to assess the similarity of long texts.
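This caption-then-compare pipeline can be sketched as follows. Here `caption_image` and `embed_text` are hypothetical stand-ins for the MLLM captioner (BLIP2/LLaVA) and the decoder-only text encoder; neither is a real API from those models:

```python
import numpy as np

# Sketch of the caption-based alignment evaluation: an MLLM describes
# the generated image, and a long-text encoder scores similarity between
# that caption and the original step text. Both model calls below are
# deterministic placeholders, not the actual BLIP2/LLaVA/LLM2Vec APIs.

def caption_image(image) -> str:
    """Stand-in for the MLLM captioner."""
    return "a person mixing flour in a bowl"

def embed_text(text: str) -> np.ndarray:
    """Stand-in for the text encoder: a deterministic unit vector."""
    seed = sum(ord(c) for c in text) % (2**32)
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)

def alignment_score(image, step_text: str) -> float:
    caption = caption_image(image)
    # Cosine similarity of unit vectors is just their dot product.
    return float(embed_text(caption) @ embed_text(step_text))

score = alignment_score(None, "Sift together flour, baking soda, and salt.")
```

In practice the similarity between the generated caption and the step text serves both as an evaluation score and, as discussed in Sec. 4.4, as a reward signal for fine-tuning.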
Our findings reveal that this approach aligns well with human judgment across three datasets, particularly in terms of step-text alignment. Notably, in the HT-Step dataset, the score histograms varied, but the fitted Kernel Density Estimates (KDE) were nearly identical, indicating strong agreement both with human evaluators and between models. By integrating MLLMs and the text encoder, we established a robust evaluation framework that serves not only as an evaluator but also as a reward model to assist in fine-tuning, discussed further in Sec. 4.4.

5.3 Evaluation

Both "Goal Faithfulness" and "Step Faithfulness" use CLIP to measure how well an image aligns with its intended text. They compare the CLIP similarity of the image with the correct text (either the goal or the step text) against texts from other goals or steps. Both metrics rely on text-image alignment scores compared with other random steps, using relative CLIP scores as the standard. However, previous sections show that CLIP scores correlate poorly with human preferences. Initial experiments also revealed that the "Cross-Image Consistency" metric in previous work struggles with diffusion model text prompts that fail to differentiate between steps, often generating very similar images for different steps, as shown in Figure 1.

Essentially, our goal is to model the distribution discrepancy. In our work, we use the Kullback-Leibler
(KL) divergence and the Chi-square test as metrics to quantify this discrepancy. This approach addresses shortcomings observed in previously proposed metrics, enhancing the accuracy of text-image alignment evaluations to better align with human preferences. We choose the following baseline models:

Stable Diffusion 1.5 (SD15) (Rombach et al., 2022): SD15 is a text-to-image diffusion model that produces photorealistic images from textual descriptions. The 1.5 version has been fine-tuned through 595k steps at 512x512 resolution using the "laion-aesthetics v2 5+" dataset, with a 10% reduction in text-conditioning to enhance classifier-free guidance sampling.

Stable Diffusion 2.1 Turbo (SD21) (Rombach et al., 2022): SD21 Turbo is a distillation of SD2.1, itself an iteration of SD2.0 trained from scratch over 550k steps at 256x256 resolution on a subset of LAION-5B, yielding improved results compared to SD1.5.

Figure 4: Compared to goal-image alignment, MLLMs perform better in aligning step text with images, showing high agreement with human judgment. Although human evaluators tend to assign a range of scores, they generally award high scores.
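The KL-divergence and chi-square comparisons of Sec. 5.3 can be sketched as follows: alignment scores from human raters and from a model are histogrammed over shared bins, and the two statistics quantify how far the model's score distribution is from the human one. The binning, smoothing, and toy data below are illustrative choices, not the paper's exact setup:

```python
import numpy as np

# Sketch of the distribution-discrepancy metrics: compare two score
# distributions (human vs. model) on the 0-2.5 alignment scale.
# Bin count, smoothing epsilon, and toy data are assumptions.

def score_histogram(scores, bins=10, lo=0.0, hi=2.5, eps=1e-8):
    hist, _ = np.histogram(scores, bins=bins, range=(lo, hi))
    p = hist.astype(float) + eps       # smooth empty bins before the log
    return p / p.sum()                 # normalize to a distribution

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same bins."""
    return float(np.sum(p * np.log(p / q)))

def chi_square(p, q):
    """Chi-square statistic between two binned distributions."""
    return float(np.sum((p - q) ** 2 / q))

rng = np.random.default_rng(0)
human = np.clip(rng.normal(1.6, 0.4, 500), 0.0, 2.5)   # toy human scores
model = np.clip(rng.normal(1.2, 0.5, 500), 0.0, 2.5)   # toy model scores

p, q = score_histogram(human), score_histogram(model)
kl, chi2 = kl_divergence(p, q), chi_square(p, q)
```

Lower values of either statistic indicate that the model's alignment-score distribution more closely matches the human one, which is the reading convention used in Table 1.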
Table 1: Comparison of text-image alignment distribution across procedural language datasets

                                      HTStep               Captain4D            Wikiall
Method            GT     Captioner   KL     Chi2   FID    KL     Chi2   FID    KL     Chi2   FID
SD1.5             Human  llava       13.01  6.67   53.50  13.46  7.09   57.09  14.55  6.21   51.99
SD1.5             Human  blip2       12.81  6.23   -      12.55  7.00   -      14.35  6.18   -
SD2.1             Human  llava       14.06  6.67   47.31  13.77  6.91   49.24  14.23  6.23   45.78
SD2.1             Human  blip2       13.81  6.68   -      13.45  6.93   -      14.64  6.15   -
StackedDiffusion  Human  llava       13.01  6.67   39.65  12.74  7.05   40.21  14.55  6.22   37.72
StackedDiffusion  Human  blip2       12.91  6.56   -      11.72  7.87   -      14.12  6.19   -
SDXL              Human  llava       13.90  6.20   33.23  12.92  6.41   35.54  13.28  5.20   36.67
SDXL              Human  blip2       13.02  6.43   -      14.08  6.20   -      13.95  5.18   -
Ours              Human  llava       12.71  5.92   31.87  11.92  4.71   34.04  12.18  4.70   32.49
Ours              Human  blip2       12.48  5.22   -      10.62  4.91   -      11.79  4.43   -

Note: Lower values indicate better alignment between procedural language and generated visual representation (×100 for clarity). The FID column provides an image quality baseline across datasets, independent of the language-vision alignment measures.

Stable Diffusion XL Turbo (SDXL) (Podell et al., 2023): SDXL is the latest and most powerful open-source diffusion model in the Stable Diffusion series, featuring a UNet that is three times larger and incorporating a second text encoder (OpenCLIP ViT-bigG/14) alongside the original, significantly increasing its parameter count.

StackedDiffusion (Menon et al., 2023): This work is closely related to our approach of tiling along a single spatial dimension and using diffusion latent representations to generate instruction sequences.

From the table, it is clear that our method outperformed the baseline models in both the Chi-squared (CHI2) and KL-divergence tests. However, we also observed that although the images generated by the baseline models are not as convincing as illustrations, they received high scores when judged by the MLLM.
Upon reviewing the generated captions, we found that the MLLM tends to assign high scores when the object names in the text match, even if the actions described differ. This indicates potential areas for further improvement.

Conclusion and Limitations

In this work, we introduced a
text-conditioned diffusion framework that decomposes instructional content into goals and step-by-step instructions. Our method leverages a large language model for textual representation, combined with a feedback mechanism for image-text alignment, to generate coherent illustrations of procedural activities. The experimental results demonstrate the effectiveness of our model over baselines on multiple datasets, supported by a comprehensive evaluation protocol.

However, several challenges remain. First, our approach struggles with abstract or implicit actions that do not manifest as clear visual features. Second, while we ensure per-step alignment, uniformity across consecutive images can still be improved, especially for extended multi-step processes. Lastly, integrating multi-turn LLM reasoning into the generative process is a promising avenue for capturing more nuanced instructions and facilitating broader applicability. We will explore these directions in future work.

References

Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, and Lorenzo Torresani. 2023. HT-Step: Aligning instructional articles with how-to videos. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. 2022. Frozen in time: A joint video and image encoder for end-to-end retrieval. Preprint, arXiv:2104.00650.

Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. 2024. LLM2Vec: Large language models are secretly powerful text encoders. Preprint, arXiv:2404.05961.

James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. Improving image generation with better captions.
Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. 2024. Training diffusion models with reinforcement learning. Preprint, arXiv:2305.13301.

Ziyi Chang, George Alex Koulieris, and Hubert P. H. Shum. 2023. On the design fundamentals of diffusion models: A survey. Preprint, arXiv:2306.04542.

Dongping Chen, Ruoxi Chen, Shilin Zhang, Yinuo Liu, Yaochen Wang, Huichi Zhou, Qihui Zhang, Pan Zhou, Yao Wan, and Lichao Sun. 2024a. MLLM-as-a-Judge: Assessing multimodal LLM-as-a-judge with vision-language benchmark. Preprint, arXiv:2402.04788.

Muxi Chen, Yi Liu, Jian Yi, Changran Xu, Qiuxia Lai, Hongliang Wang, Tsung-Yi Ho, and Qiang Xu. 2024b. Evaluating text-to-image generative models: An empirical study on human image synthesis. arXiv preprint arXiv:2403.05125.

Yixiong Chen, Li Liu, and Chris Ding. 2023. X-IQE: Explainable image quality evaluation for text-to-image generation with visual large language models. Preprint, arXiv:2305.10843.

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. 2023. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2818–2829.

Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. 2023. Directly fine-tuning diffusion models on differentiable rewards. Preprint, arXiv:2309.17400.

Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, Matthew Yu, Abhishek Kadian, Filip Radenovic, Dhruv Mahajan, Kunpeng Li, Yue Zhao, Vladan Petrovic, Mitesh Kumar Singh, Simran Motwani, and 7 others. 2023. Emu: Enhancing image generation models using photogenic needles in a haystack. Preprint,
arXiv:2309.15807.

Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023. RAFT: Reward ranked finetuning for generative foundation model alignment. Preprint, arXiv:2304.06767.

Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. 2023. DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. Preprint, arXiv:2305.16381.

Daniel Fried, Jean-Baptiste Alayrac, Phil Blunsom, Chris Dyer, Stephen Clark, and Aida Nematzadeh. 2020. Learning to segment actions from observation and narration. Preprint, arXiv:2005.03684.

Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.

Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. Preprint, arXiv:2208.01618.

Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, and Thomas Brox. 2020. COOT: Cooperative hierarchical transformer for video-text representation learning. Preprint, arXiv:2011.00597.

Tengda Han, Weidi Xie, and Andrew Zisserman. 2022. Temporal alignment networks for long-term video. Preprint, arXiv:2204.02968.

Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. 2023. Optimizing prompts for text-to-image generation. Preprint, arXiv:2212.09611.

Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Preprint, arXiv:2006.11239.

Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. 2023.
Pick-a-Pic: An open dataset of user preferences for text-to-image generation. Preprint, arXiv:2305.01569.

Hilde Kuehne, Ali Arslan, and Thomas Serre. 2014. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 780–787.

Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. 2023. Aligning text-to-image models using human feedback. Preprint, arXiv:2302.12192.

Kyungmin Lee, Hyeongjin Kang, and Sungwoong Kim. 2025. Calibrated multi-preference optimization for aligning diffusion models. arXiv preprint arXiv:2502.02588.

Shufan Li, Konstantinos Kallidromitis, Akash Gokul, Yusuke Kato, and Kazuki Kozuka. 2024. Aligning diffusion models by optimizing human utility. ArXiv, abs/2404.04465.

Xudong Lin, Fabio Petroni, Gedas Bertasius, Marcus Rohrbach, Shih-Fu Chang, and Lorenzo Torresani. 2022. Learning to recognize procedural activities with distant supervision. Preprint, arXiv:2201.10990.

Runtao Liu, Ning Duan, Zhenyang Yang, and 1 others. 2025. VideoDPO: Omni-preference alignment for video diffusion generation. arXiv preprint arXiv:2412.14167. To appear at CVPR 2025.

Ye Liu, Siyuan Li, Yang Wu, Chang Wen Chen, Ying Shan, and Xiaohu Qie. 2022. UMT: Unified multi-modal transformers for joint video moment retrieval and highlight detection. Preprint, arXiv:2203.12745.

Calvin Luo, Jacob Lambert, Eric Fu, and 1 others. 2025. Solving new tasks by adapting internet video knowledge. arXiv preprint arXiv:2504.15369.

Sachit Menon, Ishan Misra, and Rohit Girdhar. 2023. Generating illustrated instructions. Preprint, arXiv:2312.04552.

Antoine Miech, Jean-Baptiste Alayrac,
Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2020. End-to-end learning of visual representations from uncurated instructional videos. Preprint, arXiv:1912.06430.

Rohith Peddi, Shivvrat Arya, Bharath Challa, Likhitha Pallapothula, Akshay Vyas, Jikai Wang, Qifan Zhang, Vasundhara Komaragiri, Eric Ragan, Nicholas Ruozzi, Yu Xiang, and Vibhav Gogate. 2023. CaptainCook4D: A dataset for understanding errors in procedural activities. Preprint, arXiv:2312.14556.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. 2023. SDXL: Improving latent diffusion models for high-resolution image synthesis. Preprint, arXiv:2307.01952.

Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. 2023. Aligning text-to-image diffusion models with reward backpropagation. Preprint, arXiv:2310.03739.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, and 1 others. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695.

Karan Samel, Nitish Sontakke, and Irfan Essa. 2025. Leveraging procedural knowledge and task hierarchies for efficient instructional video pre-training. arXiv preprint arXiv:2502.17352.

Eyal Segalis, Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. 2023. A picture is worth a thousand words: Principled recaptioning improves image generation. Preprint, arXiv:2310.16656.

Sebastian Stein and Stephen Mckenna. 2013. User-adaptive models for recognizing food preparation activities. pages 39–44.
Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. 2019. COIN: A large-scale dataset for comprehensive instructional video analysis. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1207–1216.

Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. 2023. Diffusion model alignment using direct preference optimization. Preprint, arXiv:2311.12908.

Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048.

Jifang Wang, Rundi Cui, Lei Hou, and 1 others. 2025. CIGEval: A unified agentic framework for evaluating conditional image generation. arXiv preprint arXiv:2504.07046.

Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.

Haiwan Wei, Chen Zhao, Abhishek Garg, and Jianbo Shi. 2025. InstructionBench: An instructional video understanding benchmark. arXiv preprint arXiv:2504.05040.

Sihao Wu, Kun Yin, Yanhong Liu, and Di He. 2025. Preference alignment on diffusion models: A comprehensive survey for image generation and editing. arXiv preprint arXiv:2502.07829.

Te-Lin Wu, Alex Spangher, Pegah Alipoormolabashi, Marjorie Freedman, Ralph Weischedel, and Nanyun Peng. 2024. Understanding multimodal procedural knowledge by sequencing multimodal instructional manuals. Preprint, arXiv:2110.08486.

Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li.
2023. Hu- man preference score v2: A solid benchmark for eval- uating human preferences of text-to-image synthesis. Preprint , arXiv:2306.09341. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. 2023. Imagereward: Learning and evaluating human preferences for text-to-image generation. Preprint , arXiv:2304.05977. Zhiyuan Yan, Yutong Liu, Yaqi Yuan, and 1 others. 2025. Gpt-imgeval: A benchmark for diagnosing gpt-4v’s image generation. arXiv preprint arXiv:2504.02782 . Dejie Yang, Zijing Zhao, and Yang Liu. 2025. Plan- llm: Video procedure planning with refinable large language models. In Proceedings of the 39th AAAI Conference on Artificial Intelligence (AAAI) . Jianwei Yang, Yonatan Bisk, and Jianfeng Gao. 2021a. Taco: Token-aware cascade contrastive learning for video-text alignment. Preprint , arXiv:2108.09980. Jinhui Yang, Xianyu Chen, Ming Jiang, Shi Chen, Louis Wang, and Qi Zhao. 2022. Visualhow: Multimodal problem solving. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 15606–15616. Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, and Xiu Li. 2023. Using human feedback to fine-tune diffusion models without any reward model. ArXiv , abs/2311.13231. Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2024. Diffusion models: A comprehensive survey of methods and applications. Preprint , arXiv:2209.00796. Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, and Chris Callison-Burch. 2021b. Visual goal-step inference using wikihow. arXiv preprint arXiv:2104.05845 . Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gok- berk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly super- vised learning from instructional videos. 
A Appendix

A.1 Prompt for Generating Image

Goal: Make Classic Chocolate Chip Cookies
Step 1: Sift together flour, baking soda, and salt.
Step 2: Beat eggs and vanilla extract into the butter mixture.
Step 3: Gradually mix the dry ingredients into the wet mixture.
Step 4: Stir in chocolate chips.
Step 5: Chill the dough for 30 minutes in the refrigerator.
Step 6: Drop dough by rounded spoonfuls onto baking sheets.
Step 7: Bake and let cool on a wire rack.

Goal: Make Pupusas (Salvadoran Stuffed Corn Cakes)
Step 1: Combine 2 cups of masa harina (corn flour) with about 1.5 cups of warm water and a pinch of salt. Mix until the dough is smooth and pliable. If it's too dry, add more water; if too sticky, add more masa harina.
Step 2: Common fillings include shredded cheese, refried beans, and cooked and minced pork (chicharrón). You can mix these fillings together or use them separately.
Step 3: Divide the dough into 8-10 equal portions and roll them into balls.
Step 4: Press each ball into a flat round disc, making a well in the center for the filling. Add your chosen filling, then fold the dough over the filling and seal it by pressing the edges together.
Step 5: Gently flatten the filled balls into thick discs, being careful not to let the filling escape.
Step 6: Heat a skillet or griddle over medium-high heat. Cook each pupusa for about 4-5 minutes on each side until they are golden brown and the surface has some charred spots.
Step 7: Serve hot with curtido (a spicy cabbage slaw) and tomato salsa.

Goal: Make Kolaches (Czech Pastries)
Step 1: Dissolve 1 packet of active dry yeast in 1/4 cup of warm water with a teaspoon of sugar. Let it sit until it becomes frothy, about 5-10 minutes.
Step 2: In a large bowl, mix 4 cups of flour, 1/4 cup of sugar, and a pinch of salt. In a separate bowl, beat 2 eggs with 1 cup of warm milk and 1/4 cup of melted butter. Combine the yeast mixture with the egg mixture, then gradually add to the dry ingredients to form a dough.
Step 3: Turn the dough onto a floured surface and knead until smooth and elastic, about 8-10 minutes. Place in a greased bowl, cover, and let it rise in a warm place until doubled in size, about 1 hour.
Step 4: Fillings can vary from sweet (like fruit preserves or sweetened cream cheese) to savory (like sausage or cheese). Prepare your chosen filling.
Step 5: Punch down the dough and divide it into about 24 small balls. Flatten each ball slightly, then press a deep indent in the center. Fill the indent with your filling.
Step 6: Arrange the filled dough balls on a baking sheet, cover, and let rise for another 30 minutes.
Step 7: Preheat your oven to 375 °F (190 °C). Bake the kolaches for 15-20 minutes or until golden brown.

Goal: Make Grilled Steak
Step 1: Rub the raw steak with salt, pepper, and garlic for seasoning.
Step 2: Heat the grill to a high temperature of 450 °F (232 °C).
Step 3: Sear the seasoned steak on the grill, flipping once, until it develops a rich, golden crust, about 4-5 minutes per side.
Step 4: During the final minutes, baste the steak with a mixture of melted butter and aromatic herbs such as thyme and rosemary.
Step 5: Transfer the steak from the grill to a plate and allow it to rest, enhancing its juiciness.
Step 6: Cut the steak against the grain into thin slices, revealing the tender, cooked interior.
Step 7: Plate the sliced steak with your chosen sides, serving it hot and ready to enjoy.

A.2 Prompt for Generating Caption

Choice 1: "Describe this image <image> in a detailed manner"
Choice 2: "What happened in the picture <image>? Answer in short sentences."
Choice 3: "Briefly say the content of this scene <image>"
Choice 4: "Show the content in the photo <image> in short text."
Choice 5: "Please describe the content of the image <image> in a few words."
Choice 6: "What is the content of the image <image>? Please answer in short sentences."
Choice 7: "Can you give me a brief description of this image <image>?"
Choice 8: "What do you see in this picture <image>?"
Choice 9: "In a few words, describe the content of the image <image>."
Choice 10: "Provide a concise explanation of this photograph <image>."
Choice 11: "What is happening in this scene <image>?"
Choice 12: "Summarize the content of the photo <image>."
Choice 13: "What are the main elements present in the image <image>?"
Choice 14: "Quickly explain the content of this visual <image>."
Choice 15: "In a nutshell, what can you say about this picture <image>?"
Choice 16: "What's the main subject in the image <image>?"
Choice 17: "Describe the main features of the image <image>."
Choice 18: "What is depicted in this photograph <image>?"
Choice 19: "Give me a short description of the picture <image>."
Choice 20: "Briefly describe the objects and actions in the image <image>."
Choice 21: "What is the context of this image <image>?"
(One of the above is chosen at random when generating captions.)

A.3 Successful and Failed Cases

We also observed a biased situation where the presence of certain words consistently triggers similar figures, regardless of how much additional instructional text we include in the goal or step descriptions. For example, as shown in Figure 5, when the word "steak" appears, the figure invariably displays an angel, failing to represent the progression. This occurs despite the stove being explicitly mentioned in the prompt; the stove still does not appear in the image.

Figure 5: Images of successful and failed generation
Beyond Static Testbeds: An Interaction-Centric Agent Simulation Platform for Dynamic Recommender Systems

Song Jin1*, Juntian Zhang1*, Yuhan Liu1†, Xun Zhang2, Yufei Zhang2, Guojun Yin2, Fei Jiang2, Wei Lin2, Rui Yan1,3†
1Gaoling School of Artificial Intelligence, Renmin University of China, 2Meituan, 3Wuhan University
jinsong8@ruc.edu.cn, zhangjuntian@ruc.edu.cn

Abstract

Evaluating and iterating upon recommender systems is crucial, yet traditional A/B testing is resource-intensive, and offline methods struggle with dynamic user-platform interactions. While agent-based simulation is promising, existing platforms often lack a mechanism for user actions to dynamically reshape the environment. To bridge this gap, we introduce RecInter, a novel agent-based simulation platform for recommender systems featuring a robust interaction mechanism. On the RecInter platform, simulated user actions (e.g., likes, reviews, purchases) dynamically update item attributes in real time, and the introduced Merchant Agents can reply, fostering a more realistic and evolving ecosystem. High-fidelity simulation is ensured through a Multidimensional User Profiling module, an Advanced Agent Architecture, and an LLM fine-tuned on Chain-of-Thought (CoT)-enriched interaction data. Our platform achieves significantly improved simulation credibility and successfully replicates emergent phenomena like Brand Loyalty and the Matthew Effect. Experiments demonstrate that this interaction mechanism is pivotal for simulating realistic system evolution, establishing our platform as a credible testbed for recommender systems research.

1 Introduction

Recommender systems are integral to modern digital platforms, shaping user experiences and driving engagement across diverse domains. The ability to effectively evaluate and iterate upon these systems is paramount for their continued improvement and success (Ricci et al., 2010).
*Equal contribution to this work. †Corresponding authors.

However, traditional online A/B testing, while considered the gold standard, is often resource-intensive, time-consuming, and can carry risks associated with deploying untested algorithms to real users. Offline evaluation methods, conversely, struggle to capture the dynamic, interactive nature of user-platform engagement. Agent-based simulation has emerged as a powerful and cost-effective paradigm to bridge this gap, offering a controlled environment to test new algorithms, understand user behavior, and explore complex system dynamics such as filter bubbles, information cocoons, and user conformity behaviors (Zhang et al., 2024; Wang et al., 2025).

Early recommendation simulators, such as Virtual Taobao (Shi et al., 2019) and RecSim (Ie et al., 2019), primarily relied on rule-based user models that, while useful, limited their behavioral realism and adaptability. The recent advent of Large Language Models (LLMs) has catalyzed a new wave of sophistication in agent-based modeling (Wang et al., 2023b; Xi et al., 2025). In the recommendation domain, RecAgent (Wang et al., 2025) pioneered the use of LLM agents to simulate diverse user activities on a platform. Agent4Rec (Zhang et al., 2024) and SimUSER (Bougie and Watanabe, 2025) enhance the reliability of user behavior simulation in recommender systems by incorporating enriched user profiling and specially designed memory mechanisms. However, a critical aspect often underrepresented in existing recommendation simulations is a deeply integrated interaction mechanism. Real-world platforms are not static: item attributes (e.g., popularity, average ratings, review sentiment) evolve dynamically based on continuous user feedback and even merchant interventions. Research (Lee and
Hosanagar, 2021; Cui et al., 2012; Li et al., 2019) has already demonstrated the significant impact of these dynamic item attributes on user decision-making. This reciprocal relationship, where user actions reshape the environment and, in turn, influence future user decisions, is crucial for realistic simulation.

To address this gap, we introduce RecInter, a novel agent-based simulation platform for recommender systems incorporating an interaction mechanism. Our platform allows simulated user actions such as liking, reviewing, or purchasing items to dynamically update item attributes in real time. Furthermore, we introduce Merchant Agents capable of replying to users, further enriching the dynamic nature of the simulated ecosystem. To achieve high-fidelity simulations, our platform integrates three components. First, a Multidimensional User Profiling module constructs detailed user profiles from historical data. Second, our platform's User Agents are engineered with sophisticated memory systems (encompassing both perceptual and cognitive faculties) and advanced action selection mechanisms to more accurately emulate human decision-making processes. Third, a Behavior Simulation Training pipeline is employed to fine-tune the LLM-based agents using high-quality, CoT-enriched interaction data.

arXiv:2505.16429v1 [cs.CL] 22 May 2025

Our contributions are threefold:

•We develop a realistic recommendation simulation platform, RecInter, featuring a novel interaction mechanism where user feedback and merchant replies dynamically alter item attributes, fostering a more lifelike and evolving environment.

•Through Multidimensional User Profiling, an Advanced Agent Architecture, and Behavior Simulation Training, we achieve higher authenticity of simulated user behaviors, significantly surpassing previous methods.
•We explored and validated the crucial role of the interaction mechanism in modeling realistic system dynamics using RecInter. RecInter successfully reproduced the Brand Loyalty and Matthew Effect phenomena, demonstrating its credibility as a reliable testbed for recommender systems research.

2 Related Work

2.1 LLM-based Agents

Large Language Model (LLM) agents are autonomous computational entities that perceive, decide, and act within their environment (Xi et al., 2025). With the emergence of LLMs, agent-based systems have gained renewed attention (Wang et al., 2023b). The Generative Agent framework (Park et al., 2023b) introduced agents with memory, planning, and reflection, simulating human cognition. Recent efforts divide LLM agents into task-oriented and simulation-oriented categories (Xi et al., 2025).

Task-oriented agents are designed to accomplish specific goals defined by users. For example, Voyager (Wang et al., 2023a) enables LLM-driven navigation in Minecraft, while ChatDev (Qian et al., 2023) and AutoGen (Wu et al., 2023) build collaborative multi-agent systems for software development. Simulation-oriented agents, on the other hand, focus on modeling human-like behaviors and social dynamics. The SANDBOX framework (Liu et al., 2023) explores social issues through multi-agent interaction to support LLM alignment with ethical norms, while WereWolf and AgentSims (Xu et al., 2023; Lin et al., 2023) use game-like environments to examine complex group dynamics. FPS (Liu et al., 2024b) and FUSE (Liu et al., 2024c) respectively explore the use of LLM-based agents for simulating the propagation and the evolution of fake news. Our framework provides a more accurate simulation of user reviews and merchant responses in
recommendation scenarios.

2.2 Recommendation Simulation

Recommendation simulators have emerged as a cornerstone of recommender systems research (Afsar et al., 2023; Yang et al., 2021; Luo et al., 2022; Liu et al., 2025), offering a cost-effective alternative to online testing and addressing persistent challenges such as serendipitous discovery and filter bubbles (Huang et al., 2020; Chen et al., 2019). Early simulators primarily served as data sources for reinforcement learning applications. Notable examples include Virtual Taobao (Shi et al., 2019), which simulates e-commerce user behaviors, and RecSim (Ie et al., 2019), which provides toolkits for sequential recommendation simulations. However, these conventional simulators often relied on simplistic rules, limiting their flexibility and validity. The recent advent of LLM-powered agents has shown remarkable potential in approximating human-like intelligence (Wang et al., 2023b), opening new avenues for more sophisticated recommendation simulators. A notable example is RecAgent (Wang et al., 2025), which pioneered the development of a recommendation platform integrating diverse user behaviors. Agent4Rec (Zhang et al., 2024) proposes an agent system composed of LLMs to simulate recommendation systems, and SimUSER (Bougie and Watanabe, 2025) is an agent framework that simulates human-like behavior to evaluate recommender algorithms, using self-consistent personas and memory modules. Building on these advancements, our research explores how user feedback and merchant replies dynamically influence item attributes, enabling more realistic recommendation simulations.

3 Methodology

Our simulation platform, RecInter (as illustrated in Figure 1), is designed to emulate a realistic recommendation scenario. To achieve this objective, we focus on two key aspects: (1) enhancing the credibility of user simulation, and (2) constructing an interactive recommendation platform environment.
To improve user simulation accuracy, we introduce modules including Multidimensional User Profiling, an Advanced User Agent Architecture, and a Behavior Simulation Training pipeline. Additionally, we build an interactive recommendation platform environment that incorporates dynamic updates, merchant replies, and recommendation algorithms.

3.1 Problem Formulation

Let $\mathcal{U}$ denote the set of users and $\mathcal{I}$ represent the item set. For each user $u \in \mathcal{U}$, we first extract user profiles from their historical interaction sequences $H_u = \{(i_1, r_1, c_1), (i_2, r_2, c_2), \ldots, (i_{N_u}, r_{N_u}, c_{N_u})\}$, where $i_j \in \mathcal{I}$ represents an interacted item with its rich contextual information, $r_j \in \{1, 2, 3, 4, 5\}$ denotes the rating provided by the user, and $c_j$ denotes the textual review provided by the user for item $i_j$. We construct a user profile pool $P = \{P(u) \mid u \in \mathcal{U}\}$, where each profile $P(u)$ encodes information extracted from the user's historical interactions. This pool forms the basis for instantiating simulated user agents.

RecInter operates for $T$ time steps. At each time step $t$, the platform recommends a set of items $R_t \subset \mathcal{I}$ to the simulated user agent, who then provides feedback $F_t$ based on their preferences. This feedback subsequently updates the attributes $A$ of items on the platform. Our objective is to minimize the behavioral discrepancy $D(B_{\mathrm{real}}, B_{\mathrm{sim}})$ between the simulated user agents and real users, thereby creating a realistic simulation environment for recommender systems research.

3.2 Multidimensional User Profiling

As presented in Figure 2, Multidimensional User Profiling involves constructing the user's objective, subjective,
and inferred profile.

Objective Profile Building on the statistical metrics $T_{\mathrm{act}}$, $T_{\mathrm{conf}}$, and $T_{\mathrm{cons}}$ proposed by Zhang et al. (2024), we further introduce a set of novel indicators aimed at enhancing the realism of the user's objective profile. We conducted a systematic analysis of product categories within the user interaction history $H_u$ and identified the top-$k$ most frequently interacted item categories as $T_{\mathrm{cate}} = \{c_1, c_2, \ldots, c_k\}$, where $k$ is set to 30. We also calculated the top-$k$ most frequently interacted items as $T_{\mathrm{item}} = \{i_1, i_2, \ldots, i_k\}$, where $k$ is set to 10. We use $T_{\mathrm{rate}} = \frac{1}{k}\sum_{i=1}^{k} r_i$ to calculate the user's historical average rating. In our designed RecInter, users are able to leave reviews for items, so it is necessary to additionally consider the characteristics of user reviews. We define the probability that a user leaves a review as $T_{\mathrm{repr}} = \frac{1}{k}\sum_{i=1}^{k} \mathbb{I}(r_i \neq \emptyset)$. To represent the average length of a user's historical reviews, we use $T_{\mathrm{relen}} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{len}(r_i)$, where $\mathrm{len}(r_i)$ denotes the length of the $i$-th review. Additionally, we explore the user's review style by extracting the top-$N$ most frequent keywords from their reviews using TF-IDF (Salton and Buckley, 1988). We denote the set of these keywords as $T_{\mathrm{rekey}} = \{w_1, w_2, \ldots, w_N\}$, where $N$ is set to 20.

Subjective Profile While the objective profile constitutes a crucial part of the user profile, relying solely on statistical indicators often fails to capture the more nuanced aspects of user preferences. To address this, we leverage an LLM to derive the subjective user profile. Specifically, for each user, we randomly sample 60 items from their historical interactions $H_u$ and apply an LLM (GPT-4o) to perform information augmentation, generating more detailed descriptions for these items. The items' basic information $A$, augmented content $\bar{A}$, and user ratings $r$ are provided as inputs to the LLM to facilitate the construction of subjective user profiles.
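As an illustration, assembling the LLM input for subjective profiling might look like the following sketch; the field names (`attrs`, `augmented`, `rating`) and the function name are our assumptions, not the paper's actual schema:

```python
import json
import random

def build_subjective_profile_prompt(history, n_samples=60, seed=0):
    """Sketch: sample items from a user's interaction history and assemble
    the LLM input for subjective profiling (hypothetical field names)."""
    sampled = random.Random(seed).sample(history, min(n_samples, len(history)))
    # One line per sampled item: basic attributes A, augmented content, rating r.
    lines = [
        f"Item: {json.dumps(it['attrs'])} | Details: {it['augmented']} | Rating: {it['rating']}"
        for it in sampled
    ]
    instruction = (
        "Summarize this user's taste preferences, consumption budget range, "
        "scenario preferences, and consumption habits."
    )
    return instruction + "\n" + "\n".join(lines)

history = [{"attrs": {"name": f"item{i}"}, "augmented": "a detailed description",
            "rating": 5} for i in range(3)]
prompt = build_subjective_profile_prompt(history)
```

The seeded sampler keeps the 60-item draw reproducible for a given user.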
This approach enables the model to summarize key aspects of the subjective profile, including taste preferences, consumption budget range, scenario preferences, and consumption habits. The prompt used and one profile case are provided in Appendix D.1 and Appendix C.1 respectively.

Inferred Profile User reviews have been shown to indirectly reflect personal profile (Sachdeva and McAuley, 2020; Srifi et al., 2020). Despite this, prior user simulations in recommender systems have largely ignored the potential of review data. To address this, we leverage an LLM with a carefully designed prompt to infer the user profile from reviews. The input includes 60 items with each item's basic attributes $A$, augmented content $\bar{A}$, user ratings $r$, and review texts $c$. To reduce hallucinations, the model is instructed to output "unknown" when the inference is uncertain. This process yields inferred profile elements such as estimated age range, occupation type, income level, life status, price sensitivity, quality consciousness, service preferences, points of concern, and review language style. The prompt used and one inferred profile case are provided in Appendix D.2 and Appendix C.2 respectively.

Figure 1: The overall framework of RecInter. The User Agent, equipped with a user profile, memory module, and action module, interacts with the platform by taking actions that can modify the attributes of items. In response, the platform, guided by a recommendation algorithm, returns updated items to the user, thereby completing the interaction loop. Similarly, the Merchant Agent is also capable of participating in this dynamic process.

Figure 2: Multidimensional User Profiling framework.

3.3 Advanced User Agent Architecture

3.3.1 Memory Module

Recommender systems face unique challenges, including a vast number of items, shifting user interests, and the influence of users' cognitive states. To address this, we introduce perceptual memory and cognitive memory.

Perceptual Memory Perceptual memory stores an agent's historical interactions and serves as the agent's past experiences. To enable faster and more accurate retrieval while conserving prompt space, we perform a simplification process that retains only the most essential information from each interaction.
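A minimal sketch of such a simplified record is shown below; the record fields mirror the triple described in this section, while the input key names are our assumptions:

```python
from typing import NamedTuple

class PerceptualMemory(NamedTuple):
    """Simplified perceptual-memory record: recommended item ids,
    the action taken, and the time step -- nothing else is kept."""
    items: tuple   # ids of items recommended at step t
    action: str    # the agent's action at step t
    t: int         # time step

def simplify(raw_interaction: dict) -> PerceptualMemory:
    # Drop full item attributes, thoughts, etc.; keep only the essentials.
    # The input keys ("recommended", "action", "step") are illustrative.
    return PerceptualMemory(
        items=tuple(i["id"] for i in raw_interaction["recommended"]),
        action=raw_interaction["action"],
        t=raw_interaction["step"],
    )
```

Storing a flat tuple instead of full item records keeps retrieval cheap and conserves prompt space, as the text motivates.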
Specifically, each perceptual memory is represented as $m^p_t = (l_t, a_t, t)$, where $l_t = \{i_1, i_2, \ldots, i_n\}$ denotes the list of recommended items received by the agent at time step $t$, and $a_t$ represents the agent's action at that time.

Cognitive Memory In RecInter, agents engage in internal reasoning and judgment before taking actions, analogous to how individuals think before making a purchase. This internal cognitive process influences both current and future decisions. For instance, when presented with two chip flavors, a user may choose one now while planning to try the other later. To capture this process, we define cognitive memory at time step $t$ as $m^c_t = (\bar{l}_t, s_t, a_t, t)$, where $\bar{l}_t$ is the list of recommended items with their attributes, $s_t$ is the agent's thought process, $a_t$ is the action taken, and $t$ is the time step.

At time step $t$, the agent retrieves relevant information from its perceptual memory $M^p = [m^p_1, m^p_2, \ldots, m^p_{N_p}]$ and cognitive memory $M^c = [m^c_1, m^c_2, \ldots, m^c_{N_c}]$ before taking an action. Following Park et al. (2023a), each memory is assigned an importance score:

$$\mathrm{score}_i = \alpha e^{-\gamma \cdot \Delta t} + \beta \, \mathrm{sim}(m_i, \mathcal{R}_t)$$

where $\Delta t$ is the time elapsed since the memory was formed, $\mathrm{sim}(m_i, \mathcal{R}_t)$ denotes the similarity between memory $m_i$ and the current reasoning context $\mathcal{R}_t$, and $\alpha$, $\beta$, $\gamma$ are tunable parameters. In addition to retrieving the top-scoring memories, we impose adaptive thresholds $\theta_p$ and $\theta_c$ to dynamically adjust the maximum number of retrieved memories for each memory type. The specific hyperparameters are included in Appendix A.3.

3.3.2 Action Module

We use a "think-then-act" approach, similar to chain-of-thought reasoning, which enables simulated users to behave more similarly to real users, following Yang et al. (2024). Inspired by real-world recommendation platforms, we designed a rich set of user actions that more closely reflect authentic interactions. Unlike previous works (Zhang et al., 2024; Wang et al., 2025), our simulated users exhibit interactive actions that can dynamically alter the attributes $A$ of items within the system. The action space includes: do nothing, like product, dislike product, share product, purchase product, create review, like review, and dislike review.

3.4 Interactive Platform Environment

3.4.1 Interaction Implementation

To achieve a more realistic simulation of recommender systems, we incorporate an interaction mechanism into RecInter. Specifically, we implement a set of database tables associated with items, which store their dynamic attributes $A$. The agents' actions are enhanced such that each action $a$ can update these tables in real time. At each recommendation time step, the platform queries the most recent item attributes $A$ from the database and presents them to the simulated users. This method allows the platform to update items dynamically based on user feedback and enables interaction.

3.4.2 Merchant Reply

In real-world recommendation platforms, merchants can also change the attributes of their items, such as responding to user reviews or modifying product descriptions. To better simulate this interactive environment, we incorporate a merchant agent into RecInter. The merchant agent autonomously updates item attributes based on its own strategy and interacts with users through reviews. This addition enables the study of merchant behavior within recommender systems.

3.4.3 Recommendation Algorithm

Recommendation algorithms also constitute a critical component in the simulation of recommender systems.
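As a sketch of how such a component might be made pluggable, consider the following interface; the `Recommender` base class and all names are hypothetical, not RecInter's actual API:

```python
from abc import ABC, abstractmethod

class Recommender(ABC):
    """Hypothetical plug-in interface for a simulation's algorithm module."""
    @abstractmethod
    def recommend(self, user_id: str, candidates: list, k: int) -> list:
        """Return the top-k items for a user from the candidate pool."""

class MostPopular(Recommender):
    """Toy baseline: rank candidates by their current like count,
    which in an interactive platform changes as agents act."""
    def recommend(self, user_id, candidates, k):
        ranked = sorted(candidates, key=lambda item: item["likes"], reverse=True)
        return ranked[:k]

items = [{"id": "a", "likes": 3}, {"id": "b", "likes": 9}, {"id": "c", "likes": 5}]
top2 = MostPopular().recommend("u1", items, k=2)  # items "b" then "c"
```

Because the interface takes the candidate pool as input, any algorithm that implements `recommend` can be swapped in without touching the rest of the simulation loop.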
In RecInter, we have integrated a variety of algorithms, including random, most popular, LightGCN (He et al., 2020), MultVAE (Liang et al., 2018), and MF (Koren et al., 2009). This module is designed to be flexible and extensible, allowing for the incorporation of custom recommendation algorithms as well. These algorithms aim to recommend items that users are likely to be interested in, thereby enhancing user satisfaction and engagement within the simulation.

3.5 Behavior Simulation Training

We adopt a Chain-of-Thought (CoT) fine-tuning approach to enhance the reliability of agent simulation in recommender systems. To construct our training dataset, we used GPT-4o as the base model and ran RecInter multiple times, guiding the model to "think-then-act". This process generated a substantial number of simulated interactions enriched with CoT reasoning. To ensure the quality of the data, we implemented a multi-stage filtering pipeline consisting of four key components: (1) Format Filter: Ensures that the model outputs conform to the required structural format. (2) Preference Filter: Verifies alignment between the agent's simulated actions and the user's actual preferences by leveraging real user interaction data. Specifically, we check whether positively interacted items appeared in the user's real interactions and whether negatively interacted items did not. (3) LLM Filter: Utilizes an LLM to assess the plausibility and consistency of the simulated outputs. (4)
Human Filter: Involves manual verification to further ensure data quality. Following this pipeline, we curated a high-quality dataset comprising 5,295 CoT-enhanced samples. We then fine-tuned the Qwen-2.5-7B-Instruct model, resulting in the base model that achieved the best simulation performance. The fine-tuning settings are provided in Appendix A.4.

4 Experiment

4.1 Simulation Credibility

Settings We employ the fine-tuned Qwen2.5-7B-Instruct as the default base model for RecInter. By default, the recommendation algorithm is LightGCN, with 10 time steps and 1,000 user agents. GoogleLocal serves as the default dataset. Additional experimental settings and variations are specified in the corresponding sections.

Baselines Our baselines include RecAgent (Wang et al., 2025), Agent4Rec (Zhang et al., 2024), and SimUSER (Bougie and Watanabe, 2025). Please refer to Appendix A.2 for more details.

4.1.1 Metric-Based Evaluation

This part of the experimental setup follows the evaluation used in Zhang et al. (2024). In the experiments, 1,000 agents provide feedback on 20 random items. These 20 items mix items the user really interacted with and non-interacted items at a 1:m ratio. Each agent selects the items they are interested in. Based on the correctness of the agents' selections, we compute evaluation metrics such as Accuracy, Precision, Recall, and F1 Score. Experiments are conducted on three real-world datasets: Google Local (Li et al., 2022), MovieLens (Harper and Konstan, 2015), and AmazonBook (McAuley et al., 2015). The preprocessing steps for the datasets are described in Appendix A.1. All experimental results are shown in Table 1. The results demonstrate that our approach outperforms existing methods across all datasets, indicating that the proposed Multidimensional User Profiling and Behavior Simulation Training significantly enhance the accuracy of user simulation in recommender systems.
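The metric computation in this protocol can be sketched as follows (a minimal implementation under the stated 1:m mixing; all function and variable names are ours):

```python
def evaluate_selection(selected: set, positives: set, negatives: set):
    """Compute Accuracy, Precision, Recall, and F1 for an agent that picked
    `selected` out of `positives` (really interacted items) mixed with
    `negatives` (non-interacted distractors) at a 1:m ratio."""
    tp = len(selected & positives)       # correctly chosen true items
    fp = len(selected & negatives)       # distractors wrongly chosen
    fn = len(positives - selected)       # true items missed
    tn = len(negatives - selected)       # distractors correctly skipped
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# 1:3 ratio example: 5 true items mixed with 15 distractors
pos, neg = set(range(5)), set(range(5, 20))
acc, p, r, f1 = evaluate_selection({0, 1, 2, 5}, pos, neg)
# acc = 0.85, p = 0.75, r = 0.6
```

This also illustrates why larger m makes recall harder to sustain, as seen in the 1:9 rows of Table 1.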
4.1.2 LLM-Based Evaluation

To address the limitations of metric-based evaluations in capturing complex agent behaviors, we introduce an LLM-based evaluation that assesses performance across the entire simulation process. The detailed experimental procedures are provided in Appendix B.6, and the complete results are presented in Table 5. As shown, our method achieves the highest Adjusted Win Rate of 0.6917. To further validate the reliability of our LLM-based evaluation method, we conducted a human evaluation study in Appendix B.7.

4.1.3 Macro-Level Evaluation

In addition to assessing simulation credibility from the perspective of the user agent, we further evaluated the overall credibility of the simulation by comparing the alignment between real data and simulation results in terms of the distribution of actions and items.

Actions Distribution We conducted a comparative analysis of behavioral distributions between real users and simulated agents across three representative actions: like, dislike, and review. Real user data was extracted from the GoogleLocal dataset, while simulated agent data was obtained by executing a full simulation process and then collecting the corresponding action statistics. As shown in Figure 3, the distribution patterns between the two
agents. For real users, we identified the top-10 most popular items from the GoogleLocal dataset and computed their frequency distributions. Sim- ilarly, for the simulated agents, we analyzed the top-10 most frequently interacted items based on the simulation results. As shown in Figure 11, five out of the top six most popular items overlapped between the two groups, and the overall popularity distributions showed a high degree of similarity. 4.2 Impact of Interaction Mechanism This part of the experiment shows that introducing an interaction mechanism significantly affects the evolutionary process of recommender system sim- ulation, highlighting its indispensable role in the simulation. In Section 4.2.1 and Section 4.2.2, we investigate the impact of the presence or absence of interaction and malicious interaction on the sim- ulation evolutionary process. In the Appendix B.1, we demonstrate that the interactive attitude of mer- chants also shapes this process. 4.2.1 Impact of the Presence or Absence of Interaction We conducted two simulations: one incorporating an interaction mechanism and the other without it, while keeping all other settings same. The goal was to observe differences in the simulated evolution- ary process between the two simulations, focusing on the changes in the number of likes received by two restaurants—McDonald’s and Denny’s. Mc- Donald’s represents the most popular option, while Denny’s is considered moderately popular. The statistical results are presented in the Figure 4. 
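The two-condition comparison just described amounts to tracking cumulative likes per item under each setting. A minimal sketch, with invented per-step counts (the real numbers come from the full agent simulation):

```python
# Illustrative sketch of the Sec. 4.2.1 comparison: accumulate per-step like
# counts for one item under two simulation conditions and inspect the gap.
# The per-step values below are made up for demonstration.
from typing import List

def cumulative_likes(per_step_likes: List[int]) -> List[int]:
    totals, running = [], 0
    for n in per_step_likes:
        running += n
        totals.append(running)
    return totals

with_interaction = [2, 3, 5, 9, 14, 20]    # toy data: growth accelerates later
without_interaction = [2, 3, 4, 5, 6, 7]   # toy data: roughly linear growth
gap = [a - b for a, b in zip(cumulative_likes(with_interaction),
                             cumulative_likes(without_interaction))]
```

Plotting the two cumulative series against the time step gives curves of the kind shown in Figure 4.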
Table 1: Metric-based comparison of simulation credibility across different methods. Our approach (RecInter) achieves superior performance compared to baselines across all datasets.

Method | GoogleLocal (Acc / Prec / Rec / F1) | MovieLens (Acc / Prec / Rec / F1) | AmazonBook (Acc / Prec / Rec / F1)
RecAgent (1:1) | 0.5643 / 0.5832 / 0.5342 / 0.5576 | 0.5807 / 0.6391 / 0.6035 / 0.6205 | 0.6035 / 0.6539 / 0.6636 / 0.6587
RecAgent (1:3) | 0.5012 / 0.6134 / 0.3765 / 0.4666 | 0.5077 / 0.7396 / 0.3987 / 0.5181 | 0.6144 / 0.6676 / 0.4001 / 0.5003
RecAgent (1:9) | 0.4625 / 0.6213 / 0.1584 / 0.2523 | 0.4800 / 0.7491 / 0.2168 / 0.3362 | 0.6222 / 0.6641 / 0.1652 / 0.2647
Agent4Rec (1:1) | 0.6281 / 0.6134 / 0.6223 / 0.6178 | 0.6912 / 0.7460 / 0.6914 / 0.6982 | 0.7190 / 0.7276 / 0.7335 / 0.7002
Agent4Rec (1:3) | 0.6012 / 0.6456 / 0.3905 / 0.4866 | 0.6675 / 0.7623 / 0.4210 / 0.5433 | 0.6707 / 0.6909 / 0.4423 / 0.5098
Agent4Rec (1:9) | 0.5786 / 0.6631 / 0.2042 / 0.3112 | 0.6175 / 0.7753 / 0.2139 / 0.3232 | 0.6617 / 0.6939 / 0.2369 / 0.3183
SimUSER (1:1) | 0.6823 / 0.6312 / 0.6754 / 0.6526 | 0.7912 / 0.7976 / 0.7576 / 0.7771 | 0.8221 / 0.7969 / 0.7841 / 0.7904
SimUSER (1:3) | 0.6489 / 0.6624 / 0.3893 / 0.4904 | 0.7737 / 0.8173 / 0.5223 / 0.6373 | 0.6629 / 0.7547 / 0.5657 / 0.6467
SimUSER (1:9) | 0.6042 / 0.6923 / 0.2187 / 0.3324 | 0.6791 / 0.8382 / 0.3534 / 0.4972 | 0.6497 / 0.7588 / 0.3229 / 0.4530
RecInter (1:1) | 0.7143 / 0.6646 / 0.7057 / 0.6854 | 0.7947 / 0.8092 / 0.7595 / 0.7812 | 0.8302 / 0.8049 / 0.7901 / 0.7975
RecInter (1:3) | 0.6753 / 0.7038 / 0.4312 / 0.5357 | 0.7852 / 0.8236 / 0.5474 / 0.6476 | 0.6804 / 0.7651 / 0.5813 / 0.6614
RecInter (1:9) | 0.6218 / 0.7580 / 0.2508 / 0.3769 | 0.6869 / 0.8391 / 0.3638 / 0.5054 | 0.6634 / 0.7631 / 0.3214 / 0.4547

Figure 4: Impact of interaction mechanism on likes.

The findings indicate a significant difference in the likes for McDonald's under the two simulation conditions. In the simulation with the interaction mechanism, McDonald's received a substantially higher number of likes by the end of the simulation. More specifically, from time step t = 3 onward, the increase in likes for McDonald's became notably greater in the interaction-enabled simulation.
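The "from t = 3 onward" observation corresponds to finding the first time step at which the two cumulative like curves diverge. A small hypothetical helper (series values invented for illustration):

```python
# Find the first time step where two cumulative series differ by more than a
# threshold; a sketch of how the divergence point in Sec. 4.2.1 could be read
# off the curves. The series values are toy data, not the paper's results.
from typing import List, Optional

def first_divergence(series_a: List[int], series_b: List[int],
                     threshold: int = 0) -> Optional[int]:
    for t, (a, b) in enumerate(zip(series_a, series_b), start=1):
        if abs(a - b) > threshold:
            return t
    return None  # the series never diverge beyond the threshold

likes_with = [3, 6, 13, 24, 40]     # cumulative likes, interaction enabled
likes_without = [3, 6, 10, 15, 21]  # cumulative likes, interaction disabled
t_div = first_divergence(likes_with, likes_without)
```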
This can be attributed to a surge in likes, sales, and reviews for McDonald's at that point, which likely influenced agents to favor McDonald's more frequently. In contrast, the difference between the two simulations for Denny's was relatively minor. This suggests that Denny's attracted insufficient attention in the early stages of the simulation and therefore exerted limited influence on the later stages. Additionally, a case study on agent responses in Appendix C.3 further demonstrates that interaction influences user decision-making during the simulation.

4.2.2 Impact of Malicious Interaction

We further investigated the impact of malicious interaction on the evolution of the simulation. Specifically, at time step t = 5, we introduced three malicious reviews targeting McDonald's. By comparing the trends in user likes, purchases, and reviews with and without these reviews, we assessed their influence on user behavior. The results are presented in Figure 5. Compared to the scenario without malicious reviews, their inclusion led to a noticeable deceleration in the growth of likes, purchases, and reviews. These findings suggest that malicious reviews can significantly influence user decision-making, which aligns with real-world observations.

4.3 Ablation Study

In the ablation study, we evaluate the contributions of the Multidimensional User Profiling and Behavior Simulation Training modules. The experimental setup follows the same configuration as described in Section 4.1.1, with the parameter m set to 1. The w/o personalization variant directly uses the LLM to summarize the user's profile from 60 sampled historical interactions. The w/o training variant employs an untrained Qwen-2.5-7B-Instruct as the agent. As shown in Table 2, RecInter achieves the best performance, indicating that both Multidimensional User Profiling and Behavior Simulation Training play critical roles in enhancing the realism of user behavior simulation.
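Returning to the intervention of Section 4.2.2, the experimental pattern is to run the same step-based simulation loop twice, injecting malicious reviews for the target item at a chosen time step in one run. The sketch below uses a deliberately toy behavior model (negative reviews slow like growth); the paper's agents decide via LLM reasoning, so everything here is an illustrative stand-in.

```python
# Hedged sketch of the malicious-review intervention (Sec. 4.2.2): inject
# n_reviews negative reviews for `target` at step `inject_at` and compare
# like trajectories against a clean run. The behavior model is a toy stand-in.
from collections import defaultdict
from typing import List, Optional

def run_simulation(steps: int, inject_at: Optional[int] = None,
                   target: Optional[str] = None, n_reviews: int = 3) -> List[int]:
    reviews = defaultdict(list)
    likes_per_step = []
    base_likes = 5  # toy constant demand for the target item
    for t in range(1, steps + 1):
        if t == inject_at:
            reviews[target].extend(["malicious review"] * n_reviews)
        # Toy rule: each negative review suppresses one like per step.
        penalty = len(reviews[target])
        likes_per_step.append(max(base_likes - penalty, 1))
    return likes_per_step

baseline = run_simulation(steps=10)
attacked = run_simulation(steps=10, inject_at=5, target="McDonald's")
```

Comparing the two trajectories step by step reproduces the qualitative shape of Figure 5: identical growth before the injection, slower growth afterward.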
In Appendix B.4, we conduct ablation studies replacing different base models, demonstrating that our fine-tuned model achieves the best performance. Furthermore, Appendix B.5 evaluates the impact of different recommendation algorithms and shows that an effective recommendation strategy significantly enhances the engagement of simulated users on the platform.

Table 2: Ablation study results over 2 variants.

Method | Accuracy | Precision | Recall | F1 Score
w/o personalization | 0.5733 | 0.5865 | 0.5641 | 0.5601
w/o training | 0.6715 | 0.6229 | 0.6732 | 0.6471
RecInter | 0.7143 | 0.6646 | 0.7057 | 0.6854

4.4 Phenomenon Observation

In Section 4.4.1 and Section 4.4.2, we reproduce the phenomena of Brand Loyalty and the Matthew Effect. We also analyze the Conservative Behavior phenomenon through the reviews of simulated users in Appendix B.2.

Figure 5: Impact of malicious interaction on the cumulative number of likes for McDonald's.

Figure 6: Distribution of user interactions across items.

4.4.1 Brand Loyalty

We analyzed the proportion of interactions each item received at the final stage of the simulation relative to the total number of interactions, as shown in Figure 6. The results indicate that brand-related items were significantly more popular, with McDonald's and Starbucks accounting for 15.2% and 11.7% of all interactions, respectively. This suggests the presence of Brand Loyalty among the simulated users. To further investigate this observed Brand Loyalty, we conducted an additional