text string | source string |
|---|---|
providing robust support for system optimization and reliable deployment [207].

C. Chapter Summary
This chapter provides a comprehensive summary of the key research challenges and future directions for LAMs and Agentic AI in future intelligent communication systems. It systematically reviews the core limitations hindering their performance and deployment, along with potential solutions. For LAMs, we highlight major challenges such as delayed communication data updates and learning, insufficient reasoning capabilities, limited interpretability, and deployment difficulties. We propose that these issues can be addressed through the adoption of techniques such as continual learning, long-chain reasoning, explainable AI, and model compression and distillation, thereby enhancing the model's capabilities in knowledge acquisition, logical reasoning, interpretability, and edge deployment in dynamic communication environments. For Agentic AI, we summarize the challenges related to insufficient coverage of communication knowledge, weak system scalability, complex agent control mechanisms, and the absence of robust evaluation methodologies. We emphasize that advancing Agentic RAG mechanisms guided by dynamic knowledge, distributed control architectures, unified control protocols (MCP, A2A, ACP), and process-oriented evaluation frameworks will be crucial for future development. Collectively, these insights offer a systematic analysis and forward-looking guidance to drive intelligent communication systems toward greater autonomy, interpretability, and practical utility.

VII. CONCLUSION
This tutorial provides a systematic review of the development trajectory and key technological pathways from LAMs to Agentic AI in future intelligent communication systems. First, we present a comprehensive overview of the core components and classification methods of LAMs, covering models such as Transformer, ViT, VAE, Diffusion, DiT, and MoE, and differentiating the applicability of LLMs, LVMs, LMMs, LRMs, and lightweight LAMs in communication tasks. We then propose a LAM construction paradigm tailored for communication systems, encompassing three critical aspects: dataset construction, internal training mechanisms (e.g., pre-training, fine-tuning, and alignment), and external learning mechanisms (e.g., RAG and KG), to guide effective model learning in communication scenarios. Building on this foundation, we construct a LAM-based Agentic AI system framework by defining its core modules, including planners, knowledge bases, tools, and memory modules, and by outlining the interaction mechanisms for both single-agent and multi-agent settings. We further propose a multi-agent system for data retrieval, collaborative planning, and reflective evaluation. At the application level, we systematically summarize the practical value and potential of LAMs and Agentic AI in key communication tasks, including semantic communication, the IoT, edge intelligence, network management, network security, UAV communication, and other emerging applications. Finally, we identify the core challenges currently facing LAMs and Agentic AI in communications and outline future research directions. This work offers a systematic reference and theoretical foundation to support the evolution of intelligent communication systems from model-driven to agent-driven paradigms.

REFERENCES
[1] Z. Zhang, Y. Xiao, Z. Ma, M. Xiao, Z. Ding, X. Lei, G. K. Karagiannidis, and P.
Fan, “6g wireless networks: Vision, requirements, archi- tecture, and key technologies,” IEEE vehicular technology magazine , vol. 14, no. 3, pp. 28–41, 2019. [2] J. D. M.-W. C. Kenton and L. K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” | https://arxiv.org/abs/2505.22311v1 |
in Proceedings of naacL-HLT , vol. 1, 2019, p. 2. [3] A. Radford, “Improving language understanding by generative pre- training,” 2018. [4] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al. , “Language models are unsupervised multitask learners,” OpenAI blog , vol. 1, no. 8, p. 9, 2019. [5] T. B. Brown, “Language models are few-shot learners,” arXiv preprint arXiv:2005.14165 , 2020. [6] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y . Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” Journal of machine learning research , vol. 21, no. 140, pp. 1–67, 2020. [7] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al. , “Training language models to follow instructions with human feedback,” Advances in neural information processing systems , vol. 35, pp. 27 730–27 744, 2022. [8] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y . Lo et al. , “Segment anything,” in Proceedings of the IEEE/CVF International Conference on Computer Vision , Paris, France, October 2023, pp. 4015–4026. [9] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat et al. , “Gpt-4 technical report,” arXiv preprint arXiv:2303.08774 , 2023. [10] Team, Gemini et al. , “Gemini: A family of highly capable multimodal models,” 2024. [Online]. Available: https://arxiv.org/abs/2312.11805 [11] K. Hayawi and S. Shahriar, “A cross-domain performance report of open ai chatgpt o1 model,” 2024. [12] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi et al. , “Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning,” arXiv preprint arXiv:2501.12948 , 2025. [13] H. Yang, S. Yue, and Y . He, “Auto-gpt for online decision making: Benchmarks and additional opinions,” arXiv preprint arXiv:2306.02224 , 2023. [14] Y . Nakajima, “Babyagi,” GitHub repository , 2023. [15] T. Xie, F. Zhou, Z. Cheng, P. Shi, L. Weng, Y . Liu, T. J. Hua, J. Zhao, Q. Liu, C. Liu et al. , “Openagents: An open platform for language agents in the wild,” arXiv preprint arXiv:2310.10634 , 2023. [16] M. A. Ferrag, N. Tihanyi, and M. Debbah, “From llm reasoning to autonomous ai agents: A comprehensive review,” arXiv preprint arXiv:2504.19678 , 2025. [17] D. B. Acharya, K. Kuppan, and B. Divya, “Agentic ai: Autonomous intelligence for complex goals–a comprehensive survey,” IEEE Access , 2025. [18] M. Gridach, J. Nanavati, K. Z. E. Abidine, L. Mendes, and C. Mack, “Agentic ai for scientific discovery: A survey of progress, challenges, and future directions,” arXiv preprint arXiv:2503.08979 , 2025. [19] F. Jiang, C. Pan, L. Dong, K. Wang, M. Debbah, D. Niyato, and Z. Han, “A comprehensive survey of large ai models for future com- munications: Foundations, applications and challenges,” arXiv preprint arXiv:2505.03556 , 2025. [20] H. Zhou, C. Hu, Y . Yuan, Y . Cui, Y . Jin, | https://arxiv.org/abs/2505.22311v1 |
C. Chen, H. Wu, D. Yuan, L. Jiang, D. Wu et al. , “Large language model (llm) for telecommu- nications: A comprehensive survey on principles, key techniques, and opportunities,” arXiv preprint arXiv:2405.10825 , 2024.[21] G. O. Boateng, H. Sami, A. Alagha, H. Elmekki, A. Hammoud, R. Mizouni, A. Mourad, H. Otrok, J. Bentahar, S. Muhaidat et al. , “A survey on large language models for communication, network, and service management: Application insights, challenges, and future directions,” IEEE Communications Surveys & Tutorials , 2025. [22] Z. Chen, Z. Zhang, and Z. Yang, “Big ai models for 6g wireless networks: Opportunities, challenges, and research directions,” IEEE Wireless Communications , vol. 31, no. 5, pp. 164–172, 2024. [23] Y . Huang, H. Du, X. Zhang, D. Niyato, J. Kang, Z. Xiong, S. Wang, and T. Huang, “Large language models for networking: Applications, enabling techniques, and challenges,” IEEE Network , vol. 39, no. 1, pp. 235–242, 2025. [24] M. Z. Chowdhury, M. Shahjalal, S. Ahmed, and Y . M. Jang, “6g wireless communication systems: Applications, requirements, technolo- gies, challenges, and research directions,” IEEE Open Journal of the Communications Society , vol. 1, pp. 957–975, 2020. [25] F. Jiang, Y . Peng, L. Dong, K. Wang, K. Yang, C. Pan, D. Niyato, and O. A. Dobre, “Large language model enhanced multi-agent systems for 6g communications,” IEEE Wireless Communications , vol. 31, no. 6, pp. 48–55, 2024. [26] F. Jiang, Y . Peng, L. Dong, K. Wang, K. Yang, C. Pan, and X. You, “Large ai model-based semantic communications,” IEEE Wireless Communications , vol. 31, no. 3, pp. 68–75, 2024. [27] J. Zhong, M. Li, Y . Chen, Z. Wei, F. Yang, and H. Shen, “A safer vision-based autonomous planning system for quadrotor uavs with dynamic obstacle trajectory prediction and its application with llms,” in2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW) , 2024, pp. 920–929. [28] Y . Shen, J. Shao, X. Zhang, Z. Lin, H. Pan, D. Li, J. Zhang, and K. B. Letaief, “Large language models empowered autonomous edge ai for connected intelligence,” IEEE Communications Magazine , vol. 62, no. 10, pp. 140–146, 2024. [29] J. Wu, J. Zhu, and Y . Liu, “Agentic reasoning: Reasoning llms with tools for the deep research,” arXiv preprint arXiv:2502.04644 , 2025. [30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information processing systems , vol. 30, 2017. [31] K. Yang, S. Wang, J. Dai, K. Tan, K. Niu, and P. Zhang, “Witt: A wire- less image transmission transformer for semantic communications,” inICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2023, pp. 1–5. [32] Y . Leng, Q. Lin, L.-Y . Yung, J. Lei, Y . Li, and Y .-C. Wu, “Unveiling the power of complex-valued transformers in wireless communications,” arXiv preprint arXiv:2502.11151 , 2025. [33] Y . Tian, Q. Zhao, F. Boukhalfa, K. Wu, F. Bader et al. , “Multimodal transformers for wireless communications: A case study in beam prediction,” arXiv | https://arxiv.org/abs/2505.22311v1 |
preprint arXiv:2309.11811 , 2023. [34] J. Zhang, J. Li, Z. Wang, L. Shi, S. Jin, W. Chen, and H. V . Poor, “Decision transformers for wireless communications: A new paradigm of resource management,” IEEE Wireless Communications , 2025. [35] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al. , “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929 , 2020. [36] M. A. Mohsin, M. Jazib, Z. Alam, M. F. Khan, M. Saad, and M. A. Jamshed, “Vision transformer based semantic communications for next generation wireless networks,” arXiv preprint arXiv:2503.17275 , 2025. [37] G. Gharsallah and G. Kaddoum, “Vit los v2x: Vision transformers for environment-aware los blockage prediction for 6g vehicular networks,” IEEE Access , 2024. [38] G. Zheng, B. Zang, P. Yang, W. Zhang, and B. Li, “Fe-skvit: A feature- enhanced vit model with skip attention for automatic modulation recognition.” Remote Sensing , vol. 16, no. 22, 2024. [39] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114 , 2013. [40] M. Hussien, K. K. Nguyen, and M. Cheriet, “Prvnet: A novel partially- regularized variational autoencoders for massive mimo csi feedback,” in2022 IEEE wireless communications and networking conference (WCNC) . IEEE, 2022, pp. 2286–2291. [41] Y . Bo, Y . Duan, S. Shao, and M. Tao, “Joint coding-modulation for digital semantic communications via variational autoencoder,” IEEE Transactions on Communications , 2024. [42] G. Omondi and T. O. Olwal, “Variational autoencoder-enhanced deep neural network-based detection for mimo systems,” e-Prime-Advances in Electrical Engineering, Electronics and Energy , vol. 6, p. 100335, 2023. 31 [43] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” inInternational conference on machine learning . Lille, France: PMLR, July 2015, pp. 2256–2265. [44] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , New Orleans, LA, USA, June 2022, pp. 10 684– 10 695. [45] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchi- cal text-conditional image generation with clip latents,” arXiv preprint arXiv:2204.06125 , vol. 1, no. 2, p. 3, 2022. [46] J. Betker, G. Goh, L. Jing, T. Brooks, J. Wang, L. Li, L. Ouyang, J. Zhuang, J. Lee, Y . Guo et al. , “Improving image generation with bet- ter captions,” Computer Science. https://cdn. openai. com/papers/dall- e-3. pdf , vol. 2, no. 3, p. 8, 2023. [47] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. L. Denton, K. Ghasemipour, R. Gontijo Lopes, B. Karagol Ayan, T. Salimans et al. , “Photorealistic text-to-image diffusion models with deep language understanding,” Advances in neural information processing systems , vol. 35, pp. 36 479–36 494, 2022. [48] H. Fu, W. Si, and R. Liu, “Conditional denoising diffusion-based channel estimation for fast time-varying mimo-ofdm systems,” Digital Signal Processing , p. 105283, 2025. [49] E. Grassucci, C. Marinoni, A. Rodriguez, and D. Comminiello, “Dif- fusion models for | https://arxiv.org/abs/2505.22311v1 |
audio semantic communication,” in ICASSP 2024- 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . Seoul, Korea: IEEE, April 2024, pp. 13 136– 13 140. [50] Y . Zeng, X. He, X. Chen, H. Tong, Z. Yang, Y . Guo, and J. Hao, “Dmce: Diffusion model channel enhancer for multi-user semantic communication systems,” in ICC 2024-IEEE International Conference on Communications . IEEE, 2024, pp. 855–860. [51] Y . Xu, L. Huang, L. Zhang, L. Qian, and X. Yang, “Diffusion-based radio signal augmentation for automatic modulation classification,” Electronics , vol. 13, no. 11, p. 2063, 2024. [52] W. Peebles and S. Xie, “Scalable diffusion models with transformers,” inProceedings of the IEEE/CVF international conference on computer vision , 2023, pp. 4195–4205. [53] Y . Liu, K. Zhang, Y . Li, Z. Yan, C. Gao, R. Chen, Z. Yuan, Y . Huang, H. Sun, J. Gao et al. , “Sora: A review on background, technology, limitations, and opportunities of large vision models,” arXiv preprint arXiv:2402.17177 , 2024. [54] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, “Adaptive mixtures of local experts,” Neural computation , vol. 3, no. 1, pp. 79–87, 1991. [55] N. Du, Y . Huang, A. M. Dai, S. Tong, D. Lepikhin, Y . Xu, M. Krikun, Y . Zhou, A. W. Yu, O. Firat et al. , “Glam: Efficient scaling of language models with mixture-of-experts,” in International conference on machine learning . PMLR, 2022, pp. 5547–5569. [56] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bam- ford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna, F. Bressand et al. , “Mixtral of experts,” arXiv preprint arXiv:2401.04088 , 2024. [57] C. Zhao, H. Du, D. Niyato, J. Kang, Z. Xiong, D. I. Kim, X. S. Shen, and K. B. Letaief, “Enhancing physical layer communication security through generative ai with mixture of experts,” IEEE Wireless Communications , 2025. [58] R. Zhang, H. Du, Y . Liu, D. Niyato, J. Kang, Z. Xiong, A. Jamalipour, and D. In Kim, “Generative ai agents with large language model for satellite networks via a mixture of experts transmission,” IEEE Journal on Selected Areas in Communications , vol. 42, no. 12, pp. 3581–3596, 2024. [59] J. Gao, Q. Cao, and Y . Chen, “Moe-amc: Enhancing automatic modulation classification performance using mixture-of-experts,” arXiv preprint arXiv:2312.02298 , 2023. [60] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican et al. , “Gem- ini: a family of highly capable multimodal models,” arXiv preprint arXiv:2312.11805 , 2023. [61] G. Team, P. Georgiev, V . I. Lei, R. Burnell, L. Bai, A. Gulati, G. Tanzer, D. Vincent, Z. Pan, S. Wang et al. , “Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context,” arXiv preprint arXiv:2403.05530 , 2024. [62] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozi `ere, N. Goyal, E. Hambro, F. Azhar et al. , “Llama: Open and efficient foundation language models,” arXiv preprint arXiv:2302.13971 , 2023.[63] H. | https://arxiv.org/abs/2505.22311v1 |
Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al. , “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288 , 2023. [64] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan et al. , “The llama 3 herd of models,” arXiv preprint arXiv:2407.21783 , 2024. [65] F. Jiang, L. Dong, Y . Peng, K. Wang, K. Yang, C. Pan, and X. You, “Large ai model empowered multimodal semantic communications,” IEEE Communications Magazine , vol. 63, no. 1, pp. 76–82, 2025. [66] Y . Yao, J. Duan, K. Xu, Y . Cai, Z. Sun, and Y . Zhang, “A survey on large language model (llm) security and privacy: The good, the bad, and the ugly,” High-Confidence Computing , p. 100211, 2024. [67] K. He, X. Chen, S. Xie, Y . Li, P. Doll ´ar, and R. Girshick, “Masked autoencoders are scalable vision learners,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 2022, pp. 16 000–16 009. [68] M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khali- dov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al. , “Dinov2: Learning robust visual features without supervision,” arXiv preprint arXiv:2304.07193 , 2023. [69] S. Tariq, B. E. Arfeto, C. Zhang, and H. Shin, “Segment anything meets semantic communication,” arXiv preprint arXiv:2306.02094 , 2023. [70] F. Jiang, S. Tu, L. Dong, K. Wang, K. Yang, R. Liu, C. Pan, and J. Wang, “Lightweight vision model-based multi-user semantic communication systems,” arXiv preprint arXiv:2502.16424 , 2025. [71] H. Liu, C. Li, Q. Wu, and Y . J. Lee, “Visual instruction tuning,” Advances in neural information processing systems , vol. 36, 2024. [72] H. Zhang, H. Li, F. Li, T. Ren, X. Zou, S. Liu, S. Huang, J. Gao, Leizhang, C. Li et al. , “Llava-grounding: Grounded visual chat with large multimodal models,” in European Conference on Computer Vision . Springer, 2024, pp. 19–35. [73] Z. Guo, R. Xu, Y . Yao, J. Cui, Z. Ni, C. Ge, T.-S. Chua, Z. Liu, and G. Huang, “Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images,” in European Conference on Computer Vision . Springer, 2024, pp. 390–406. [74] L. Qiao, M. B. Mashhadi, Z. Gao, C. H. Foh, P. Xiao, and M. Bennis, “Latency-aware generative semantic communications with pre-trained diffusion models,” arXiv preprint arXiv:2403.17256 , 2024. [75] S. Xu, C. K. Thomas, O. Hashash, N. Muralidhar, W. Saad, and N. Ramakrishnan, “Large multi-modal models (lmms) as universal foundation models for ai-native wireless systems,” arXiv preprint arXiv:2402.01748 , 2024. [76] T. Kawamoto, T. Suzuki, K. Miyama, T. Meguro, and T. Takagi, “Application of frozen large-scale models to multimodal task-oriented dialogue,” arXiv preprint arXiv:2310.00845 , 2023. [77] M. Ghasemi, A. H. Moosavi, and D. Ebrahimi, “A comprehensive survey of reinforcement learning: From algorithms to practical challenges,” 2025. [Online]. Available: https://arxiv.org/abs/2411.18892 [78] S. Zhang, L. Dong, X. Li, S. Zhang, X. Sun, S. Wang, J. Li, R. | https://arxiv.org/abs/2505.22311v1 |
Hu, T. Zhang, F. Wu et al. , “Instruction tuning for large language models: A survey,” arXiv preprint arXiv:2308.10792 , 2023. [79] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V . Le, D. Zhou et al. , “Chain-of-thought prompting elicits reasoning in large language models,” Advances in neural information processing systems , vol. 35, pp. 24 824–24 837, 2022. [80] P. Lewis, E. Perez, A. Piktus, F. Petroni, V . Karpukhin, N. Goyal, H. K ¨uttler, M. Lewis, W.-t. Yih, T. Rockt ¨aschel et al. , “Retrieval- augmented generation for knowledge-intensive nlp tasks,” Advances in Neural Information Processing Systems , vol. 33, pp. 9459–9474, 2020. [81] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, X. Bi, H. Zhang, M. Zhang, Y . Li, Y . Wu et al. , “Deepseekmath: Pushing the limits of mathematical reasoning in open language models,” arXiv preprint arXiv:2402.03300 , 2024. [82] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei et al. , “Qwen2. 5 technical report,” arXiv preprint arXiv:2412.15115 , 2024. [83] R. Coulom, “Efficient selectivity and backup operators in monte-carlo tree search,” in International conference on computers and games . Springer, 2006, pp. 72–83. [84] X. Qu, Y . Li, Z. Su, W. Sun, J. Yan, D. Liu, G. Cui, D. Liu, S. Liang, J. He et al. , “A survey of efficient reasoning for large rea- soning models: Language, multimodality, and beyond,” arXiv preprint arXiv:2503.21614 , 2025. [85] J. Ainslie, J. Lee-Thorp, M. De Jong, Y . Zemlyanskiy, F. Lebr ´on, and S. Sanghai, “Gqa: Training generalized multi-query transformer models from multi-head checkpoints,” arXiv preprint arXiv:2305.13245 , 2023. 32 [86] N. Shazeer, “Fast transformer decoding: One write-head is all you need,” arXiv preprint arXiv:1911.02150 , 2019. [87] P. Zhang, G. Zeng, T. Wang, and W. Lu, “Tinyllama: An open-source small language model,” arXiv preprint arXiv:2401.02385 , 2024. [88] B. Xu, Y . Chen, Z. Wen, W. Liu, and B. He, “Evaluating small language models for news summarization: Implications and factors influencing performance,” arXiv preprint arXiv:2502.00641 , 2025. [89] S. Hu, Y . Tu, X. Han, C. He, G. Cui, X. Long, Z. Zheng, Y . Fang, Y . Huang, W. Zhao et al. , “Minicpm: Unveiling the potential of small language models with scalable training strategies,” arXiv preprint arXiv:2404.06395 , 2024. [90] M. Javaheripi, S. Bubeck, M. Abdin, J. Aneja, S. Bubeck, C. C. T. Mendes, W. Chen, A. Del Giorno, R. Eldan, S. Gopi et al. , “Phi-2: The surprising power of small language models,” Microsoft Research Blog , vol. 1, no. 3, p. 3, 2023. [91] M. Abdin, J. Aneja, H. Awadalla, A. Awadallah, A. A. Awan, N. Bach, A. Bahree, A. Bakhtiari, J. Bao, H. Behl et al. , “Phi-3 technical report: A highly capable language model locally on your phone,” arXiv preprint arXiv:2404.14219 , 2024. [92] H. Zou, Q. Zhao, Y . Tian, L. Bariah, F. Bader, T. Lestable, and M. Debbah, “Telecomgpt: A framework to build telecom-specfic large language models,” | https://arxiv.org/abs/2505.22311v1 |
arXiv preprint arXiv:2407.09424 , 2024. [93] R. Nikbakht, M. Benzaghta, and G. Geraci, “Tspec-llm: An open- source dataset for llm understanding of 3gpp specifications,” arXiv preprint arXiv:2406.01768 , 2024. [94] F. Jiang, W. Zhu, L. Dong, K. Wang, K. Yang, C. Pan, and O. A. Dobre, “Commgpt: A graph and retrieval-augmented multimodal communica- tion foundation model,” arXiv preprint arXiv:2502.18763 , 2025. [95] J. Wang, X. Yi, R. Guo, H. Jin, P. Xu, S. Li, X. Wang, X. Guo, C. Li, X. Xu et al. , “Milvus: A purpose-built vector data management system,” in Proceedings of the 2021 International Conference on Management of Data , 2021, pp. 2614–2627. [96] S. D. M. Henriquez, E. R. Tito, J. F. I. Loo, L. Ruth, and H. Rond ´on, “Graph database with neo4j and the cypher language: An application in mining companies.” [97] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney et al. , “Openai o1 system card,” arXiv preprint arXiv:2412.16720 , 2024. [98] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y . Cao, and K. Narasimhan, “Tree of thoughts: Deliberate problem solving with large language models,” Advances in Neural Information Processing Systems , vol. 36, 2024. [99] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk et al. , “Graph of thoughts: Solving elaborate problems with large language models,” in Proceedings of the AAAI Conference on Artificial Intelligence , vol. 38, no. 16, 2024, pp. 17 682–17 690. [100] L. Wang, W. Xu, Y . Lan, Z. Hu, Y . Lan, R. K.-W. Lee, and E.-P. Lim, “Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models,” arXiv preprint arXiv:2305.04091 , 2023. [101] Y . Gao, Y . Xiong, X. Gao, K. Jia, J. Pan, Y . Bi, Y . Dai, J. Sun, H. Wang, and H. Wang, “Retrieval-augmented generation for large language models: A survey,” arXiv preprint arXiv:2312.10997 , vol. 2, p. 1, 2023. [102] S. Pan, L. Luo, Y . Wang, C. Chen, J. Wang, and X. Wu, “Unifying large language models and knowledge graphs: A roadmap,” IEEE Transactions on Knowledge and Data Engineering , vol. 36, no. 7, pp. 3580–3599, 2024. [103] P. Sahoo, A. K. Singh, S. Saha, V . Jain, S. Mondal, and A. Chadha, “A systematic survey of prompt engineering in large language models: Techniques and applications,” arXiv preprint arXiv:2402.07927 , 2024. [104] A. Ehtesham, A. Singh, G. K. Gupta, and S. Kumar, “A survey of agent interoperability protocols: Model context protocol (mcp), agent communication protocol (acp), agent-to-agent protocol (a2a), and agent network protocol (anp),” arXiv preprint arXiv:2505.02279 , 2025. [105] C. Qu, S. Dai, X. Wei, H. Cai, S. Wang, D. Yin, J. Xu, and J.-R. Wen, “Tool learning with large language models: A survey,” Frontiers of Computer Science , vol. 19, no. 8, p. 198343, 2025. [106] B. Lindemann, T. M ¨uller, H. Vietz, N. Jazdi, and M. Weyrich, “A survey on long short-term memory networks for time series prediction,” Procedia | https://arxiv.org/abs/2505.22311v1 |
Cirp , vol. 99, pp. 650–655, 2021. [107] Y . Han, C. Liu, and P. Wang, “A comprehensive survey on vector database: Storage and retrieval technique, challenge,” arXiv preprint arXiv:2310.11703 , 2023. [108] K. Ueki, “Survey of visual-semantic embedding methods for zero- shot image retrieval,” in 2021 20th IEEE International Conference onMachine Learning and Applications (ICMLA) . IEEE, 2021, pp. 628– 634. [109] N. Shinn, F. Cassano, A. Gopinath, K. Narasimhan, and S. Yao, “Reflexion: Language agents with verbal reinforcement learning,” Advances in Neural Information Processing Systems , vol. 36, pp. 8634– 8652, 2023. [110] R. Sapkota, K. I. Roumeliotis, and M. Karkee, “Ai agents vs. agentic ai: A conceptual taxonomy, applications and challenge,” arXiv preprint arXiv:2505.10468 , 2025. [111] Z. Wang, L. Zou, S. Wei, F. Liao, J. Zhuo, H. Mi, and R. Lai, “Large language model enabled semantic communication systems,” arXiv preprint arXiv:2407.14112 , 2024. [112] P. Jiang, C.-K. Wen, X. Yi, X. Li, S. Jin, and J. Zhang, “Semantic communications using foundation models: Design approaches and open issues,” IEEE Wireless Communications , vol. 31, no. 3, pp. 76–84, 2024. [113] S. Salehi, M. Erol-Kantarci, and D. Niyato, “Llm-enabled data transmission in end-to-end semantic communication,” arXiv preprint arXiv:2504.07431 , 2025. [114] F. Jiang, S. Tu, L. Dong, C. Pan, J. Wang, and X. You, “Large gen- erative model-assisted talking-face semantic communication system,” arXiv preprint arXiv:2411.03876 , 2024. [115] S. Ribouh and O. Saleem, “Large language model-based seman- tic communication system for image transmission,” arXiv preprint arXiv:2501.12988 , 2025. [116] F. Jiang, C. Tang, L. Dong, K. Wang, K. Yang, and C. Pan, “Visual language model based cross-modal semantic communication systems,” IEEE Transactions on Wireless Communications , pp. 1–1, Mar. 2025. [117] F. Jiang, Y . Peng, L. Dong, K. Wang, K. Yang, C. Pan, and X. You, “Large generative model assisted 3d semantic communication,” arXiv preprint arXiv:2403.05783 , 2024. [118] D. Cao, J. Wu, and A. K. Bashir, “Multimodal large language models driven privacy-preserving wireless semantic communication in 6g,” in 2024 IEEE International Conference on Communications Workshops (ICC Workshops) . IEEE, 2024, pp. 171–176. [119] F. Jiang, S. Tu, L. Dong, K. Wang, K. Yang, and C. Pan, “M4sc: An mllm-based multi-modal, multi-task and multi-user semantic commu- nication system,” arXiv preprint arXiv:2502.16418 , 2025. [120] W. Chen, W. Xu, H. Chen, X. Zhang, Z. Qin, Y . Zhang, and Z. Han, “Semantic communication based on large language model for under- water image transmission,” arXiv preprint arXiv:2408.12616 , 2024. [121] A. Kalita, “Large language models (llms) for semantic communication in edge-based iot networks,” arXiv preprint arXiv:2407.20970 , 2024. [122] H. Cui, Y . Du, Q. Yang, Y . Shao, and S. C. Liew, “Llmind: Or- chestrating ai and iot with llm for complex task execution,” IEEE Communications Magazine , vol. 63, no. 4, pp. 214–220, 2025. [123] L. Shen, Q. Yang, Y . Zheng, and M. Li, “Autoiot: Llm-driven automated natural language programming for aiot applications,” arXiv preprint arXiv:2503.05346 , 2025. [124] X. Chen, W. Wu, Z. Li, L. Li, and F. Ji, “Llm-empowered iot for 6g networks: Architecture, challenges, and | https://arxiv.org/abs/2505.22311v1 |
solutions,” arXiv preprint arXiv:2503.13819 , 2025. [125] B. Xiao, B. Kantarci, J. Kang, D. Niyato, and M. Guizani, “Efficient prompting for llm-based generative internet of things,” IEEE Internet of Things Journal , 2024. [126] Y . Otoum, A. Asad, and A. Nayak, “Llm-based threat detec- tion and prevention framework for iot ecosystems,” arXiv preprint arXiv:2505.00240 , 2025. [127] M. Zong, A. Hekmati, M. Guastalla, Y . Li, and B. Krishnamachari, “In- tegrating large language models with internet of things: applications,” Discover Internet of Things , vol. 5, no. 1, p. 2, 2025. [128] M. Shirali, M. F. Sani, Z. Ahmadi, and E. Serral, “Llm-based event abstraction and integration for iot-sourced logs,” in International Con- ference on Business Process Management . Springer, 2024, pp. 138– 149. [129] T. An, Y . Zhou, H. Zou, and J. Yang, “Iot-llm: Enhancing real- world iot task reasoning with large language models,” arXiv preprint arXiv:2410.02429 , 2024. [130] S. Mo, R. Salakhutdinov, L.-P. Morency, and P. P. Liang, “Iot-lm: Large multisensory language models for the internet of things,” arXiv preprint arXiv:2407.09801 , 2024. [131] M. Xu, D. Cai, Y . Wu, X. Li, and S. Wang, “Fwdllm: Efficient fedllm using forward gradient,” arXiv preprint arXiv:2308.13894 , 2023. [132] Z. Yu, Z. Wang, Y . Li, R. Gao, X. Zhou, S. R. Bommu, Y . Zhao, and Y . Lin, “Edge-llm: Enabling efficient large language model adaptation on edge devices via unified compression and adaptive layer voting,” 33 inProceedings of the 61st ACM/IEEE Design Automation Conference , San Francisco, Jun. 2024, pp. 1–6. [133] M. Zhang, X. Shen, J. Cao, Z. Cui, and S. Jiang, “Edgeshard: Efficient llm inference via collaborative edge computing,” IEEE Internet of Things Journal , pp. 1–1, Dec. 2024. [134] W. Zhao, W. Jing, Z. Lu, and X. Wen, “Edge and terminal coop- eration enabled llm deployment optimization in wireless network,” inIEEE/CIC International Conference on Communications (ICCC Workshops) , Hangzhou, Aug. 2024, pp. 220–225. [135] G. Qu, Q. Chen, W. Wei, Z. Lin, X. Chen, and K. Huang, “Mobile edge intelligence for large language models: A contemporary survey,” IEEE Communications Surveys & Tutorials , pp. 1–1, Mar. 2025. [136] A. Khoshsirat, G. Perin, and M. Rossi, “Decentralized llm infer- ence over edge networks with energy harvesting,” arXiv preprint arXiv:2408.15907 , 2024. [137] Z. Lin, G. Qu, Q. Chen, X. Chen, Z. Chen, and K. Huang, “Pushing large language models to the 6g edge: Vision, challenges, and oppor- tunities,” arXiv preprint arXiv:2309.16739 , 2023. [138] O. Friha, M. Amine Ferrag, B. Kantarci, B. Cakmak, A. Ozgun, and N. Ghoualmi-Zine, “Llm-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness,” IEEE Open Journal of the Communications Society , vol. 5, pp. 5799– 5856, 2024. [139] J. Wang, L. Zhang, Y . Yang, Z. Zhuang, Q. Qi, H. Sun, L. Lu, J. Feng, and J. Liao, “Network meets chatgpt: Intent autonomous management, control and operation,” Journal of Communications and Information Networks , vol. 8, no. 3, pp. 239–255, 2023. [140] L. Yue and T. Chen, “Ai large model and 6g network,” in 2023 IEEE Globecom | https://arxiv.org/abs/2505.22311v1 |
Workshops (GC Wkshps) . Kuala Lumpur, Malaysia: IEEE, December 2023, pp. 2049–2054. [141] V . Komanduri, S. Estropia, S. Alessio, G. Yerdelen, T. Ferreira, G. P. Roldan, Z. Dong, and R. Rojas-Cessa, “Optimizing llm prompts for automation of network management: A user’s perspective,” in 2025 International Conference on Artificial Intelligence in Information and Communication (ICAIIC) . IEEE, 2025, pp. 0958–0963. [142] S. K. Mani, Y . Zhou, K. Hsieh, S. Segarra, T. Eberl, E. Azulai, I. Frizler, R. Chandra, and S. Kandula, “Enhancing network management using code generated by large language models,” in Proceedings of the 22nd ACM Workshop on Hot Topics in Networks , 2023, pp. 196–204. [143] H. Lee, M. Kim, S. Baek, N. Lee, M. Debbah, and I. Lee, “Large language models for knowledge-free network management: Feasibility study and opportunities,” arXiv preprint arXiv:2410.17259 , 2024. [144] D. Wu, X. Wang, Y . Qiao, Z. Wang, J. Jiang, S. Cui, and F. Wang, “Netllm: Adapting large language models for networking,” in Proceed- ings of the ACM SIGCOMM 2024 Conference , 2024, pp. 661–678. [145] A. Dandoush, V . Kumarskandpriya, M. Uddin, and U. Khalil, “Large language models meet network slicing management and orchestration,” arXiv preprint arXiv:2403.13721 , 2024. [146] Z. He, A. Gottipati, L. Qiu, X. Luo, K. Xu, Y . Yang, and F. Y . Yan, “Designing network algorithms via large language models,” in Proceedings of the 23rd ACM Workshop on Hot Topics in Networks , 2024, pp. 205–212. [147] H. Yang, K. Xiang, M. Ge, H. Li, R. Lu, and S. Yu, “A comprehen- sive overview of backdoor attacks in large language models within communication networks,” IEEE Network , 2024. [148] S. A. Khowaja, P. Khuwaja, K. Dev, H. A. Hamadi, and E. Zeydan, “Pathway to secure and trustworthy 6g for llms: Attacks, defense, and opportunities,” arXiv preprint arXiv:2408.00722 , 2024. [149] H. Luo, J. Luo, and A. V . Vasilakos, “Bc4llm: Trusted artificial intelligence when blockchain meets large language models,” arXiv preprint arXiv:2310.06278 , 2023. [150] A. Mishra, M. Li, and S. Deo, “Sentinellms: Encrypted input adaptation and fine-tuning of language models for private and secure inference,” in Proceedings of the AAAI Conference on Artificial Intelligence , vol. 38, no. 19, 2024, pp. 21 403–21 411. [151] Z. Zeng, J. Wang, J. Yang, Z. Lu, H. Zhuang, and C. Chen, “Priva- cyrestore: Privacy-preserving inference in large language models via privacy removal and restoration,” arXiv preprint arXiv:2406.01394 , 2024. [152] Y . Wang, Y . Lin, X. Zeng, and G. Zhang, “Privatelora for efficient privacy preserving llm,” arXiv preprint arXiv:2311.14030 , 2023. [153] G. Feretzakis and V . S. Verykios, “Trustworthy ai: Securing sensitive data in large language models,” AI, vol. 5, no. 4, pp. 2773–2800, 2024. [154] J. Zhang, H. Bu, H. Wen, Y . Liu, H. Fei, R. Xi, L. Li, Y . Yang, H. Zhu, and D. Meng, “When llms meet cybersecurity: A systematic literature review,” Cybersecurity , vol. 8, no. 1, pp. 1–41, 2025.[155] R. Zhang, H.-W. Li, X.-Y . Qian, W.-B. Jiang, and H.-X. Chen, “On large language models safety, security, and privacy: | https://arxiv.org/abs/2505.22311v1 |
A survey,” Journal of Electronic Science and Technology , p. 100301, 2025. [156] W. Lee and J. Park, “Llm-empowered resource allocation in wireless communications systems,” arXiv preprint arXiv:2408.02944 , 2024. [157] H. Du, G. Liu, Y . Lin, D. Niyato, J. Kang, Z. Xiong, and D. I. Kim, “Mixture of experts for intelligent networks: A large language model- enabled approach,” in 2024 International Wireless Communications and Mobile Computing (IWCMC) , Ayia Napa, Cyprus, May 2024, pp. 531– 536. [158] X. Peng, Y . Liu, Y . Cang, C. Cao, and M. Chen, “Llm-optira: Llm- driven optimization of resource allocation for non-convex problems in wireless communications,” arXiv preprint arXiv:2505.02091 , 2025. [159] R. Ren, Y . Wu, X. Zhang, J. Ren, Y . Shen, S. Wang, and K.-F. Tsang, “Retrieval-augmented generation for mobile edge computing via large language model,” arXiv preprint arXiv:2412.20820 , 2024. [160] M. Xu, D. Niyato, H. Zhang, J. Kang, Z. Xiong, S. Mao, and Z. Han, “Cached model-as-a-resource: Provisioning large language model agents for edge intelligence in space-air-ground integrated networks,” 2024. [Online]. Available: https://arxiv.org/abs/2403.05826 [161] Z. Yang, Y . Yang, C. Zhao, Q. Guo, W. He, and W. Ji, “Perllm: Personalized inference scheduling with edge-cloud collaboration for diverse llm services,” arXiv preprint arXiv:2405.14636 , 2024. [162] C. Liu and J. Zhao, “Resource allocation for stable llm training in mo- bile edge computing,” in Proceedings of the Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing , Athens Greece, October 2024, pp. 81–90. [163] Y . Jiang, H. Wang, L. Xie, H. Zhao, H. Qian, and J. C. S. Lui, “D-llm: A token adaptive computing resource allocation strategy for large language models,” in Advances in Neural Information Processing Systems (NeurIPS) , vol. 37, 2024, pp. 1725–1749. [Online]. Available: https://openreview.net/forum?id=UIOjGTKHQG [164] A. Amayuelas, J. Yang, S. Agashe, A. Nagarajan, A. Antoniades, X. E. Wang, and W. Wang, “Self-resource allocation in multi-agent llm systems,” arXiv preprint arXiv:2504.02051 , 2025. [165] H. Zou, Q. Zhao, L. Bariah, M. Bennis, and M. Debbah, “Wireless multi-agent generative ai: From connected intelligence to collective intelligence,” arXiv preprint arXiv:2307.02757 , 2023. [166] S. Long, F. Tang, Y . Li, T. Tan, Z. Jin, M. Zhao, and N. Kato, “6g comprehensive intelligence: network operations and optimization based on large language models,” IEEE Network , 2024. [167] Z. Liu and H. Du, “Model context protocol-based internet of ex- perts for wireless environment-aware llm agents,” arXiv preprint arXiv:2505.01834 , 2025. [168] Z. Chen, Q. Sun, N. Li, X. Li, Y . Wang, and C.-L. I, “Enabling mobile ai agent in 6g era: Architecture and key technologies,” IEEE Network , vol. 38, no. 5, pp. 66–75, 2024. [169] Z. Xiao, C. Ye, Y . Hu, H. Yuan, Y . Huang, Y . Feng, L. Cai, and J. Chang, “Llm agents as 6g orchestrator: A paradigm for task-oriented physical-layer automation,” arXiv preprint arXiv:2410.03688 , 2024. [170] M. Xu, D. Niyato, J. Kang, Z. Xiong, S. Mao, Z. Han, D. I. Kim, and K. B. Letaief, “When large language model agents meet 6g networks: Perception, grounding, | https://arxiv.org/abs/2505.22311v1 |
and alignment,” IEEE Wireless Communications , vol. 31, no. 6, pp. 63–71, 2024. [171] W. Yang, Z. Xiong, Y . Yuan, W. Jiang, T. Q. S. Quek, and M. Debbah, “Agent-driven generative semantic communication with cross-modality and prediction,” IEEE Transactions on Wireless Communications , vol. 24, no. 3, pp. 2233–2248, 2025. [172] W. Yang, Z. Xiong, S. Mao, T. Q. S. Quek, P. Zhang, M. Debbah, and R. Tafazolli, “Rethinking generative semantic communication for multi-user systems with large language models,” IEEE Wireless Communications , pp. 1–9, 2025. [173] X. Jia, X. Wang, Y . Zhang, M. Sheng, and G. Cheng, “Resource allocation for multi-cell semantic communication systems based on drl,” in 2024 12th International Conference on Information Systems and Computing Technology (ISCTech) , 2024, pp. 1–6. [174] H. Zhang, H. Wang, Y . Li, K. Long, and V . C. Leung, “Toward intel- ligent resource allocation on task-oriented semantic communication,” IEEE Wireless Communications , vol. 30, no. 3, pp. 70–77, 2023. [175] S. Jiang, B. Lin, Y . Wu, and Y . Gao, “Links: Large language model integrated management for 6g empowered digital twin networks,” in 2024 IEEE 100th Vehicular Technology Conference (VTC2024-Fall) . IEEE, 2024, pp. 1–6. [176] J. Tong, W. Guo, J. Shao, Q. Wu, Z. Li, Z. Lin, and J. Zhang, “Wirelessagent: Large language model agents for intelligent wireless networks,” arXiv preprint arXiv:2505.01074 , 2025. 34 [177] Y . Xiao, G. Shi, and P. Zhang, “Towards agentic ai networking in 6g: A generative foundation model-as-agent approach,” arXiv preprint arXiv:2503.15764 , 2025. [178] X. Wu, Y . Wang, J. Farooq, and J. Chen, “Llm-driven agentic ai approach to enhanced o-ran resilience in next-generation networks,” Authorea Preprints , 2025. [179] T. Nguyen, H. Nguyen, A. Ijaz, S. Sheikhi, A. V . Vasilakos, and P. Kostakos, “Large language models in 6g security: challenges and opportunities,” arXiv preprint arXiv:2403.12239 , 2024. [180] M. A. Onsu, P. Lohan, and B. Kantarci, “Leveraging edge intelligence and llms to advance 6g-enabled internet of automated defense vehicles,” arXiv preprint arXiv:2501.06205 , 2024. [181] X. Cao, G. Nan, H. Guo, H. Mu, L. Wang, Y . Lin, Q. Zhou, J. Li, B. Qin, Q. Cui et al. , “Exploring llm-based multi-agent situation awareness for zero-trust space-air-ground integrated network,” IEEE Journal on Selected Areas in Communications , 2025. [182] F. Lin, Y . Tian, Y . Wang, T. Zhang, X. Zhang, and F.-Y . Wang, “Airvista: Empowering uavs with 3d spatial reasoning abilities through a multimodal large language model agent,” in 2024 IEEE 27th Interna- tional Conference on Intelligent Transportation Systems (ITSC) . IEEE, 2024, pp. 476–481. [183] A. Sezgin, “Scenario-driven evaluation of autonomous agents: Integrat- ing large language model for uav mission reliability,” Drones , vol. 9, no. 3, p. 213, 2025. [184] F. Zhu, F. Huang, Y . Yu, G. Liu, and T. Huang, “Task offloading with llm-enhanced multi-agent reinforcement learning in uav-assisted edge computing,” Sensors , vol. 25, no. 1, p. 175, 2024. [185] O. Sautenkov, Y . Yaqoot, M. A. Mustafa, F. Batool, J. Sam, A. Lykov, C.-Y . Wen, and D. Tsetserukou, “Uav-codeagents: Scalable | https://arxiv.org/abs/2505.22311v1 |
uav mission planning via multi-agent react and vision-language reasoning,” arXiv preprint arXiv:2505.07236 , 2025. [186] B. Liu, S. Mazumder, E. Robertson, and S. Grigsby, “Ai autonomy: Self-initiated open-world continual learning and adaptation,” AI Mag- azine , vol. 44, no. 2, pp. 185–199, 2023. [187] D. Yu, X. Zhang, Y . Chen, A. Liu, Y . Zhang, P. S. Yu, and I. King, “Recent advances of multimodal continual learning: A comprehensive survey,” arXiv preprint arXiv:2410.05352 , 2024. [188] X. Ru, X. Cao, Z. Liu, J. M. Moore, X.-Y . Zhang, X. Zhu, W. Wei, and G. Yan, “Maintaining adversarial robustness in continuous learning,” arXiv preprint arXiv:2402.11196 , 2024. [189] A. Chaudhry, P. K. Dokania, T. Ajanthan, and P. H. Torr, “Riemannian walk for incremental learning: Understanding forgetting and intransi- gence,” in Proceedings of the European conference on computer vision (ECCV) , 2018, pp. 532–547. [190] R. Friedman, “Large language models and logical reasoning,” Ency- clopedia , vol. 3, no. 2, pp. 687–697, 2023. [191] H. Lightman, V . Kosaraju, Y . Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, and K. Cobbe, “Let’s verify step by step,” in The Twelfth International Conference on Learning Representations , 2023. [192] Z. Xi, W. Chen, B. Hong, S. Jin, R. Zheng, W. He, Y . Ding, S. Liu, X. Guo, J. Wang et al. , “Training large language models for reasoning through reverse curriculum reinforcement learning,” arXiv preprint arXiv:2402.05808 , 2024. [193] L. Cheng, R. Guo, R. Moraffah, P. Sheth, K. S. Candan, and H. Liu, “Evaluation methods and measures for causal learning algorithms,” IEEE Transactions on Artificial Intelligence , vol. 3, no. 6, pp. 924– 943, 2022. [194] B. Wagner and A. d. Garcez, “Neural-symbolic integration for interactive learning and conceptual grounding,” arXiv preprint arXiv:2112.11805 , 2021. [195] M. T. Ribeiro, S. Singh, and C. Guestrin, “” why should i trust you?” explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining , 2016, pp. 1135–1144. [196] D. Bau, J.-Y . Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Free- man, and A. Torralba, “Gan dissection: Visualizing and understanding generative adversarial networks,” arXiv preprint arXiv:1811.10597 , 2018. [197] X. Zhu, J. Li, Y . Liu, C. Ma, and W. Wang, “A survey on model com- pression for large language models,” Transactions of the Association for Computational Linguistics , vol. 12, pp. 1556–1577, 2024. [198] G. Xiao, J. Lin, M. Seznec, H. Wu, J. Demouth, and S. Han, “Smoothquant: Accurate and efficient post-training quantization for large language models,” in International Conference on Machine Learning . PMLR, 2023, pp. 38 087–38 099.[199] X. Xu, M. Li, C. Tao, T. Shen, R. Cheng, J. Li, C. Xu, D. Tao, and T. Zhou, “A survey on knowledge distillation of large language models,” arXiv preprint arXiv:2402.13116 , 2024. [200] A. Singh, A. Ehtesham, S. Kumar, and T. T. Khoei, “Agentic retrieval- augmented generation: A survey on agentic rag,” arXiv preprint arXiv:2501.09136 , 2025. [201] Y . Yang, H. Chai, S. Shao, Y | https://arxiv.org/abs/2505.22311v1 |
. Song, S. Qi, R. Rui, and W. Zhang, “Agentnet: Decentralized evolutionary coordination for llm-based multi-agent systems,” arXiv preprint arXiv:2504.00587 , 2025. [202] A. Saleh, S. Tarkoma, P. K. Donta, N. H. Motlagh, S. Dustdar, S. Pirt- tikangas, and L. Lov ´en, “Usercentrix: An agentic memory-augmented ai framework for smart spaces,” arXiv preprint arXiv:2505.00472 , 2025. [203] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V . Pham, B. O’Sullivan, and H. D. Nguyen, “Multi-agent collaboration mechanisms: A survey of llms,” arXiv preprint arXiv:2501.06322 , 2025. [204] N. Krishnan, “Advancing multi-agent systems through model con- text protocol: Architecture, implementation, and applications,” arXiv preprint arXiv:2504.21030 , 2025. [205] I. Habler, K. Huang, V . S. Narajala, and P. Kulkarni, “Building a secure agentic ai application leveraging a2a protocol,” arXiv preprint arXiv:2504.16902 , 2025. [206] M. Zhuge, C. Zhao, D. Ashley, W. Wang, D. Khizbullin, Y . Xiong, Z. Liu, E. Chang, R. Krishnamoorthi, Y . Tian et al. , “Agent-as-a-judge: Evaluate agents with agents,” arXiv preprint arXiv:2410.10934 , 2024. [207] A. Yehudai, L. Eden, A. Li, G. Uziel, Y . Zhao, R. Bar-Haim, A. Cohan, and M. Shmueli-Scheuer, “Survey on evaluation of llm-based agents,” arXiv preprint arXiv:2503.16416 , 2025. | https://arxiv.org/abs/2505.22311v1 |
arXiv:2505.22312v1 [cs.LG] 28 May 2025

Skywork Open Reasoner 1 Technical Report

Jujie He∗,†, Jiacai Liu∗, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou

Skywork AI, Kunlun Inc
GitHub: https://github.com/SkyworkAI/Skywork-OR1
HuggingFace: https://huggingface.co/Skywork/Skywork-OR1-32B

Abstract
The success of DeepSeek-R1 underscores the significant role of reinforcement learning (RL) in enhancing the reasoning capabilities of large language models (LLMs). In this work, we present Skywork-OR1, an effective and scalable RL implementation for long Chain-of-Thought (CoT) models. Building on the DeepSeek-R1-Distill model series, our RL approach achieves notable performance gains, increasing average accuracy across AIME24, AIME25, and LiveCodeBench from 57.8% to 72.8% (+15.0%) for the 32B model and from 43.6% to 57.5% (+13.9%) for the 7B model. Our Skywork-OR1-32B model surpasses both DeepSeek-R1 and Qwen3-32B on the AIME24 and AIME25 benchmarks, while achieving comparable results on LiveCodeBench. The Skywork-OR1-7B and Skywork-OR1-Math-7B models demonstrate competitive reasoning capabilities among models of similar size. We perform comprehensive ablation studies on the core components of our training pipeline to validate their effectiveness. Additionally, we thoroughly investigate the phenomenon of entropy collapse, identify key factors affecting entropy dynamics, and demonstrate that mitigating premature entropy collapse is critical for improved test performance. To support community research, we fully open-source our model weights, training code, and training datasets.

Figure 1: The performance curve of Skywork-OR1-32B during RL training for AIME 2024 and AIME 2025. The red stars indicate the selected final checkpoints.

∗Equal contribution. †Corresponding author: jujie.he@kunlun-inc.com

Contents
1 Introduction
2 Preliminaries
3 MAGIC in Skywork-OR1
3.1 MAGIC
3.2 Effectiveness of MAGIC Components
3.2.1 Data Mixture
3.2.2 Multi-Stage Training
3.2.3 Advantage Mask for Truncated Responses
3.2.4 High-temperature Sampling | https://arxiv.org/abs/2505.22312v1 |
3.2.5 Adaptive Entropy Control
3.2.6 No KL Loss
4 Empirical Studies on Mitigating Policy Entropy Collapse
4.1 Ablation Setup
4.2 Premature Entropy Collapse Generally Manifests as Worse Performance
4.3 The Impact of Rollout-Diversity-Related Hyperparameters
4.4 The Impact of Off-policy Update by Increasing N_SGD
4.5 Preventing Premature Entropy Collapse
5 Empirical Studies on Training Resource Allocation
5.1 Improving Training Efficiency with Fixed Computational Resources
5.2 Improving Test Performance with More Computational Resources
6 Dataset Preparation
6.1 Data Source Selection and Preprocessing
6.2 Model-Aware Difficulty Estimation
6.3 Quality Assessment via Human and LLM-as-a-Judge
7 Math & Code Verifiers
7.1 Math Verifiers
7.2 Code Sandboxes | https://arxiv.org/abs/2505.22312v1 |
8 Experiments
8.1 Training and Evaluation Details
8.2 Evaluation Results of Skywork-OR1 models
9 Conclusion

1 Introduction

In recent months, post-training techniques based on reinforcement learning (RL) have achieved groundbreaking success in enhancing the reasoning capabilities of large language models (LLMs). Representative models such as OpenAI-o1 [9], DeepSeek-R1 [6], and Kimi-K1.5 [24] demonstrate RL's remarkable ability to significantly improve performance in mathematics and coding. While prior RL approaches have primarily relied on Monte Carlo Tree Search (MCTS) or Process Reward Models (PRMs) to improve reasoning over supervised fine-tuning (SFT) models, the success of DeepSeek-R1 demonstrates conclusively that online RL with a simple rule-based reward is sufficient to substantially enhance the reasoning capabilities of base models.

As model capabilities continue to advance, Chains-of-Thought (CoT) have grown progressively longer. For example, the DeepSeek-R1-Distill model series [6] generates CoT sequences averaging over 10K tokens on the AIME24 benchmark, significantly surpassing earlier popular SFT models such as the Qwen 2.5 model series [33] and the Llama 3.1 model series [5]. Despite several reproduction efforts (e.g., Logic-RL [31], Open-Reasoner-Zero [8], DAPO [34], VAPO [35]) following the success of DeepSeek-R1, most have focused on applying RL to base models rather than to long CoT models that have already undergone SFT. As a result, it remains unclear how to improve the reasoning abilities of long CoT models using RL in an efficient and scalable manner. While recent works such as DeepScaleR [17], Light-R1 [28], and DeepCoder [16] have made preliminary progress toward efficient RL optimization for long CoT models, their analyses do not systematically disentangle the contributions of distinct algorithmic components during RL training.

In this work, we introduce Skywork Open Reasoner 1 (abbreviated as Skywork-OR1 throughout the report), an efficient and scalable RL recipe for long CoT models. Our experiments are based on the DeepSeek-R1-Distill model series and open-source datasets with rigorous preprocessing and filtering. As shown in Figure 1 and Table 13, the Skywork-OR1 model series achieves significant performance improvements over base models, demonstrating the effectiveness of our RL implementation. Specifically, Skywork-OR1-32B achieves scores of 82.2 on AIME24, 73.3 on AIME25, and 63.0 on LiveCodeBench [10] (2024-08 to 2025-02), outperforming DeepSeek-R1 and Qwen3-32B in the math domain. Skywork-OR1-7B achieves 70.2 on AIME24, 54.6 on AIME25, and 47.6 on LiveCodeBench, exhibiting competitive performance relative to similarly sized models in both math and coding tasks. Our previously released model, Skywork-OR1-Math-7B, also delivers strong performance among similarly | https://arxiv.org/abs/2505.22312v1 |
sized models, scoring 69.8 on AIME24, 52.3 on AIME25, and 43.6 on LiveCodeBench.

We conducted exhaustive ablation experiments to validate the effectiveness of the core components in the training pipeline. Balancing exploration and exploitation is crucial in RL training [22]. We conducted a comprehensive study on premature entropy collapse, a phenomenon associated with excessive exploitation, and found that mitigating premature entropy collapse is essential for achieving better test performance. Through exhaustive ablation experiments, we identified key factors that influence entropy dynamics. To ensure full reproducibility and support ongoing research within the LLM community, we release all of our training resources, including source code (https://github.com/SkyworkAI/Skywork-OR1), the post-training dataset (https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data), and model weights (https://huggingface.co/Skywork/Skywork-OR1-7B, https://huggingface.co/Skywork/Skywork-OR1-32B). Furthermore, we conducted extensive ablation studies across both data and algorithmic dimensions to elucidate effective RL implementations for long CoT models. As a follow-up to our previously released Notion blog post [7], we present this more detailed technical report, with our key findings summarized as follows:

Data Collection
1. To ensure stable and effective training, it is crucial to incorporate problems from a diverse set of sources. We observe that, in the absence of consistent quality assessment and filtering procedures, previously successful datasets exhibit several failure modes with larger models (Section 6).
2. Rigorous filtering and quality control of training data significantly accelerate learning. Our proposed data mixture, constructed with stringent filtering criteria, outperforms a baseline mixture assembled with looser quality thresholds (Section 3.2.1).

Training Strategy
1. Multi-stage training significantly improves training efficiency in the initial phase while preserving scalability for later stages (Section 3.2.2).
2. Addressing noisy training signals introduced by truncated trajectories in Stage I does not lead to better scaling at large context lengths, e.g., 32K (Section 3.2.3).
3. High-temperature sampling results in lower test accuracy during the early training steps but ultimately yields greater performance improvements (Section 3.2.4).
4. On-policy training mitigates entropy collapse and leads to higher test performance (Section 4).

Loss Function
1. Adaptive entropy control effectively keeps the model's entropy lower-bounded by the target entropy throughout training, maintaining the model's exploration ability and high learning plasticity, with test performance steadily improving (Section 3.2.5).
2. The KL penalty hinders further improvements in test performance during multi-stage training. Therefore, we omit KL loss from our training pipeline (Section 3.2.6).

Empirical Results of Our Entropy Collapse Study
1. Faster entropy collapse generally correlates with poorer test performance (Section 4.2). Appropriate entropy control that mitigates premature convergence can improve test outcomes (Section 4.5).
2. Increasing rollout diversity by enlarging the batch and group sizes has only minor effects on entropy dynamics (Section 4.3), whereas using a higher sampling temperature significantly impacts initial entropy and learning dynamics (Section 3.2.4).
3.Off-policy training – via increased mini-batches or data reuse – accelerates entropy collapse and generally leads to degraded test performance compared to on-policy updates, due to the introduction of off-policy data (Section 4.4). 4.The entropy loss exhibits high sensitivity to both the training data and the coefficient. By either adaptively adjusting the entropy loss coefficient or applying a clip-higher trick with an appropriate clip ratio, entropy dynamics become slower and more stable, leading to improved | https://arxiv.org/abs/2505.22312v1 |
test performance. Nevertheless, entropy still converges faster than in on-policy training (Section 4.5).

Organization. In Section 2, we introduce the preliminaries of several important policy optimization methods in RL. Section 3 elaborates on our training pipeline, including comprehensive ablation studies that validate the effectiveness of its core components. A systematic investigation of entropy collapse is presented in Section 4, demonstrating that mitigating premature policy convergence is critical in RL training for enhancing exploration and achieving better test performance. We discuss training resource allocation in Section 5. The implementation details of our training data preparation and rule-based reward are provided in Sections 6 and 7. Finally, Section 8 presents a comprehensive description of the training and evaluation details for our three released models: Skywork-OR1-Math-7B, Skywork-OR1-7B, and Skywork-OR1-32B.

Figure 2: Performance of Skywork-OR1-32B on challenging mathematics and coding benchmarks.
Figure 3: Performance of Skywork-OR1-7B on challenging mathematics and coding benchmarks.

2 Preliminaries

The success of DeepSeek-R1 demonstrates that Policy Gradient (PG) methods [22], especially Group Relative Policy Optimization (GRPO) [21], can effectively enhance the reasoning abilities of LLMs. Generally speaking, the RL objective is to find a policy π that maximizes the reward, i.e.:

\max_{\pi}\; J(\pi) := \mathbb{E}_{x \sim \mathcal{D}}\,\mathbb{E}_{y \sim \pi(\cdot\mid x)}\big[\, r(x, y) \,\big], \qquad (2.1)

where x is the training prompt, D is the sampling distribution of x, y is the response sampled by the policy π for input prompt x, and r denotes the reward function. In practice, we estimate a surrogate objective for J(π) at the batch level for tractable optimization. At each training step k, we sample a batch of N prompts x_1, ..., x_N from the data distribution D, denoted as T_k, and generate the corresponding responses y_1, ..., y_N using the current policy π with a context length T and temperature τ. The batch-level surrogate objective at step k can be formulated as:

\max_{\pi}\; J_k(\pi) := \mathbb{E}_{x_i \sim \mathcal{T}_k}\,\mathbb{E}_{y_i \sim \pi(\cdot\mid x_i)}\big[\, r(x_i, y_i) \,\big], \qquad (2.2)

where π_k is shorthand for the policy π_{θ_k} parameterized by θ_k.

Vanilla Policy Gradient. For a parameterized policy π_θ, vanilla PG [23] uses gradient ascent to obtain the optimal parameter θ*, i.e., θ ← θ + ∇_θ J(π_θ). A valid first-order surrogate policy loss for vanilla PG at each iteration k is given by:

\mathcal{L}^{\mathrm{PG}}_k(\theta) = -\,\mathbb{E}_{x_i \sim \mathcal{T}_k}\,\mathbb{E}_{y_i \sim \pi_k(\cdot\mid x_i)}\left[\sum_{t=0}^{|y_i|-1} \frac{\pi_\theta(a^t_i \mid s^t_i)}{\pi_k(a^t_i \mid s^t_i)}\, A^{\pi_k}(s^t_i, a^t_i)\right], \qquad (2.3)

where the response y_i = (a^0_i, ..., a^{|y_i|-1}_i) consists of |y_i| tokens, a^t_i is the t-th token in the sequence y_i, s^t_i := (x_i, a^0_i, ..., a^{t-1}_i) is the prefix context when generating a^t_i, and A^{π_k} is the advantage function defined as

A^{\pi_k}(s^t, a^t) := \mathbb{E}_{y \sim \pi_k(\cdot\mid x)}\big[\, r(x, y) \mid s^t, a^t \,\big] - \mathbb{E}_{y \sim \pi_k(\cdot\mid x)}\big[\, r(x, y) \mid s^t \,\big].

One can easily show that ∇_θ L^PG_k(θ_k) = −∇_θ J_k(π_k).

Proximal Policy Optimization (PPO). At each training step k, PPO [20] performs multiple gradient descent steps on the policy loss L_k with a clip trick to keep the new policy restricted within the trust region of π_k. The policy loss employed in PPO is formulated as:

\mathcal{L}^{\mathrm{PPO}}_k(\theta) = -\,\mathbb{E}_{x_i \sim \mathcal{T}_k}\,\mathbb{E}_{y_i \sim \pi_k(\cdot\mid x_i)}\left[\sum_{t=0}^{|y_i|-1} \min\Big(\rho^t_i(\theta)\, A^{\pi_k}(s^t_i, a^t_i),\ \operatorname{clip}\big(\rho^t_i(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big)\, A^{\pi_k}(s^t_i, a^t_i)\Big)\right],

where ρ^t_i(θ) := π_θ(a^t_i | s^t_i) / π_k(a^t_i | s^t_i), and ε is the clip hyperparameter. In practice, PPO generally uses GAE [19] to estimate the token-level advantage A^{π_k}(s^t_i, a^t_i).
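To make the clipped surrogate concrete, the following is a minimal, illustrative PyTorch-style sketch of a token-level clipped policy loss. It is not the authors' implementation; the tensor names and the helper signature are assumptions made for exposition.

```python
import torch

def clipped_policy_loss(logp_new, logp_old, advantages, eps=0.2):
    """Token-level clipped surrogate loss in the spirit of the PPO objective above.

    logp_new:   log pi_theta(a_t | s_t) under the policy being optimized, shape [T]
    logp_old:   log pi_k(a_t | s_t) under the rollout (behavior) policy,  shape [T]
    advantages: per-token advantage estimates A(s_t, a_t),                shape [T]
    eps:        clip hyperparameter epsilon
    """
    ratio = torch.exp(logp_new - logp_old)                        # rho_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # The objective keeps the more pessimistic of the two terms; the loss negates it.
    return -torch.min(unclipped, clipped).mean()
```

At the first gradient step of a purely on-policy update, logp_new equals logp_old, so the ratio is 1 everywhere and the clip is inactive; the clip only matters once additional gradient steps move π_θ away from π_k.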
Group Relative Policy Optimization (GRPO). Suppose M i.i.d. responses y_{i1}, ..., y_{iM} are sampled for each prompt x_i. GRPO [21] estimates the token-level advantage using the group-normalized rewards and introduces an additional length normalization term 1/|y_{ij}| for each response y_{ij}. The policy loss employed in GRPO is formulated as:

\mathcal{L}^{\mathrm{GRPO}}_k(\theta) = -\,\mathbb{E}_{x_i \sim \mathcal{T}_k}\,\mathbb{E}_{\{y_{ij}\}_{j=1}^{M} \sim \pi_k(\cdot\mid x_i)}\left[\frac{1}{M}\sum_{j=1}^{M} \frac{1}{|y_{ij}|}\sum_{t=0}^{|y_{ij}|-1} \Big(\min\big(\rho^t_{ij}(\theta) A^t_{ij},\ \operatorname{clip}\big(\rho^t_{ij}(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big) A^t_{ij}\big) - \beta D^t_{ij}(\theta)\Big)\right], \qquad (2.4)

where y_{ij} = (a^0_{ij}, ..., a^{|y_{ij}|-1}_{ij}), a^t_{ij} is the t-th token in the sequence y_{ij}, s^t_{ij} := (x_i, a^0_{ij}, ..., a^{t-1}_{ij}), ρ^t_{ij}(θ) := π_θ(a^t_{ij} | s^t_{ij}) / π_k(a^t_{ij} | s^t_{ij}), ε is the clip hyperparameter, and D^t_{ij} is the token-level k3 loss [21] applied at a^t_{ij} with coefficient β to keep the policy π_θ within the trust region of the reference policy π_ref, i.e.,

D^t_{ij}(\theta) := \frac{\pi_{\mathrm{ref}}(a^t_{ij} \mid s^t_{ij})}{\pi_\theta(a^t_{ij} \mid s^t_{ij})} - \log \frac{\pi_{\mathrm{ref}}(a^t_{ij} \mid s^t_{ij})}{\pi_\theta(a^t_{ij} \mid s^t_{ij})} - 1.

For each prompt-response pair (x_i, y_{ij}), a binary reward r(x_i, y_{ij}) ∈ {0, 1} is given by a rule-based verifier. The token-level advantage A^t_{ij} is estimated by

\forall t: \quad A^t_{ij} = \frac{r(x_i, y_{ij}) - \operatorname{mean}\big(r(x_i, y_{i1}), ..., r(x_i, y_{iM})\big)}{\operatorname{std}\big(r(x_i, y_{i1}), ..., r(x_i, y_{iM})\big)}. \qquad (2.5)

3 MAGIC in Skywork-OR1

We employ a training pipeline built upon a modified version of GRPO [21], referred to as Multi-stage Adaptive entropy scheduling for GRPO In Convergence (MAGIC). In the following sections, we first introduce the recipe of MAGIC and then analyze the effectiveness of each of its components.

3.1 MAGIC

In the following, we present the MAGIC framework by detailing its components in terms of Data Collection, Training Strategy, and Loss Function.

Data Collection. To ensure the quality of queries during post-training, we construct the initial dataset through stringent data preparation, as described in Section 6, and adopt more accurate verifiers to provide reward signals, as outlined in Section 7. Additionally, we employ the following strategies to further improve sample efficiency (a minimal illustrative sketch follows this list):

1. Offline and Online Filtering. We apply data filtering both before and during training. Prior to training, we remove prompts with base-model correctness rates of 1 (fully correct) or 0 (completely incorrect). During training, at the beginning of each stage, we also discard training prompts for which the actor model achieved a correctness rate of 1 in the previous stage. This dynamic online filtering mechanism ensures that the actor model is consistently trained on challenging problems at each stage.

2. Rejection Sampling. Responses in a zero-advantage group (as defined by Equation (2.5)) do not contribute to the policy loss but may influence the KL loss or entropy loss, potentially leading to a more unstable training process due to the implicitly increased relative weight of these losses. To mitigate this issue, our training batches include only groups with non-zero advantages; specifically, the samples of prompt x_i are filtered out if i ∉ \tilde{\mathcal{T}}_k, where

\tilde{\mathcal{T}}_k := \big\{ i \in [N] : \exists j \in [M],\ \hat{A}_{ij} \neq 0 \big\}.
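As a minimal, hypothetical sketch (not the released training code) of the group-normalized advantage in Equation (2.5) and the rejection-sampling filter described above; the small epsilon guarding against division by zero is an implementation assumption:

```python
from statistics import mean, pstdev

def group_advantages(rewards, eps=1e-6):
    """Group-normalized advantages (Eq. 2.5) for the M responses sampled for one prompt.

    rewards: list of binary rewards r(x, y_j) in {0, 1}, one per response.
    Every token of response j shares the same advantage value.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]  # eps: numerical safeguard (assumption)

def keep_group(rewards):
    """Rejection sampling: keep a prompt only if its group has non-zero advantages,
    i.e. the group is neither all-correct nor all-incorrect."""
    return 0 < sum(rewards) < len(rewards)

print(group_advantages([1, 0, 0, 1]))  # mixed group -> advantages of roughly +1 / -1
print(keep_group([1, 1, 1, 1]))        # all-correct group -> False, filtered from the batch
```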
Training Strategy. We made the following refinements to the training strategy of vanilla GRPO:

1. Multi-Stage Training. Inspired by DeepScaleR [17], we progressively increase the context length T and divide the training process into multiple stages. We found that multi-stage training significantly reduces computational costs while preserving scalability, as supported by the evidence presented in Section 3.2.2.

2. Advantage Mask for Truncated Responses. To address potential noise in training signals when outcomes cannot be derived from truncated responses – since assigning negative advantages in such cases may introduce bias – we experimented with an advantage mask during the early stages of multi-stage training, when many responses are truncated. However, as shown in Section 3.2.3, penalizing truncated responses does not hinder later-stage improvements and enhances token efficiency. Based on these results, we do not employ any advantage mask strategy in our training pipeline.

3. High-Temperature Sampling. We set the rollout temperature to τ = 1 to enhance the model's exploration capability and improve learning plasticity. This decision was motivated by our observation that the sampling policy either immediately enters (in the case of math data) or quickly transitions into (in the case of code data) a low-entropy state when using a smaller sampling temperature (e.g., τ = 0.6). See Section 3.2.4 for further details.

4. On-Policy Training. We adopted on-policy training for Skywork-OR1-7B and Skywork-OR1-32B, as we found that on-policy updates significantly slow entropy collapse and lead to higher test performance. See Section 4 for our detailed findings on entropy collapse. In contrast, Skywork-OR1-Math-7B was trained with two gradient steps per training step (and was therefore not strictly on-policy). This setup preceded our complete understanding of the relationship between off-policy updates and premature entropy collapse. Nevertheless, adaptive entropy control (Section 3.2.5) effectively mitigated collapse, allowing the model to achieve strong performance.

Loss Function. To mitigate implicit length bias, we adopt a token-level policy loss by removing the length normalization term 1/|y_{ij}| from each response. The policy loss is averaged across all tokens in a training batch, formulated as follows:

\mathcal{L}^{\mathrm{MAGIC}}(\theta) = -\frac{1}{T_k} \sum_{i \in \tilde{\mathcal{T}}_k} \sum_{j=1}^{M} \sum_{t=0}^{|y_{ij}|-1} \Big[\min\big(\rho^t_{ij}(\theta) A^t_{ij},\ \operatorname{clip}\big(\rho^t_{ij}(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big) A^t_{ij}\big) + \alpha_k H^t_{ij}(\theta)\Big], \qquad (3.1)

where y_{ij} := (a^0_{ij}, ..., a^{|y_{ij}|-1}_{ij}), a^t_{ij} is the t-th token in the sequence y_{ij}, s^t_{ij} := (x_i, a^0_{ij}, ..., a^{t-1}_{ij}) is the prefix context when generating a^t_{ij}, ρ^t_{ij}(θ) := π_θ(a^t_{ij} | s^t_{ij}) / π_k(a^t_{ij} | s^t_{ij}), H^t_{ij}(θ) := H(π_θ(· | s^t_{ij})) is the entropy of the generation policy at token a^t_{ij}, α_k ≥ 0 is the coefficient of the entropy term, and T_k := Σ_{i∈\tilde{\mathcal{T}}_k} Σ_{j=1}^{M} |y_{ij}| is the total number of tokens in the training batch. Meanwhile, we also introduce the following characteristics into the loss function:

1. Adaptive Entropy Control. To preserve the model's exploration capability and maintain high learning plasticity, it is common to include an additional entropy loss to prevent entropy collapse. An appropriately weighted entropy loss can enhance generalization. However, our experiments show that selecting a suitable coefficient in advance is often challenging, as the entropy loss is highly sensitive to both the coefficient and the training data. To address this, we introduce an additional hyperparameter, tgt-ent, representing the target entropy. This hyperparameter dynamically adjusts the coefficient α_k based on the difference between the current entropy and the target entropy, ensuring that the current entropy remains lower-bounded by tgt-ent. See Section 3.2.5 for more details.

2. No KL
Loss. We found that including a KL loss term hinders performance gains, particularly in the later stages of multi-stage training. Therefore, we omit the KL loss from our training recipe. See Section 3.2.6 for further discussion. 8 3.2 Effectiveness of MAGIC Components In this section, we present results from extensive experiments conducted to examine how various components of our MAGIC recipe influence the performance improvement of reinforcement learning during post-training. 3.2.1 Data Mixture (a) (b) Figure 4: Left:Comparison of our data mixture with DeepScaleR’s mixture. The experiment was conducted on an earlier version of the 32B variant, using only math data. Right:Comparison of AIME 24 performance between two mixtures: our official mixture (default) and a version with additional data selected using lower verification criteria (i.e., with potential errors in ground truth answers). Although the quality is lower, we observe only slower learning progress compared to the clean counterpart. In our formal training recipe, we include additional hard problems filtered from NuminaMath-1.5 [ 13] to construct our final data mixture. We conduct the following ablation study to demonstrate the effectiveness of this design choice. We primarily compare against DeepScaleR’s data mixture [ 17], as existing models trained on it have shown strong performance. Ablation Experiments 1: Existing Mixture vs. Our Data Mixture 1.DeepScaleR mixture [ 17]: Comprises problems from previous years’ AIME, AMC, Omni-MATH [4], and STILL [26]. 2.Skywork-OR1 mixture: Our custom mixture described in Section 6, incorporating problems from more diverse sources (e.g., NuminaMath-1.5) and selected via difficulty filtering and quality control. We use the same hyperparameters and approximately the same number of training steps across both experiments to control for the effect of data size. Results are shown in Figure 4. Although the DeepScaleR dataset performs well with smaller model variants, we observed a slight initial improvement on AIME24. However, performance degraded sharply after 300 training steps, eventually returning to the same accuracy as before training. Additionally, in Figure 4(b), we test our data mixture combined with an extra subset obtained via a less stringent verification procedure. This extra subset contains hard problems from NuminaMath-1.5 that were previously excluded due to potential mismatches between extracted and provided solutions. We find that the performance difference between the two mixtures is negligible within the first 900 steps. The version including the extra subset exhibits slightly slower early 9 (a) (b) Figure 5:Left:Comparison of From-Scratch vs.Multi-Stage training.Top left: Response length during RL training. Bottom left: AIME24 avg@8 performance at temperature 1 (left y-axis) and cumulative training hours (right y-axis). Multi-stage training achieves the same final accuracy with significantly fewer training hours due to a smaller context length in the early stages. Right:AIME24 avg@32 vs. response length for Skywork-OR1-Math-7B and DeepSeek-R1-Distill-Qwen-7B with 32K context length. The Stage I checkpoint of Skywork-OR1-Math-7B reaches comparable performance to DeepSeek-R1-Distill-Qwen-7B with notably better token efficiency; further performance gains are seen in Stages II &III. progress, possibly due to noise in the provided answers. 
We hypothesize that RL training is robust to small amounts of ground truth noise, consistent with findings in [ 36].Therefore, we adopt the default data composition | https://arxiv.org/abs/2505.22312v1 |
described in Section 6 for all subsequent exploration experiments.

3.2.2 Multi-Stage Training

One of the major challenges in optimizing long Chain-of-Thought (CoT) models with RL is managing excessively long outputs, which can lead to slow convergence and high training variance. Inspired by DeepScaleR [17], we incorporated multi-stage training in all our released models to improve training efficiency. Specifically, we used a shorter context length T in the initial stages. Once the model's performance converged, we increased T in the subsequent stage. This approach led to significant performance improvements on benchmarks while also enhancing training efficiency.

Same Improvement, Higher Efficiency. To demonstrate the effectiveness of multi-stage training, we conducted two experiments based on DeepSeek-R1-Distill-Qwen-7B with different schedules for T:

Ablation Experiments 2: From-Scratch vs. Multi-Stage
1. From-Scratch: We started with T = 16K at step 0 and kept it fixed during training.
2. Multi-Stage: We started with T = 8K at step 0. At a later step (i.e., step 540), we switched to Stage II and increased T to 16K.
The other hyperparameters were kept the same for both experiments and are reported in Table 1.

Batch Size | Mini-batch Size | Group Size | Entropy Control | KL Loss
64 | 32 | 16 | target-entropy 0.2 | No
Table 1: Shared hyperparameters in Ablation Experiments 2, based on DeepSeek-R1-Distill-Qwen-7B.

The results are presented in Figure 5(a) and Figure 5(b). Figure 5(a) illustrates how AIME24 accuracy, generated response length, and cumulative training hours evolve with the number of training steps in Ablation Experiments 2. As shown, the AIME24 accuracy in both experiments converges to approximately 60 when the number of training steps is sufficiently large. However, in the multi-stage experiment, the context length in Stage I (i.e., 8K) is only half that used in the from-scratch experiment (i.e., 16K). As a result, the average response length in the multi-stage experiment is significantly shorter during Stage I and the initial steps of Stage II, leading to more efficient training due to reduced inference and computational costs (approximately 100 training hours are saved over 1000 training steps). After transitioning to Stage II, both the response length and AIME24 accuracy begin to increase immediately. Within roughly 500 training steps in Stage II, the accuracy of the multi-stage experiment reaches the same level as that of the from-scratch experiment.

Improving Token Efficiency While Preserving Scaling Potential. Truncated responses are labeled as negative samples in RL training because they lack final answers. A potential concern with multi-stage training is that using short context windows may bias the model toward generating shorter responses, potentially limiting its exploratory capacity and reducing its ability to solve complex problems. Our findings demonstrate that multi-stage training not only improves token efficiency in the initial stage but also preserves scaling ability. In Figure 5(b), we observe that training with an 8K context length in Stage I maintains comparable AIME24 accuracy under a 32K context length while significantly improving token efficiency (reducing the average response length from approximately 12.5K to 5.4K tokens). In Stages II and III, Skywork-OR1-Math-7B steadily increases response length while concurrently improving performance.
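For illustration, here is a minimal sketch of a multi-stage context-length schedule of the kind described above. The stage boundaries and the 32K third stage are assumptions chosen for exposition (only the Stage I-to-Stage II switch at step 540 in Ablation Experiments 2 is taken from the text), not the exact production schedule.

```python
# Illustrative multi-stage schedule: each stage trains with a fixed rollout context
# length T and hands over to the next stage once the previous one has converged.
STAGES = [
    {"name": "Stage I",   "context_len": 8_192,  "until_step": 540},
    {"name": "Stage II",  "context_len": 16_384, "until_step": 1_100},  # boundary is a placeholder
    {"name": "Stage III", "context_len": 32_768, "until_step": None},   # run until training ends
]

def context_length_for(step: int) -> int:
    """Return the rollout context length T to use at a given training step."""
    for stage in STAGES:
        if stage["until_step"] is None or step < stage["until_step"]:
            return stage["context_len"]
    return STAGES[-1]["context_len"]

assert context_length_for(0) == 8_192     # Stage I
assert context_length_for(800) == 16_384  # Stage II
```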
3.2.3 Advantage Mask for Truncated Responses In practice, responses are sampled within a fixed context length | https://arxiv.org/abs/2505.22312v1 |
T. When response lengths exceed T, the outcomes cannot be derived, and accuracy rewards are set to 0, resulting in negative advantages for these truncated responses, which may introduce bias. To mitigate this issue, we investigated several advantage mask strategies aimed at reducing the influence of truncated responses. However, our findings show that assigning negative advantages to truncated samples not only improves token efficiency but also preserves the model's scaling ability in later stages. As a result, we did not apply any mask strategies in our final training pipeline.

Figure 6: Training accuracy and clip ratio during RL training of Skywork-OR1-Math-7B in Stage I. accuracy: mean accuracy reward on the training batch. accuracy_nontruncated: mean accuracy of non-truncated samples. clip_ratio: ratio of truncated responses.

Two Optimization Directions in Short Context Length. In our Stage I training of Skywork-OR1-Math-7B, we set the context length to T = 8K, and approximately 40% of responses were truncated at the initial steps. Although overall training accuracy continued to increase during RL training, we observed that the accuracy of non-truncated samples initially declined sharply within the first 100 training steps before showing a slight upward trend. See Figure 6 for details. A truncated response typically receives an accuracy reward of 0 because the final answer is missing due to truncation, even if it would be correct if fully generated. Therefore, reducing the number of truncated responses improves achievable accuracy. Figure 6 shows that the initial increase in training accuracy (steps 0-100) is primarily due to a sharp decrease in the clip ratio. After step 100, the algorithm begins to improve accuracy for non-truncated responses as well.

A Brief Explanation from a Theoretical Perspective. We now use mathematical language to clarify this phenomenon further in a formal way. Recall the objective of RL training in (2.1),

\pi^* \in \operatorname*{argmax}_{\pi} \Big\{ J(\pi) := \mathbb{E}_{x \sim \mathcal{D}}\,\mathbb{E}_{y \sim \pi(\cdot\mid x)}[\, r(x, y) \,] \Big\},

where x is the prompt, D is the distribution of prompts, y is the response sampled from the actor π, and r(x, y) ∈ {0, 1} is the binary accuracy reward. Note that the response y is sampled under the context length T. For truncated responses whose lengths are greater than T, i.e., |y| > T, the accuracy reward is r(x, y) = 0 since the outcome cannot be derived from the response. Based on this observation, one can easily show that the objective function J(π) satisfies

J(\pi) = \mathbb{E}_{x \sim \mathcal{D}}\,\mathbb{E}_{y \sim \pi(\cdot\mid x)}[\, r(x, y) \,]
       = \mathbb{E}_{x \sim \mathcal{D}}\,\mathbb{E}_{y \sim \pi(\cdot\mid x)}[\, r(x, y)\,\mathbb{I}\{|y| \le T\} \,]
       = \mathbb{E}_{x \sim \mathcal{D}}\Big[\, p^{\pi}_{\mathrm{non\text{-}trunc}}(x)\, \mathbb{E}_{y \sim \pi(\cdot\mid x)}\Big[\tfrac{\mathbb{I}\{|y| \le T\}}{p^{\pi}_{\mathrm{non\text{-}trunc}}(x)}\, r(x, y)\Big] \Big]
       = \mathbb{E}_{x \sim \mathcal{D}}\big[\, p^{\pi}_{\mathrm{non\text{-}trunc}}(x)\, \mathbb{E}_{y \sim \hat{\pi}_T(\cdot\mid x)}[\, r(x, y) \,] \big]
       = \mathbb{E}_{x \sim \mathcal{D}}\big[\, p^{\pi}_{\mathrm{non\text{-}trunc}}(x)\, \bar{r}^{\pi}_{\mathrm{non\text{-}trunc}}(x) \big],

where p^π_non-trunc(x) := P_{y∼π(·|x)}(|y| ≤ T) is the probability that a response y is not truncated by the limit of context length T (we assume p^π_non-trunc(x) > 0 for simplicity), r̄^π_non-trunc(x) := E_{y∼π̂_T(·|x)}[r(x, y)] is the accuracy of the non-truncated responses output by policy π, and π̂_T(y|x) := π(y|x) / p^π_non-trunc(x) · I{|y| ≤ T}. This implies that the accuracy on the training distribution, i.e., J(π), can be increased by:

• increasing p^π_non-trunc(x), which means that fewer responses erroneously receive an accuracy reward of 0;
• increasing r̄^π_non-trunc(x), which means that the response quality within the context length is improved.
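As a purely illustrative numeric example (the numbers are hypothetical, not measured values): if, for some prompt x, the policy produces non-truncated responses with probability p^π_non-trunc(x) = 0.6 and answers those non-truncated responses correctly with accuracy r̄^π_non-trunc(x) = 0.5, then the contribution of x to J(π) is 0.6 × 0.5 = 0.3. Training can raise this contribution either by truncating fewer responses (raising 0.6) or by improving the quality of responses that fit within the context length (raising 0.5) – exactly the two optimization directions observed in Figure 6.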
Advantage Mask for Truncated Responses. To encourage the algorithm to focus on optimizing accuracy within the context length – i.e., increasing r̄^π_non-trunc(x) – rather than merely shortening responses to avoid erroneously receiving a zero accuracy reward – i.e., increasing p^π_non-trunc(x) – we explored various advantage mask strategies. These strategies were designed to mitigate the impact of noisy training signals introduced by truncated samples. We conducted ablation experiments using DeepSeek-R1-Distill-Qwen-7B in Stage I to evaluate the effects of different advantage mask strategies.

Figure 7: Left: The clip ratio of generated responses during RL training after applying various advantage mask strategies in Ablation Experiments 3. Using an advantage mask mitigates the decay in response length; the clip ratio even increased after applying Adv-Mask-Before. Middle: Training accuracy under different advantage mask strategies in Ablation Experiments 3 shows distinct patterns. After applying Adv-Mask-Before, training accuracy decreases; in contrast, it continues to increase under the Adv-Mask-After and No-Adv-Mask strategies. Right: Training accuracy of non-truncated responses under different advantage mask strategies in Ablation Experiments 3 shows distinct outcomes. After applying the Adv-Mask-Before strategy, the training accuracy of non-truncated responses continued to rise; in contrast, both the Adv-Mask-After and No-Adv-Mask strategies resulted in a sharp decrease during the early steps.

Ablation Experiments 3: Different Advantage Mask Strategies
1. No-Adv-Mask: We do not employ any advantage mask strategy.
2. Adv-Mask-Before: The truncated responses are excluded from the group advantage calculation (the mask is applied before computing the group statistics), and the advantage of these truncated responses is set to 0 (thus not contributing to the policy loss):

\forall t: \quad A^t_{ij} = \begin{cases} \dfrac{r(x_i, y_{ij}) - \operatorname{mean}(\hat{R}_i)}{\operatorname{std}(\hat{R}_i)}, & |y_{ij}| \le T \\ 0, & |y_{ij}| > T \end{cases}

Here R̂_i is the group of accuracy rewards of the non-truncated responses of prompt x_i.
3. Adv-Mask-After: The truncated responses are included in the group advantage calculation (the mask is applied after computing the group statistics), and the advantage of these truncated responses is then set to 0 (thus not contributing to the policy loss):

\forall t: \quad A^t_{ij} = \begin{cases} \dfrac{r(x_i, y_{ij}) - \operatorname{mean}(R_i)}{\operatorname{std}(R_i)}, & |y_{ij}| \le T \\ 0, & |y_{ij}| > T \end{cases}

Here R_i is the group of accuracy rewards of all responses of prompt x_i, with truncated responses counted as reward 0.

The other hyperparameters remain the same for all experiments and are reported in Table 2.

Batch Size | Mini-batch Size | Group Size | Context Length T | Entropy Control | KL Loss
256 | 128 | 16 | Stage I, 8K | target-entropy 0.2 | No
Table 2: Shared hyperparameters in Ablation Experiments 3, based on DeepSeek-R1-Distill-Qwen-7B.

The results can be found in Figure 7(a), Figure 7(c), and Figure 8.

Figure 8: AIME24 avg@32 performance vs. context length for different advantage mask strategies in Ablation Experiments 3. All strategies achieve the same accuracy at the 32K context length. The accuracy was further improved after the training of Stage II, even though the noisy training signals from truncated responses were introduced in Stage I.

Figure 7 shows the clip ratio, overall accuracy, and accuracy on non-truncated responses in Ablation Experiments 3. We observe that although the response quality within the context length (i.e., the accuracy of non-truncated responses) increases as expected after applying the Adv-Mask-Before strategy, the overall training accuracy continues to decline, and the clip ratio increases steadily. This appears to be a form of reward hacking from our perspective. More importantly, as shown later in Figure 8, the accuracy of the
Adv-Mask-Before strategy under large context lengths – where | https://arxiv.org/abs/2505.22312v1 |
responses are typically not truncated (e.g., 32K) – shows no improvement. This may be attributed to the smaller effective training batch size caused by the increased clip ratio under the Adv-Mask-Before strategy. The behavior of Adv-Mask-After serves as an intermediate point between Adv-Mask-Before and No-Adv-Mask.

Advantage Mask Does Not Exhibit Better Performance Given a Larger Inference Budget. Although Ablation Experiments 3 demonstrate that r̄^π_non-trunc(x) is optimized under short context lengths when applying advantage masks, we find that accuracy does not improve when the context length is large enough to avoid truncation (i.e., 32K). We compare the test-time scaling behavior on AIME24 for models trained with different advantage mask strategies (see Figure 8). The results show that applying an advantage mask does not improve test-time scaling behavior in Stage I, and accuracy at 32K remains unchanged – even though r̄^π_non-trunc(x) is optimized during training. In contrast, RL training without an advantage mask in Stage I not only maintains accuracy at large context lengths but also significantly improves token efficiency. Moreover, the shorter response lengths learned in Stage I do not hinder the simultaneous improvements in both response length and accuracy observed in Stage II. Based on these findings, we did not apply any advantage mask to address noisy training signals from truncated samples in our final training recipe.

3.2.4 High-temperature Sampling

The group-wise nature of GRPO implies that the sampling procedure for responses directly affects the quality and diversity of each group, which in turn influences learning. Prior work suggests that higher temperatures generally lead to slightly worse performance due to increased randomness. If the temperature is set too high, it may increase the likelihood of sampling groups containing only incorrect responses, thereby reducing training efficiency due to the absence of advantageous signals. On the other hand, using a low temperature reduces group diversity, resulting in solutions that are highly similar or potentially all correct. Therefore, selecting an appropriate temperature is critical to ensure sufficient in-group solution diversity. We conducted ablation experiments on the choice of sampling temperature τ, and the results are presented in Figure 9.

Figure 9: AIME25 avg@8 performance and entropy versus the number of training steps in Ablation Experiments 4. Training with a temperature of 0.6 starts with the lowest entropy and learns more slowly than at a temperature of 1.0. Note that the entropy in the right plot remains around 0.2 because adaptive entropy control is enabled. This experiment was conducted on an earlier version of the 32B variant using only math data. Note also that in the left plot, the two temperatures indicate the rollout temperatures used during training. The scores of AIME25 were obtained by evaluating both models at a temperature of 0.6 to ensure a fair comparison.

Ablation Experiments 4: Different Online Sampling Temperatures τ
We compared two different sampling temperatures in online RL training:
1. High Temperature: We set the temperature hyperparameter τ = 1.0.
2. Low Temperature: We set the temperature hyperparameter τ = 0.6.
The other hyperparameters were kept the same for both experiments and are reported in Table 3. The results can be found in Figure 9.
Batch Size | Mini-batch Size | Group Size | Context Length T | Entropy Control | KL Loss
64 | 32 | 16 | Stage I, 16K | target entropy 0.2 | 0
Table 3: Shared hyperparameters in Ablation Experiments 4.

In our experiments, we identified an additional entropy-related phenomenon: when a low temperature is used (e.g., 0.6), the model either begins with extremely low entropy or its entropy quickly collapses to near zero within approximately 100 steps. This behavior initially slows learning progress and ultimately leads to stagnation. We hypothesize that with a less diverse group of solutions – despite containing both correct and incorrect responses – the policy update becomes overly focused on a narrow subset of tokens. This results in a large probability mass being assigned to specific tokens that frequently appear in the sampled responses. When we increased the rollout temperature to 1.0, the model's initial entropy rose to a more desirable range. Although entropy still eventually converges, the higher temperature substantially enhances the learning signal in the early stages and preserves greater potential for continued training, as shown in Figure 9.

3.2.5 Adaptive Entropy Control

Building on the findings from Section 4 – which suggest that while preventing premature entropy collapse via entropy regularization is beneficial, selecting an appropriate entropy loss coefficient is challenging – we introduce Adaptive Entropy Control, a method that adaptively adjusts the entropy loss coefficient based on the target and current entropy. Specifically, we introduce two additional hyperparameters: tgt-ent (the desired target entropy) and ∆ (the adjustment step size for the entropy loss coefficient). We initialize the adaptive coefficient with c_0 = 0. At each training step k, let e_k denote the current entropy of the actor (estimated from the rollout buffer). If e_k is less than tgt-ent, we increase c_k by ∆ (i.e., c_{k+1} = c_k + ∆). If e_k exceeds tgt-ent, we decrease c_k by ∆. To alleviate instability caused by unnecessary entropy loss, we activate the entropy loss only when e_k ≤ tgt-ent, i.e., α_k = c_k · I{e_k ≤ tgt-ent}, ensuring that the current entropy remains lower-bounded by the target entropy. In summary,

\alpha_k = c_k \cdot \mathbb{I}\{ e_k \le \text{tgt-ent} \}, \qquad c_{k+1} = \begin{cases} c_k + \Delta, & \text{if } e_k < \text{tgt-ent} \\ c_k - \Delta, & \text{if } e_k > \text{tgt-ent} \end{cases}, \qquad c_0 = 0. \qquad (3.2)

By leveraging adaptive entropy control, we maintain the model's entropy at a reasonable level throughout training and effectively prevent premature collapse. Figure 10 illustrates the entropy trajectory of Skywork-OR1-Math-7B across all training stages. In our experiments, we set tgt-ent = 0.2 and ∆ = 0.005. To further validate the effectiveness of adaptive entropy control, we conducted an ablation study detailed in Section 4.5.

Figure 10: Entropy of generated responses (left) and avg@8 performance on AIME24 (right) of Skywork-OR1-Math-7B across all stages. We use adaptive entropy control with tgt-ent = 0.2 and ∆ = 0.005. Under adaptive entropy control, the entropy of Skywork-OR1-Math-7B is generally lower-bounded by the target entropy 0.2, and the performance on AIME24 improves steadily.
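A minimal sketch of the coefficient update in Equation (3.2) follows; the class interface is an assumption made for exposition (only the target entropy 0.2 and step size ∆ = 0.005 come from the report), and clamping the coefficient at zero is an added safeguard rather than part of the stated rule.

```python
class AdaptiveEntropyController:
    """Adaptive entropy control (Eq. 3.2): adjust the entropy-loss coefficient alpha_k
    so that the rollout entropy stays lower-bounded by the target entropy."""

    def __init__(self, target_entropy: float = 0.2, delta: float = 0.005):
        self.target = target_entropy
        self.delta = delta
        self.c = 0.0  # c_0 = 0

    def step(self, entropy_k: float) -> float:
        """Return alpha_k for this training step given the current rollout entropy e_k."""
        if entropy_k < self.target:
            self.c += self.delta
        elif entropy_k > self.target:
            self.c = max(0.0, self.c - self.delta)  # non-negativity clamp is an assumption
        # The entropy loss is active only while entropy is at or below the target.
        return self.c if entropy_k <= self.target else 0.0

controller = AdaptiveEntropyController()
alpha_k = controller.step(entropy_k=0.15)  # entropy below target -> alpha_k = 0.005
```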
3.2.6 No KL Loss

To investigate the impact of the KL loss, we conducted the following ablation experiments.

Ablation Experiments 5: KL Loss vs. No KL Loss
We consider the token-level k3 loss in our ablation, and the KL-regularized policy loss we employed is:

\mathcal{L}_{\beta}(\theta) = \mathcal{L}(\theta) + \frac{\beta}{T_k} \sum_{i \in \mathcal{T}_k} \sum_{j=1}^{M} \sum_{t=0}^{|y_{ij}|-1} \left( \frac{\pi_{\mathrm{ref}}(a^t_{ij} \mid s^t_{ij})}{\pi_{\theta}(a^t_{ij} \mid s^t_{ij})} - \log \frac{\pi_{\mathrm{ref}}(a^t_{ij} \mid s^t_{ij})}{\pi_{\theta}(a^t_{ij} \mid s^t_{ij})} - 1 \right),

where L(θ) is the original policy loss defined in (3.1) and β is the KL coefficient. We first ran a Stage 1 experiment with β = 1e-3 based on DeepSeek-R1-Distill-Qwen-7B (the reference policy). Then, in Stage 2, we conducted ablations based on the Stage 1 checkpoint, comparing β = 1e-3 with β = 0. The other hyperparameters are reported in Table 4. The results can be found in Figure 11(a) and Figure 11(b).

Batch Size | Mini-batch Size | Group Size | Context Length T | Entropy Control
256 | 128 | 16 | Stage II, 16K | target entropy 0.2
Table 4: Shared hyperparameters in Ablation Experiments 5, based on the Stage 1 checkpoint.

We observe that, in Stage 2, the KL loss strongly pulls the actor model's policy back toward the reference model, causing the KL divergence to rapidly decrease toward zero (see Figure 11(a)). As a result, performance on AIME24 fails to improve significantly once the actor's policy becomes too similar to the reference policy (see Figure 11(b)). Based on this observation, we set β = 0 for all training stages of our released models.

Figure 11: Results of Ablation Experiments 5. Left: KL divergence between the actor model and the reference model during RL training with different KL loss coefficients β. Setting β = 1e-3 pulls the actor model strongly back toward the reference model in Stage 2. Right: AIME24 avg@8 performance at temperature 1 during RL training for different β.

4 Empirical Studies on Mitigating Policy Entropy Collapse

Figure 12: Overview of our empirical studies on mitigating policy entropy collapse. Gray and green blocks: the potential benefits and possible approaches to enhance the model's exploration capability and mitigate entropy collapse. Yellow blocks: the experimental variables in our empirical studies on keeping the model's exploration capability and maintaining high plasticity.

Exploration and exploitation represent one of the most fundamental dilemmas in RL training [22], particularly in on-policy algorithms. In brief, achieving better performance requires sufficient exploration. However, if the agent's policy prematurely converges to a specific solution, that policy may be suboptimal, and such convergence hinders the exploration of diverse trajectories. An important metric for monitoring the convergence of RL algorithms is policy entropy. In general, when a model's policy entropy converges to a very small value (e.g., near zero), the policy stabilizes. At this point, the model's generation behavior becomes resistant to updates from training data, leading to reduced learning efficiency and diminished output diversity. To expose the model to more effective training signals and improve its out-of-distribution (OOD) performance, it is therefore critical to prevent premature entropy collapse in practice. This section investigates which hyperparameters and components of the policy update process help prevent entropy collapse and, in turn, improve OOD generalization. The overall framework of our empirical study on alleviating policy entropy collapse is illustrated in Figure 12. Initially, we hypothesize that the following two sources may influence the model's entropy and convergence behavior:

• Rollout diversity. If the rollout data contain a greater diversity of correct responses, this prevents the model from overfitting to a single correct trajectory. We examine how sampling-related hyperparameters – such as sampling temperature, rollout batch size, and group
size – affect the model’s policy entropy during RL training. •Policy update. We also investigate how different components of the policy update influence entropy. In this section, we focus primarily on the number of stochastic gradient descent (SGD) steps per training step and the use of additional entropy control methods (e.g., entropy loss). 18 After conducting exhaustive ablation experiments, we present our main results below. Empirical Results of Our Entropy Collapse Study 1.Faster entropy collapse generally leads to worse test performance. In Section 4.2 and Section 4.5, we show that appropriate entropy control, which prevents premature policy convergence, can yield improved test performance. 2.Increasing rollout diversity by enlarging the batch size and group size has only a minor effect on entropy dynamics, whereas using a higher sampling temperature significantly impacts initial entropy. See Section 4.3 for details. 3.Increasing the number of SGD steps per training step – whether by using more mini-batches or increasing data reuse – significantly accelerates entropy collapse and generally results in degraded test performance due to the introduction of off-policy data. See Section 4.4 for more information. 4.Our ablation experiments in Section 4.5 show that the entropy loss is highly sensitive to both the training data and the loss coefficient. By either adaptively adjusting the entropy loss coefficient or appropriately applying the clip-higher trick [ 34], entropy dynamics can be stabilized and lower- bounded, leading to improved test performance. 4.1 Ablation Setup All ablation experiments presented in Section 4 are conducted using the training pipeline described in Section 3.1. We start from the following baseline experiment based on DeepSeek-R1-Distill-Qwen-7B with its hyperparameters reported in Table 5, the key symbols used are defined as follows: •DRis the rollout batch size (the number of prompts used to generate responses in one training step). •DTis the mini-batch size (the number of prompts corresponding to the responses used per policy update step). •Nreuseis the number of times the rollout buffer is traversed. •gsis the group size (the number of responses generated for each prompt). •Tis the context length. •τis the sampling temperature. DR DT Nreuse gs T τ Learning Rate Entropy Control KL loss 64 64 1 16 16K 1.0 1e-6 No No Table 5: Hyperparameters of our baseline experiment in the ablation study presented in Section 4. Unless otherwise specified, the default training configurations for all ablation experiments in this section are aligned with those of the baseline experiment presented above. We use AIME24, AIME25, and LiveCodeBench [10] (2024.08–2025.02) as evaluation sets. The test performance reported in our ablation study is computed as the empirical mean of avg@8 performance on AIME24/25 and pass@1 performance on LiveCodeBench. Notably, the baseline experiment achieves 69.2% avg@8 on AIME24, 53.3% avg@8 on AIME25, and 50.5% pass@1 on LiveCodeBench after 2,700 training steps using 32 H800 GPUs. These results, which closely approximate the performance of our final Skywork-OR1-7B release, establish a strong baseline for analyzing key factors that affect test performance and contribute to entropy collapse. 19 Figure 13: Preliminary experiments on mitigating entropy collapse by introducing entropy loss. We tested two different coefficients αk=1e-3 and | https://arxiv.org/abs/2505.22312v1 |
5e-3, and found that the entropy loss with the higher coefficient α_k, i.e., 5e-3, more effectively prevents entropy collapse and achieves higher test performance. Left: accuracy curves on test benchmarks during RL training. Right: entropy of generated responses during RL training.

4.2 Premature Entropy Collapse Generally Manifests as Worse Performance

As previously noted, entropy dynamics during RL training reflect the degree of policy convergence. When the actor converges to a specific policy and enters a low-entropy state, both learning efficiency and rollout diversity tend to decline. In our preliminary experiments, we observed that the entropy of the actor model often decreased rapidly during training. To mitigate premature entropy collapse, we introduced an entropy loss term, hypothesizing that it would allow the actor to converge toward a better policy. Our results confirmed this hypothesis: test performance improved with the addition of entropy loss. Figure 13 presents the accuracy curves on test benchmarks and the entropy of generated responses from two preliminary experiments using different values of the entropy loss coefficient α_k (1e-3 vs. 5e-3). The results show that using a higher coefficient (i.e., 5e-3) more effectively prevents entropy collapse and leads to better generalization performance. Furthermore, our ablation experiments in Section 4.4 reinforce this finding, showing that RL training accompanied by premature entropy collapse generally results in worse test performance. These observations motivate our integration of entropy control mechanisms into the training pipeline, as well as our systematic investigation into how hyperparameters and other RL components influence entropy dynamics.

4.3 The Impact of Rollout-Diversity-Related Hyperparameters

We investigated how the rollout batch size DR, group size gs, and sampling temperature τ influence entropy dynamics. Note that increasing the rollout batch size DR and group size gs during the rollout stage results in a larger rollout budget, which typically requires greater computational resources to accelerate training. Therefore, we provide a detailed discussion of the impact of DR and gs in Section 5, which focuses on training-time computational resource allocation for improved test performance. Here, we present only the experimental results related to policy entropy. Specifically, we conducted ablation experiments using rollout batch sizes DR = 16, 32, 64 and group sizes gs = 4, 8, 16, based on the baseline experiment described in Section 4.1 and analyzed in Section 5. Our results (Figure 14) indicate no significant differences in entropy dynamics across these on-policy configurations. Notably, none of these experiments exhibited entropy collapse. Regarding the sampling temperature τ, we found that using a properly chosen but relatively high temperature led to lower test accuracy during the initial training steps, but ultimately resulted in greater performance improvements. For further details, please refer to Section 3.2.4.

Figure 14: Entropy of generated responses during on-policy updates with different rollout batch sizes DR (left) and group sizes gs (right). All experiments exhibit similar entropy dynamics.

Figure 15: Illustration of on-policy vs. off-policy update in PPO-style policy loss.
| https://arxiv.org/abs/2505.22312v1 |
On-policy update applies a single SGD step to the entire rollout batch, whereas off-policy update implements multiple SGD steps through rollout batch decomposition and reuse. The rollout batch is partitioned intoDR DTmini-batches, with each mini-batch undergoing an independent SGD step. Then, one can iterate over the rollout batch Nreuse times. Thus, the total number of SGD steps performed on one rollout batch isDR DT×Nreuse. 4.4 The Impact of Off-policy Update by Increasing NSGD Note that the policy loss (3.1)in MAGIC is PPO-style, which naturally allows for performing multiple SGD steps through rollout batch decomposition and reuse (as illustrated in Figure 15). Recalling the definitions of DR,DTandNreusefrom Section 4.1, it is clear that the number of SGD steps performed in one training step, i.e.NSGD, satisfies NSGD=DR DT·Nreuse. (4.1) 21 When DR=DTandNreuse= 1, the policy update is purely on-policy since NSGD= 1. In contrast, when DT< D RorNreuse≥2,NSGD≥2and the off-policy data is introduced into the policy update. In this section, we investigate how NSGDaffects the entropy dynamics and the test performance improvement. More SGD Steps, Faster Convergence with Worse Test Performance. We conducted the following ablation experiments on different NSGDvalues by decreasing DTor increasing Nreusegiven fixed DR. Ablation Experiments 6: The Impact of Different Numbers of SGD Steps NSGD Consider the quadruple (NSGD, DR, DT, Nreuse). We started from the baseline experiment (1,64,64,1) presented in Section 4.1 and adjusted either DTorNreuseto increase NSGD. The experiments are listed below: 1.NSGD= 1: The baseline experiment with the quadruple (1,64,64,1). 2.NSGD= 2: We ran two experiments with the quadruples (2,64,32,1) and (2,64,64,2). 3.NSGD= 4: We ran two experiments with the quadruples (4,64,16,1) and (4,64,64,4). The experimental results can be found in Figure 16. Figure 16: Results of Ablation Experiments 6. Off-policy training with increased NSGDby either decreasing DTor increasing Nreuseaccelerates entropy collapse and exhibits worse test performance. Left: Entropy of generated responses during RL training. Right: Test performance during RL training. As shown in Figure 16, experiments with NSGD∈ {2,4}exhibit faster policy convergence, with entropy decaying to very small values within a few training steps. As a result, test performance fails to improve consistently once the model enters a low-entropy state. In contrast, using an on-policy update with the configuration (1,64,64,1)significantly alleviates this issue, leading to a gradual decline in entropy and a steady, albeit slower, improvement in test performance. Ultimately, the on-policy update with configuration (1,64,64,1)achieves superior test performance when the number of training steps is sufficiently large. Off-Policy Data Harms Test Performance. We now investigate which factor in off-policy updates is more likely to contribute to degraded test performance. We identify the following two potential contributors that may influence the gradient direction in each SGD step: (1) the mini-batch size DT, and (2) the use of off-policy data. In the data reuse experiments with Nreuse∈ {2,4}, since DTis held constant and matches the value used in the on-policy setting, we attribute the degraded test performance to the use of off-policy data introduced through rollout batch reuse. 
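To make the decomposition in (4.1) concrete, here is a minimal, illustrative sketch of how one rollout batch can be split into mini-batches and reused to produce NSGD gradient steps; the function names and loop structure are assumptions, not the authors' trainer.

```python
def sgd_steps_per_rollout(rollout_batch, mini_batch_size, n_reuse, update_fn):
    """Perform (len(rollout_batch) / mini_batch_size) * n_reuse SGD steps on one rollout
    batch, as in Eq. (4.1). With mini_batch_size == len(rollout_batch) and n_reuse == 1,
    this reduces to a single, purely on-policy update."""
    n_steps = 0
    for _ in range(n_reuse):  # traverse the rollout buffer N_reuse times
        for start in range(0, len(rollout_batch), mini_batch_size):
            mini_batch = rollout_batch[start:start + mini_batch_size]
            update_fn(mini_batch)  # one SGD step on this mini-batch
            n_steps += 1
    return n_steps

# Example: D_R = 64 prompts, D_T = 16, N_reuse = 1  ->  4 SGD steps per training step.
assert sgd_steps_per_rollout(list(range(64)), 16, 1, lambda mb: None) == 4
```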
In experiments that involve more mini-batches (i.e., DT∈ {16,32}), the performance drop compared to the on-policy update may be | https://arxiv.org/abs/2505.22312v1 |
due to both the smaller mini-batch size – 22 leading to greater gradient variance – and the presence of off-policy data. To better understand which factor contributes more significantly, we conducted the following ablation experiments. Ablation Experiments 7: On-policy vs. Off-policy with the Same SGD Data Size DT Consider the quadruple (NSGD, DR, DT, Nreuse). 1.Off-policy update: We considered two off-policy experiments in Ablation Experiments 6 with the quadruples (2,64,32,1) and (4,64,16,1), which have smaller DTcompared to the baseline (1,64,64,1). 2.On-policy update: We ran two experiments, configured with the quadruples (1,32,32,1) and (1,16,16,1) respectively as the on-policy counterparts to the off-policy update. These were based on the baseline configuration from Section 4.1. The experimental results are reported in Figure 17. Figure 17: Results of Ablation Experiments 7. On-policy experiments, i.e. NSGD= 1, do not exhibit premature entropy collapse and finally outperform the off-policy counterparts with the same DTwhen training step is sufficiently large. Left: Entropy of generated responses during RL training. Right: Test performance at temperature 1 during RL training. The experimental results shown in Figure 17 indicate that the on-policy update with a smaller DT– relative to the baseline experiment – still yields steady improvements in test performance, and premature entropy collapse does not occur. Ultimately, the on-policy update outperforms the off-policy update with the same DTwhen the number of training steps is sufficiently large. Based on these observations, we hypothesize that the degraded test performance in the off-policy update is primarily caused by the introduction of off-policy data in each SGD step. Can a Large DRin Off-Policy Updates Prevent Premature Entropy Collapse? Consider the off-policy experiment in Ablation Experiments 6 with the quadruple (NSGD, DR, DT, Nreuse) = (4 ,64,16,1). We attempted to increase the rollout batch size DRfrom 64 to 256 while keeping NSGD= 4fixed (i.e., resulting in the configuration (NSGD, DR, DT, Nreuse) = (4 ,256,64,1)), with the expectation that this would introduce more diverse samples and prevent convergence on single trajectory. However, our results in Figure 18 indicates that even with a larger DR, premature entropy collapse not only still occurs but may even do so more rapidly. 23 Figure 18: KeepingDR DT= 4andNreuse= 1, off-policy training with a larger DR, i.e., DR= 256, does not prevent the premature entropy collapse. Both off-policy experiments, i.e. NSGD= 4, exhibit faster entropy convergence compared with the on-policy experiment with NSGD= 1. 4.5 Preventing Premature Entropy Collapse As previously discussed, premature entropy collapse is often associated with degraded test performance. It is therefore reasonable to expect that proper entropy control can lead to improved outcomes. As shown earlier, increasing NSGDand introducing off-policy data accelerate entropy convergence. However, there are an increasing number of scenarios where the use of off-policy data is unavoidable – for example, in asynchronous training frameworks. Thus, it is also important to study entropy control mechanisms under off-policy settings. We begin by examining entropy regularization, a straightforward approach that attempts to prevent entropy collapse by directly adding an entropy loss term. 
Our preliminary experiments, presented in Section 4.2, show that applying entropy regularization with an appropriately chosen | https://arxiv.org/abs/2505.22312v1 |
coefficient can mitigate entropy collapse and improve test performance. However, we later observed that the effectiveness of entropy regularization is highly sensitive to both the choice of coefficient and the characteristics of the training data, making it difficult to select an optimal coefficient in advance. This motivates a dynamic adjustment of the entropy loss coefficient. In addition, we consider the clip-higher trick proposed in [ 34] as another means of entropy control. In the following, we present our detailed findings. Entropy Loss Is Sensitive to the Coefficient αk.To demonstrate the sensitivity of entropy loss to the choice of αk, we conduct the following ablation study. Ablation Experiments 8: Entropy Loss with Different Coefficients αk We conducted ablation studies on a wide range of constant coefficients αkbased on Skywork-OR1- Math-7B-stage1 ( notthe baseline experiment in Section 4.1 ). We select αk=1e-4, 5e-4, 1e-3, 5e-3, 1e-2. The other hyperparameters are reported in Table 6. The results are presented in Figure 19. From the results in Figure 19, we find that: •Forαk= 5e-4, 1e-3, 5e-3, and 1e-2, the entropy eventually rises sharply, leading to model collapse. The larger the αk, the more rapidly the entropy increases. 24 Batch Size Mini-batch Size Group Size T Temperature τKL Loss 64 32 16 Stage II 16K 1.0 No Table 6: Shared Hyperparameters in Ablation Experiments 8 Based on Skywork-OR1-Math-7B-stage1 Figure 19: The results of Ablation Experiments 8. Left:The entropy of generated responses during RL training.Right:The AIME24 avg@8 performance at temperature 1 during RL training. •Forαk= 1e-4, while entropy does not exhibit a continuous rise, it still collapses, persistently decreasing toward zero. Entropy Loss Is Sensitive to Training Data. From our two preliminary experiments, we observe that the entropy loss is highly sensitive to variations in training data. We conducted two experiments under identical configurations, both using an entropy loss coefficient of 1e-3. The only difference between the two setups was the training dataset used (both datasets belong to the math domain). The results, shown in Figure 20, reveal a striking difference in entropy dynamics: while the original dataset exhibited a steady decline in entropy throughout training, the new dataset resulted in a consistent upward trend in entropy. This finding highlights the data-dependent nature of tuning the entropy loss coefficient. Adjusting the Coefficient of Entropy Loss Adaptively. Based on our findings regarding the sensitivity of entropy loss, we propose a method called adaptive entropy control (see Section 3.2.5 for details), which dynamically adjusts the entropy loss coefficient during training. As shown in Figure 10, the entropy of Skywork-OR1-Math-7B remains lower-bounded by the target entropy throughout the RL training process. To further validate the effectiveness of adaptive entropy control, we conduct the following ablation experiments. Ablation Experiments 9: Effectiveness of Adaptive Entropy Control Consider the off-policy experiment in Ablation Experiments 6 with (NSGD, DR, DT, Nreuse) = (4,64,16,1), which exhibits fast entropy collapse and bad test performance. Note that there is no entropy loss in this experiment. We ran an experiment based on its configuration with adaptive entropy control (using a target entropy of 0.2) enabled. We report the results | https://arxiv.org/abs/2505.22312v1 |
in Figure 21. As previously analyzed, increasing NSGD accelerates policy convergence and leads to degraded test performance. As shown in Figure 21, applying adaptive entropy control successfully prevents entropy collapse and results in higher test performance. However, it is worth noting that, although the coefficient is adjusted adaptively, entropy remains unstable when NSGD is large. We speculate that this is due to the entropy loss being computed over the entire vocabulary, which may increase the probability of many unintended tokens. Therefore, we do not recommend using adaptive entropy control in scenarios where NSGD is large. Nonetheless, we find that when NSGD = 1 or 2, entropy dynamics remain acceptably stable under adaptive entropy control. Based on these findings, we adopt adaptive entropy control in the training of our Skywork-OR1 models.

Figure 20: Preliminary experiments investigating how training data affects entropy during RL training. Both experiments used the same hyperparameter configuration with α_k = 1e-3 but differed in the training data. Both datasets are in the math domain; simply switching the dataset resulted in dramatically different entropy evolution patterns.

Figure 21: Results of Ablation Experiments 9. Applying adaptive entropy control prevents entropy collapse, leading to better test performance. Left: entropy of generated responses during RL training. Right: test performance during RL training.

The Impact of the Clip-Higher Trick. We tested a popular trick called clip-higher [34], used in PPO-style policy losses to prevent entropy collapse when NSGD > 1. We conducted the following ablation experiments.

Ablation Experiments 10: The Impact of Different Higher-Clip Ratios
Consider the off-policy experiment in Ablation Experiments 6 with the quadruple (NSGD, DR, DT, Nreuse) = (4, 64, 16, 1), which exhibits fast entropy collapse and poor test performance. Note that the clip ratio ε = 0.2 was applied in this experiment. We raised the higher-clip ratio from 0.20 to 0.25, 0.265, and 0.28 while keeping the lower-clip ratio fixed at 0.2. We report the results in Figure 22.

Our results, shown in Figure 22, indicate that using a properly chosen higher-clip ratio – e.g., 0.25 or 0.265 – can prevent premature entropy collapse and lead to better test performance. However, it is worth noting that when the higher-clip ratio is set to 0.28, as suggested in [34], entropy increases sharply, resulting in poor test performance. This suggests that the optimal higher-clip ratio is task-dependent.

Figure 22: Results of Ablation Experiments 10. Increasing the higher-clip ratio to an adequate value (e.g., 0.25 or 0.265) yields slower convergence and better test performance. However, when the higher-clip ratio is set to 0.28 as recommended in [34], entropy rises sharply and test performance is not improved. Left: entropy of generated responses during RL training. Right: test performance during RL training.
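For reference, here is a minimal sketch of the asymmetric ("clip-higher") variant of the clipped loss ablated above. It follows the description in [34] as we understand it and is not the authors' implementation; the function signature and defaults are assumptions, apart from the clip values 0.2 and 0.265 taken from Ablation Experiments 10.

```python
import torch

def clip_higher_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.265):
    """PPO-style token-level loss with an asymmetric clipping range.

    Raising only the upper bound (eps_high > eps_low) lets the policy increase the
    probability of currently unlikely tokens more aggressively, which slows entropy
    collapse; eps_high = 0.265 is one of the well-performing values in Ablation
    Experiments 10."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high) * advantages
    return -torch.min(unclipped, clipped).mean()
```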
5 Empirical Studies on Training Resource Allocation

During the RL training process, our goal is to select hyperparameters that make training both efficient and effective. This objective gives rise to two practical questions:

• Given fixed computational resources, how can we improve training efficiency?
• Given additional computational resources, how should we allocate them to achieve better test performance or improved training efficiency?

In this section, we address these questions in the context of long CoT scenarios, using results from exhaustive
ablation experiments as supporting evidence. The training process of online RL algorithms can generally be divided into two distinct phases: data rollout andpolicy update (which includes both forward and backward passes). Let tR,tT, and tOdenote the time spent on rollout, policy update, and other operations (e.g., reward computation, experience generation), respectively. The total time consumption under a synchronous training framework is: ttotal=tR+tT+tO. 27 Given a fixed context length, the rollout time tRis primarily influenced by the rollout batch size DRand the group size ( gs). As analyzed in Section 4.4, the policy update time tTdepends on the number of SGD steps NSGD, which is determined by the number of mini-batchesDR DTand the data reuse factor Nreuse. In the following subsections, we investigate how these factors impact both training efficiency and final performance. 5.1 Improving Training Efficiency with Fixed Computational Resources In this section, we aim to answer the first question: Given fixed computational resources, how can training efficiency be improved? Figure 23: Overview of empirical studies on improving training efficiency given fixed computational resources. Grey blocks: Potential approaches to enhance training efficiency and their underlying principles. Yellow blocks: Experimental variables in the empirical studies Rollout Time tRDominates the Total Training Time ttotal.A fundamental observation regarding long CoT models (e.g. Deepseek-R1-Distill model series) is that the total training time is primarily determined by the rollout time. Table 7 presents the values of ttotal,tR,tTandtOof Skywork-OR1-32B over 1000 training steps. Clearly, tRdominates ttotal. Time Usagetotal ttotalrollout tRpolicy update tTothers tOtR/ttotal tT/ttotal Hours 309 223 27 59 72.1% 8.7% Table 7: Analysis of training time usage of Skywork-OR1-32B for 1000 training steps. Since the primary bottleneck for ttotalin long CoT training is tR, it is reasonable to expect that appropriately increasing the number of SGD steps per training step, i.e., NSGD, will have minimal impact on ttotalwhile improving training efficiency. Therefore, in the following, we investigate the impact of the number of mini- batches (DR DT) and the data reuse times ( Nreuse) on both the total training time ttotaland test performance. The overall idea of our study is illustrated in Figure 23. More SGD Steps, More Training Efficiency but Worse Performance. We have already examined the impactofincreasing NSGDonentropydynamics, asdiscussedinAblationExperiments6(Section4.4). Consider 28 the configuration tuple (NSGD, DR, DT, Nreuse). We report the detailed time usage for the configurations (1, 64, 64, 1), (2, 64, 32, 1), and (4, 64, 16, 1) in Table 8. It is evident that increasing NSGDleads to a higher tT. However, the impact on the overall training time ttotalremains minor, provided that DRis fixed. Thus, the configurations with NSGD∈ {2,4}perform multiple SGD steps within comparable training time, improving training efficiency. That said, the experimental results in Section 4.4 show that accelerating training via rollout batch decomposition or data reuse leads to faster entropy collapse and poorer test performance. Therefore, we do not recommend increasing NSGDsolely for the purpose of improving training efficiency – unless appropriate mechanisms are in place to mitigate entropy collapse, particularly those caused by off-policy updates – as doing so may result in degraded generalization performance. 
| Experiment $(N_{\text{SGD}}, D_R, D_T, N_{\text{reuse}})$ | total $t_{\text{total}}$ | rollout $t_R$ | policy update $t_T$ | others $t_O$ | $t_R/t_{\text{total}}$ | $t_T/t_{\text{total}}$ |
|---|---|---|---|---|---|---|
| (1, 64, 64, 1) | 116 | 90 | 8 | 18 | 77.6% | 6.9% |
| (2, 64, 32, 1) | 114 | 87 | 10 | 17 | 76.3% | 8.7% |
| (4, 64, 16, 1) | 118 | 90 | 12 | 16 | 76.3% | 10.2% |

Table 8: Detailed time usage for three experiments from Ablation Experiments 6 over 1000 training steps. All the experiments utilized the same training resources (i.e., 32 H800 GPUs).

5.2 Improving Test Performance with More Computational Resources

In this section, we address the second question: given more computational resources, how should training resources be allocated to achieve higher test performance or better training efficiency? Regarding training efficiency, two approaches may be considered. On the one hand, increasing the number of SGD steps – discussed previously – may seem promising; however, the experimental findings do not support the effectiveness of this approach (see Section 5.1). On the other hand, under a fixed rollout budget (i.e., the number of samples to be rolled out), one might expect a significant reduction in rollout time $t_R$ as training resources are scaled up. In practice, however, this expectation is not fully realized. Table 9 shows the rollout time $t_R$ for 1024 samples under varying training resources.

| Number of H800 GPUs | 32 | 64 | 128 | 256 |
|---|---|---|---|---|
| Rollout time $t_R$ (reduction) | 375 | 270 (−105) | 225 (−45) | 205 (−20) |

Table 9: Rollout time $t_R$ (seconds) for generating 1024 responses in one training step. The data show that as computational resources increase, the incremental reduction in $t_R$ diminishes.

Notably, as training resources increase, the reduction in $t_R$ diminishes. This is because $t_R$ is primarily determined by the batch size and the time required to generate the longest response. Once sufficient resources are available, further scaling does not significantly reduce the processing time dominated by the generation of the longest sample. Therefore, when additional training resources are available, a more effective strategy is to increase the rollout budget appropriately, such that the rollout time $t_R$ remains roughly constant or increases only marginally. By leveraging a larger rollout buffer, more accurate gradient estimates can be obtained, which may improve training efficiency and enhance test performance. In the following, we focus on how the rollout budget – determined by the rollout batch size and group size – affects RL performance. The overall idea of these studies is illustrated in Figure 24.

Larger Batch Size, Better Test Performance. To investigate how the rollout batch size $D_R$ affects the training dynamics, we conducted the following ablation experiments.

Figure 24: Overview of empirical studies on the effect of an increased rollout budget when more training resources are available. Grey and green blocks: the motivation of the empirical studies. Yellow blocks: the experimental variables in the empirical studies.

Ablation Experiments 11: The Impact of Rollout Batch Size $D_R$

Consider the quadruple $(N_{\text{SGD}}, D_R, D_T, N_{\text{reuse}})$. We consider the baseline experiment with the quadruple (1, 64, 64, 1) in Section 4.1 and two on-policy experiments in Ablation Experiments 7 with the quadruples (1, 32, 32, 1) and (1, 16, 16, 1), respectively. These three experiments were conducted using 64, 32, and 16 H800 GPUs, respectively. We present the experimental results in Figure 25.

The results in Figure 25 indicate that increasing the rollout batch size $D_R$ in accordance with available | https://arxiv.org/abs/2505.22312v1 |
training resources can lead to better test performance with similar training time consumption. Larger Group Size, Better Test Performance. To investigate how the group size affects the training dynamics, we conducted the following ablation experiments. Ablation Experiments 12: The Impact of Group Size ( gs) Consider the baseline experiment with group size 16 in Section 4.1. We ran two additional on-policy experiments with gs= 8,4respectively. These three experiments were conducted using 64,32 and 16 H800 respectively. The experimental results are presented in Figure 26. It can be observed from Figure 26, given more training resources, increasing rollout budget by increasing the group size can lead to a better test performance with similar total training hours. 6 Dataset Preparation In this section, we introduce the processing pipeline for our RL training data. 30 Figure 25: Results of Ablation Experiments 11. Given more training resources, increasing the rollout budget by increasing DRachieves better test performance with similar total training hours. Figure 26: Results of Ablation Experiments 12. Given more training resources, increasing rollout budget by increasing the group size can achieve better test performance with similar total training hours. 6.1 Data Source Selection and Preprocessing For the math domain, we primarily focus on NuminaMath-1.5 [ 13], a comprehensive dataset containing 896K math problems drawn from widely used sources and advanced mathematical topics. Although the dataset is sufficiently large, its quality requires careful examination prior to use. For the code domain, we find that data source options are more limited, and the overall difficulty of available datasets is generally low relative to the capabilities of current models. In our pilot studies, we experimented with several popular datasets – including CODE-RL [ 12], TACO [ 14], and the Eurus-RL collection [ 2] – in their original mixtures, but obtained unsatisfactory results. Selection Criteria To select and curate high-quality data for RL, we adhere to the following general criteria for both data domains: 1.Verifiable : We exclude problems that cannot be verified, such as proof-based problems and code problems lacking test cases. 31 2.Correct : We filter out math problems with invalid or incorrect answers, as well as code problems without comprehensive test cases. 3.Challenging : We pre-filter problems for which all N generations from the base model are either entirely correct or entirely incorrect. Following these criteria, we incorporate challenging problems from NuminaMath-1.5 and other sources to enhance problem difficulty and diversity in our data mixture: 1) NuminaMath-1.5 subsets: amc aime, olympiads, olympiads ref, aops forum, cn contest, inequalities, and number theory. 2) DeepScaleR. 3) STILL- 3-Preview-RL-Data. 4) Omni-MATH. 5) AIME problems prior to 2024. For the code data mixture, we primarily consider problems from the following two sources, which offer sufficiently challenging coding questions: 1) LeetCode problems [30]. 2) TACO [15]. Preprocessing Pipeline For both math and coding problems, we first perform in-dataset deduplication to eliminate redundancy. For all collected math problems: •We use Math-Verify [ 11] to re-extract answers from the provided textual solutions and retain only those problems where the extracted answer matches the corresponding answer in the dataset. •We remove all instances | https://arxiv.org/abs/2505.22312v1 |
that contain external URLs or potential figures in the problem statement. •We then perform cross-dataset deduplication to eliminate potentially duplicated problems from similar sourcesanddecontaminateagainstAIME24andAIME25problems, followingDeepScaleR’sdeduplication scheme. This process yields approximately 105K math problems. For coding problems, we apply a more rigorous filtering process as follows: •We discard samples with empty, incomplete, or corrupted original unit test cases. •We programmatically verify all test cases using the provided original solutions. A sample is marked as valid only if the solution passes all corresponding test cases perfectly. •We conduct extensive deduplication based on embedding similarity across the collected coding problems, as many share the same problem with only slight variations in instructions. This results in a total of 13.7K coding questions (2.7K from LeetCode and 11K from TACO) in the final dataset. 6.2 Model-Aware Difficulty Estimation Due to the zero-advantage in GRPO when all sampled responses are either entirely correct or entirely incorrect within a group, we conduct an initial offline difficulty estimation for each problem relative to the models being trained. Specifically, for each problem, we perform N=16 rollouts for math problems and N=8 for coding questions using a temperature of 1.0 and a maximum token length of 32K , and use the percentage of correct solutions as a proxy for problem difficulty with respect to a given model. After verifying the correctness of the sampled solutions, we exclude problems with 0/N (all incorrect) or N/N (all correct) rollouts. We report the percentage statistics of discarded and retained math/code problems for both the 7B and 32B models as follows: 0 NCorrectN NCorrect Remaining (math/code) (math/code) (math/code) Deepseek-R1-Distill-Qwen-7B 21.4% / 28% 32.4% / 24% 46.2% / 48% Deepseek-R1-Distill-Qwen-32B 20.7% / 17.1% 42.0% / 45.4% 37.3% / 37.6% 32 6.3 Quality Assessment via Human and LLM-as-a-Judge During the data processing stage, we identified that many problems in the math portion were either incomplete or poorly formatted. Consequently, we conducted an additional round of strict human-LLM- combined inspection to ensure data quality. We sampled a few hundred questions from the remaining pool and asked human evaluators to assess whether each problem met the following criteria: 1.Clear Wording : Is the problem stated in a way that is easy to understand? 2.Complete Information : Does the problem provide all necessary details? 3.Good Formatting : Are the numbers, symbols, and equations clear and appropriately formatted? 4.No Distractions : Is the problem free of irrelevant information? We provide below examples of original problem statements that human evaluators identified as problematic: Incomplete Problems: •6. Five spherical surfaces can divide space int parts. (NuminaMath-1.5, Olympiads) •Which of the following numbers is equal to 33 million? (STILL-3-Preview-RL-Data) •Which number is greater than 0.7 (STILL-3-Preview-RL-Data) •Example 27 Find σ2(28) =?(NuminaMath-1.5, Number Theory) Irrelevant Information: •250. y=ln x3−1 .\n\n 250. y=ln x3−1 .\n\n The above text has been translated into English, retaining the original text’s line breaks and format. However, since the original text is a mathematical expression, the translation is identical to the original as mathematical expressions are generally universal and do not change between languages. (NuminaMath-1.5, Olympiads) •1. 
(12 points) The figure is | https://arxiv.org/abs/2505.22312v1 |
composed of 5 identical squares. The number of triangles that can be formed using the 12 points in the figure as vertices is.10. (12 points) The figure is composed of 5 identical squares. The number of triangles that can be formed using the 12 points in the figure as vertices is. (NuminaMath-1.5, Olympiads) Interestingly, these problems passed the difficulty estimation procedure (i.e., a model can produce a correct answer even when the problem is invalid or incomplete). This indicates that the models answered these problems correctly at least once during the 16 rollouts, suggesting they may have been trained on similar examples or that the answers were trivially guessable. To efficiently curate the entire dataset, we employed Llama-3.3-70B-Instruct and Qwen2.5-72B-Instruct to automatically filter out low-quality problems. Each model was prompted to evaluate a given math problem based on clarity, completeness, formatting, and relevance, and to identify reasons a problem might be considered low quality, ultimately providing a binary rating. This process mimics human assessment while being significantly more efficient. For each problem and each LLM judge, we collected 16 evaluations, resulting in a total of 32 votes per problem. We retained problems that received at least 9 valid votes and removed approximately 1K-2K math questions in total. 33 7 Math & Code Verifiers 7.1 Math Verifiers During the initial stage of all experiments on math reasoning, we conducted several preliminary analyses of the rule-based math verifiers available at the time. These verifiers included: •The original MATH verifier (verl version) •PRIME verifier •Qwen2.5 verifier •DeepScaleR’s verifier •Math-Verify We first sampled a small set of problems along with their associated solutions and answers, and manually examined the quality of their parsers and verifiers. We found that the Qwen2.5 verifier tends to lose information during the parsing process (e.g., when parsing \boxed{a^2}} $ , it fails to retain ^2). We also observed that the PRIME verifier can occasionally stall during execution. As a result, we excluded these two verifiers from further analysis. We then used rollout data from the difficulty estimation procedure and applied the remaining verifiers to evaluate the generated solutions. We plotted the number of problems at each difficulty level (0–8) in Figure 27: Figure 27: Distributions of the number of correct rollouts from DeepSeek-R1-Distill-Qwen-7B, obtained using four different verifiers on a subset of NuminaMath-1.5 problems. The numbers 0–8 indicate difficulty levels. The size of each sector represents the number of problems at a specific difficulty level. Based on a combination of verifier results and human judgments, we observed the following: •Both the original MATH verifier (verl version) and DeepScaleR’s verifier produced higher rates of false positives and false negatives. •For Math-Verify, some implementation details changed as we explored different versions. Therefore, we include both version 0.5.2 and the default version (0.6.0), which we extensively used in model development, noting only trivial differences between them. Note that Math-Verify may still yield incorrect results for solutions with non-standard formatting or mathematical expressions it does not support (e.g., problems with multiple answers). In our final implementation of the reward function, we verify whether the answer | https://arxiv.org/abs/2505.22312v1 |
in a text solution is correct using the following steps: 34 •Extract the answer that appears after the reasoning process. •Use Math-Verify’s parser to parse the answer and obtain its string representation. •If the string representation directly matches the gold answer, return True; otherwise, fall back to Math-Verify’s verify function. •Wrap the gold answer in boxed {}and run the verification to obtain the final result. We find that wrapping the gold answer with boxed{}is a crucial step. Parsing the gold answer directly can alter the mathematical expression. 7.2 Code Sandboxes For unit test execution, we constructed a highly efficient and secure local code sandbox based on Live- CodeBench’s implementation, leveraging subprocess processing. This sandbox supports various testing methods, including standard input-output testing, solution function unit testing, and assertion-based tests. To further enhance its security and robustness, we implemented the following measures: •Syntax validation: We first validate submitted solutions using Abstract Syntax Trees (AST). If syntax errors are detected, the sandbox immediately terminates the test and returns False. •Memory monitoring: During training, we identified potential memory leak risks in some generated solutions. To mitigate this, we integrated a memory monitoring mechanism for each test process. If a process exceeds 50GB of memory usage, the sandbox proactively terminates the test and returns False, effectively preventing resource exhaustion. •Parallel stability optimization: Initially, we used asynchronous testing combined with process pools for parallel execution. However, we later discovered that the sandbox could crash under this setup, leading to incorrect test results. To resolve this, we revised our approach to rely solely on multiprocessing, ensuring stable and efficient parallel execution. Additionally, we conducted a performance comparison between our sandbox and the PRIME sandbox. The results demonstrate the superior effectiveness of our implementation on specific datasets. Notably, the PRIME sandbox occasionally misclassified correct solutions as failures, whereas our sandbox more accurately evaluated solution correctness. It is also important to note a limitation of our sandbox identified during practical usage: it does not currently handle cases where the same input can yield multiple valid outputs. Such cases are common in real-world code testing scenarios involving non-deterministic or open-ended problems. 8 Experiments In this section, we present the experimental results of our three models: Skywork-OR1-Math-7B, Skywork- OR1-7B, and Skywork-OR1-32B. We begin with the details of the training configurations, followed by an analysis of the training results. Finally, we discuss the evaluation outcomes. 8.1 Training and Evaluation Details Training Configurations Below, we describe the training configurations of our Skywork models. The 7B and 32B models are fine-tuned based on DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Qwen-32B, respectively. We collect math and code problems from various sources and apply comprehensive preprocessing, difficulty filtering, and quality control. This ensures a problem mixture that is verifiable, valid, and challenging. See Section 6 for details. Based on this curated mixture, all three models are fine-tuned by optimizing the policy loss (3.1)with a constant learning rate of 1e-6, clip ratio of 0.2, target entropy of 0.2, sampling temperature of 1.0, and rejection sampling. 
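As a concrete illustration of the update just described, below is a minimal sketch of a clipped, token-level policy-gradient loss with no KL term, consistent with the clip ratio of 0.2 used above. It is not the paper's exact loss (3.1) – the rejection-sampling and adaptive entropy-control components are omitted – and all tensor and function names are illustrative.

```python
import torch

def clipped_policy_loss(logp_new, logp_old, advantages, mask, clip_ratio=0.2):
    """Token-level clipped surrogate objective without a KL penalty (illustrative sketch).

    logp_new:   log-probs of the sampled tokens under the current policy, shape (B, T)
    logp_old:   log-probs under the rollout (old) policy, shape (B, T)
    advantages: per-token advantage estimates, shape (B, T)
    mask:       1 for valid response tokens, 0 for padding, shape (B, T)
    """
    ratio = torch.exp(logp_new - logp_old)                           # importance ratio per token
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_ratio, 1.0 + clip_ratio) * advantages
    per_token_loss = -torch.minimum(unclipped, clipped)              # negate to minimize
    return (per_token_loss * mask).sum() / mask.sum()                # mean over valid tokens

# Toy shapes only; real inputs come from the rollout and forward passes.
B, T = 2, 6
loss = clipped_policy_loss(torch.randn(B, T), torch.randn(B, T),
                           torch.randn(B, T), torch.ones(B, T))
print(float(loss))
```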
Notably, we do not apply any KL loss in our training process, as discussed in Section | https://arxiv.org/abs/2505.22312v1 |
3.2.6. Please refer to Section 3.1 for more details on the policy update procedure. All experiments use multi-stage training. We report the detailed configuration for each training stage in Table 10, Table 11, and Table 12. The released checkpoints correspond to step 2160 for Skywork-OR1-Math-7B, step 1320 for Skywork-OR1-7B, and step 1000 for Skywork-OR1-32B.

| Stage | Steps | Context Length T | Batch Size | Mini-batch Size | Group Size |
|---|---|---|---|---|---|
| 1 | 0–740 | 8K | 256 | 128 | 16 |
| 2 | 740–1740 | 16K | 256 | 128 | 16 |
| 3 | 1740–2080 | 32K | 256 | 128 | 16 |
| 3.5 | 2080–2160 | 32K | 128 | 64 | 64 |

Table 10: Training configurations of Skywork-OR1-Math-7B.

| Stage | Steps | Context Length T | Batch Size | Mini-batch Size | Group Size |
|---|---|---|---|---|---|
| 1 | 0–660 | 16K | 256 | 256 | 16 |
| 2 | 660–1320 | 32K | 160 | 160 | 32 |

Table 11: Training configurations of Skywork-OR1-7B.

| Stage | Steps | Context Length T | Batch Size | Mini-batch Size | Group Size |
|---|---|---|---|---|---|
| 1 | 0–760 | 16K | 256 | 256 | 16 |
| 2 | 760–1130 | 24K | 160 | 160 | 32 |

Table 12: Training configurations of Skywork-OR1-32B.

Benchmarks & Baselines. We evaluate our models on challenging benchmarks. For math capabilities, we assess performance on the American Invitational Mathematics Examination (AIME) 2024 and 2025. For coding capabilities, we use LiveCodeBench [10] (from 2024-08 to 2025-02). We compare against several strong baselines, including DeepSeek-R1 [3], Qwen3-32B [32], QwQ-32B [25], Light-R1-32B [29], TinyR1-32B-Preview [27], and several 7B RL models based on DeepSeek-R1-Distill-Qwen-7B, such as AceReason-Nemotron-7B [1], AReaL-boba-RL-7B [18], and Light-R1-7B-DS [29].

Evaluation Setup. We set the maximum generation length to 32,768 tokens for all models. For AIME24/25, we report avg@32 performance; for LiveCodeBench (2024-08 to 2025-02), we report avg@4 performance. Responses are generated using a temperature of 1 and top-p of 1. The avg@$n$ metric is defined as

$$\text{avg@}n = \frac{1}{n}\sum_{i=1}^{n} \mathbb{I}\{(x, y_i)\ \text{is correct}\},$$

where $x$ is the evaluation question and $y_i$ is the $i$-th response.

8.2 Evaluation Results of Skywork-OR1 Models

As shown in Table 13, Skywork-OR1 models achieve significant improvements over their base SFT models (e.g., the DeepSeek-R1-Distill series). Specifically, Skywork-OR1-32B achieves scores of 82.2 on AIME24, 73.3 on AIME25, and 63.0 on LiveCodeBench, outperforming strong contemporary models such as DeepSeek-R1 and Qwen3-32B on key math benchmarks and setting new SOTA records at the time of release. Skywork-OR1-7B scores 70.2 on AIME24, 54.6 on AIME25, and 47.6 on LiveCodeBench, demonstrating competitive performance relative to similarly sized models across both math and coding tasks.

| Model | AIME 24 avg@32 | AIME 25 avg@32 | LiveCodeBench (2024-08 – 2025-02) avg@4 |
|---|---|---|---|
| 7B Models | | | |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
| AReaL-boba-RL-7B | 61.9 | 48.3 | – |
| AceReason-Nemotron-7B | 69.0 | 53.6 | 51.8 |
| Skywork-OR1-Math-7B | 69.8 | 52.3 | 43.6 |
| Skywork-OR1-7B | 70.2 | 54.6 | 47.6 |
| ≥32B Models | | | |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
| TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
| Light-R1-32B | 76.6 | 64.6 | – |
| QwQ-32B | 79.5 | 65.3 | 61.6 |
| Qwen3-32B | 81.4 | 72.9 | 65.7 |
| DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
| Skywork-OR1-32B | 82.2 | 73.3 | 63.0 |

Table 13: Comparison of Skywork-OR1 models and other models on reasoning-related benchmarks.

Our earlier released model, Skywork-OR1-Math-7B, also delivers competitive results among models of similar size, scoring 69.8 on AIME24, 52.3 on AIME25, and 43.6 on LiveCodeBench. These SOTA results are especially noteworthy given that they are obtained through fine-tuning the DeepSeek-R1-Distill series – SFT base models with relatively modest initial performance – clearly demonstrating the substantial impact of our pipeline. | https://arxiv.org/abs/2505.22312v1 |
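A minimal sketch of the avg@$n$ metric defined in the evaluation setup above (a hypothetical helper, not the evaluation code released with the models), assuming a user-supplied correctness check:

```python
from typing import Callable, Sequence

def avg_at_n(question: str, responses: Sequence[str],
             is_correct: Callable[[str, str], bool]) -> float:
    """avg@n = (1/n) * sum_i 1{(x, y_i) is correct}."""
    return sum(is_correct(question, y) for y in responses) / len(responses)

# Toy usage with a placeholder check (a real check would call a math or code verifier):
samples = ["... so the answer is \\boxed{42}", "... so the answer is \\boxed{41}"]
print(avg_at_n("toy question", samples, lambda x, y: y.endswith("\\boxed{42}")))  # 0.5
```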
9 Conclusion

In this work, we present Skywork-OR1, an effective and scalable reinforcement learning (RL) implementation for enhancing | https://arxiv.org/abs/2505.22312v1 |
the reasoning capabilities of long CoT models. Building upon the DeepSeek-R1-Distill model series, our RL approach achieves significant performance improvements on various mathematical and coding benchmarks. The Skywork-OR1-32B model outperforms both DeepSeek-R1 and Qwen3-32B on AIME24 and AIME25, while delivering comparable results on LiveCodeBench. Additionally, the Skywork-OR1-7B and Skywork-OR1-Math-7B models demonstrate competitive reasoning performance among similarly sized models. Our comprehensive ablation studies validate the effectiveness of the core components in our training pipeline, including data mixture and filtration, multi-stage training without advantage masking, high-temperature sampling, exclusion of KL loss, and adaptive entropy control. We conduct extensive investigations into entropy collapse phenomena, identifying key factors that influence entropy dynamics. Our findings show that preventing premature entropy collapse is critical for achieving optimal test performance, offering valuable insights for future research and development. Furthermore, we explore how different training resource allocations affect both training efficiency and final model performance. References [1]Yang Chen, Zhuolin Yang, Zihan Liu, Chankyu Lee, Mohammad Shoeybi Peng Xu, and Wei Ping Bryan Catanzaro. Acereason-nemotron: Advancing math and code reasoning through reinforcement learnin, 2025. 37 [2]Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou, and Ning Ding. Process reinforcement through implicit rewards. CoRR, abs/2502.01456, 2025. [3] DeepSeek-AI. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. [4]Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, and Baobao Chang. Omni-math: A universal olympiad level mathematic benchmark for large language models, 2024. [5]Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models.arXiv preprint arXiv:2407.21783 , 2024. [6]Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [7]Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. Skywork open reasoner series. https://capricious-hydrogen-41c.notion.site/S kywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680 , 2025. Notion Blog. [8]Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open- reasoner-zero: An open source approach to scaling up reinforcement learning on the base model, 2025. [9]Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Hel- yar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. 
[10]Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint , 2024. [11]Hynek Kydlíček. Math-verify: A | https://arxiv.org/abs/2505.22312v1 |
robust mathematical expression evaluation system. https://github.c om/huggingface/Math-Verify , 2025. Version 0.6.1. [12]Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu-Hong Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022 , 2022. [13]Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. Numinamath. [https://huggingface.co/AI-MO/NuminaMath-1.5](ht tps://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.p df), 2024. [14]Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, and Ge Li. TACO: topics in algorithmic code generation dataset. CoRR, abs/2312.14852, 2023. [15]Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, and Ge Li. Taco: Topics in algorithmic code generation dataset. arXiv preprint arXiv:2312.14852 , 2023. [16]Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepcoder: 38 A fully open-source 14b coder at o3-mini level. https://pretty-radio-b75.notion.site/DeepCoder -A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51 , 2025. Notion Blog. [17]Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl. https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-P review-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2 , 2025. Notion Blog. [18] Ant Research RL Lab. Areal: Ant reasoning rl. https://github.com/inclusionAI/AReaL , 2025. [19]John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438 , 2015. [20]John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. [21]Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [22]Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction . MIT press, 2nd edition, 2018. [23]Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057–1063, 1999. [24]Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 , 2025. [25] Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025. [26]RUCAIBox STILL Team. Still-3-1.5b-preview: Enhancing slow thinking abilities of small models through reinforcement learning. 2025. 
[27] TinyR1 Team. Superdistillation achieves near-r1 performance with just 5 [28]Liang Wen, Yunke Cai, Fenrui Xiao, Xin He, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, et al. Light-r1: Curriculum sft, dpo and rl for long cot from scratch and beyond. arXiv preprint arXiv:2503.10460 , 2025. [29]Liang Wen, Fenrui Xiao, Xin He, Yunke Cai, | https://arxiv.org/abs/2505.22312v1 |
Zhenyu Duan Qi An, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng, Shousheng Jia, and Xiangzheng Zhang. Light-r1: Surpassing r1-distill from scratch with $1000 through curriculum sft & dpo, 2025. [30]Yunhui Xia, Wei Shen, Yan Wang, Jason Klein Liu, Huifeng Sun, Siyue Wu, Jian Hu, and Xiaolong Xu. Leetcodedataset: A temporal dataset for robust evaluation and efficient training of code llms. arXiv preprint arXiv:2504.14655 , 2025. [31]Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. Logic-rl: Unleashing llm reasoning with rule-based reinforcement learning, 2025. [32]An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi 39 Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 technical report. arXiv preprint arXiv:2505.09388 , 2025. [33]An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [34]Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476 , 2025. [35]Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, Xiangpeng Wei, et al. Vapo: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118 , 2025. [36]Yuxin Zuo, Kaiyan Zhang, Shang Qu, Li Sheng, Xuekai Zhu, Biqing Qi, Youbang Sun, Ganqu Cui, Ning Ding, and Bowen Zhou. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084 , 2025. 40 | https://arxiv.org/abs/2505.22312v1 |
Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start Lai Wei1,3Yuting Li1Kaipeng Zheng1,2Chen Wang3Yue Wang3Linghe Kong1 Lichao Sun4Weiran Huang1,2∗ 1School of Computer Science, Shanghai Jiao Tong University 2Shanghai Innovation Institute 3Zhongguancun Academy4Lehigh University MathVision MathVerse MathVista We-Math Average1020304050607080Score 27.044.068.2 64.1 50.8Qwen2.5-VL-3B LMM-R1-3B TBAC-VLR1-3B VLAA-Thinker-3B Ours (3B) MathVision MathVerse MathVista We-Math Average1020304050607080Score 30.348.673.4 70.4 55.7Qwen2.5-VL-7B MM-Eureka-7B ThinkLite-VL-7B VLAA-Thinker-7B Ours (7B) Figure 1: Performance comparison between our models and other advanced models on different multimodal reasoning benchmarks at both the 3B and 7B scales. Abstract Recent advancements in large language models (LLMs) have demonstrated im- pressive chain-of-thought reasoning capabilities, with reinforcement learning (RL) playing a crucial role in this progress. While “aha moment” patterns–where models exhibit self-correction through reflection–are often attributed to emergent proper- ties from RL, we first demonstrate that these patterns exist in multimodal LLMs (MLLMs) prior to RL training but may not necessarily correlate with improved reasoning performance. Building on these insights, we present a comprehensive study on enhancing multimodal reasoning through a two-stage approach: (1) super- vised fine-tuning (SFT) as a cold start with structured chain-of-thought reasoning patterns, followed by (2) reinforcement learning via GRPO to further refine these capabilities. Our extensive experiments show that this combined approach con- sistently outperforms both SFT-only and RL-only methods across challenging multimodal reasoning benchmarks. The resulting models achieve state-of-the-art performance among open-source MLLMs at both 3B and 7B scales, with our 7B model showing substantial improvements over base models (e.g., 66.3% →73.4% on MathVista, 62.9% →70.4% on We-Math) and our 3B model achieving perfor- mance competitive with several 7B models. Overall, this work provides practical guidance for building advanced multimodal reasoning models. Our code is avail- able at https://github.com/waltonfuture/RL-with-Cold-Start . ∗Correspondence to Weiran Huang (weiran.huang@outlook.com).arXiv:2505.22334v1 [cs.CL] 28 May 2025 1 Introduction LLMs have demonstrated remarkable progress in chain-of-thought reasoning, most notably exempli- fied by OpenAI’s o1, o3, and o4 models [ 19]. Follow-up studies have sought to replicate and extend these complex reasoning abilities, revealing that targeted post-training methods can further improve model performance on challenging tasks [ 13,14,44,63]. Among these, DeepSeek-R1-Zero [ 14] demonstrated that an “aha moment” can autonomously emerge during reinforcement learning (RL), showcasing advanced problem-solving strategies. DeepSeek-R1 [ 14] further showed that RL with supervised fine-tuning (SFT) as a cold start is particularly effective at enhancing the overall reasoning ability of LLMs. While these advances predominantly focus on text-only models, the potential of incorporating similarly complex reasoning strategies within Multimodal Large Language Models (MLLMs) has remained relatively underexplored. Recent follow-up works have attempted to advance the reasoning ability in MLLMs [10, 34, 54, 58, 60,69] using various techniques, including SFT and RL scaling. 
Many works focus on Zero RL (i.e., directly applying RL without cold start) training, claiming that the “aha moment” can emerge through Zero RL [ 34,69] in MLLMs, which indicate improved reasoning ability. Some concurrent works assert that Zero RL outperforms the combination of SFT and RL [4, 61]. In this paper, we first observe that the so-called “aha moment” pattern already exists in MLLMs before RL training, but this presence does not necessarily correlate with | https://arxiv.org/abs/2505.22334v1 |
improved reasoning capabilities. Our analysis reveals that, while RL may increase the frequency of such a reflective pattern, it may be more of a mirage rather than an indicator of genuine improvements in reasoning capability. This finding challenges the prevailing assumptions about emergent reasoning in MLLMs and suggests a need for more deliberate approaches to enhancing multimodal reasoning. Motivated by DeepSeek-R1’s approach of collecting cold-start data to fine-tune the model as the initial RL actor, we conduct a comprehensive empirical study of SFT and RL in a unified framework in multimodal domain. Our methodology consists of two stages: (1) a cold start phase where we develop multimodal Chain-of-Thought [ 51] (CoT) patterns for supervised fine-tuning, and (2) a reinforcement learning phase using GRPO [ 40] to further enhance reasoning capabilities. In particular, we systematically investigate how different cold start strategies during SFT affect downstream RL performance in the multimodal domain, aiming to establish a robust foundation for the subsequent RL training. Our experiments demonstrate that SFT-based cold start provides a strong basis for RL scaling. Our approach achieves state-of-the-art results among open-source models at both parameter scales, as illustrated in Figure 1. For the 7B model, we observe substantial improvements over the base models (e.g., 66.3% →73.4% on MathVista, 62.9% →70.4% on We-Math), surpassing all models at the same scale and even outperforming some larger models, including GPT-4o and Skywork R1V-38B. Notably, our best 7B model achieves an average score improvement of +6.19 points across all benchmarks compared to the base model. Similarly, our 3B model demonstrates significant gains, achieving an average improvement of +10.84 points and performance competitive with, or even exceeding, several 7B models such as Qwen2.5-VL-7B and VLAA-Thinker-7B–highlighting the effectiveness of our training paradigm. To further elucidate the interplay between SFT and RL, we conduct extensive ablation studies exploring how various SFT strategies and data qualities impact subsequent RL performance. Our findings reveal that while both SFT-only and RL-only approaches yield notable improvements over base models, their combination consistently delivers superior performance across a wide range of experiments. Furthermore, we observe a strong positive correlation between cold start (SFT) performance and final model quality after RL, suggesting that investing in high-quality supervision during the initial stage is crucial for maximizing overall gains. Our deeper analysis indicates that the structural patterns present in Chain-of-Thought reasoning can be learned independently of solution correctness, underscoring the importance of reasoning format, while the presence of the “aha moment” pattern does not necessarily correlate with enhanced reasoning ability. Our contribution can be summarized as follows: •We empirically investigate the prevalence and effectiveness of “aha moment” patterns in MLLMs, demonstrating that these reflective patterns exist prior to RL but may not necessarily indicate enhanced reasoning capabilities. 
[Figure 2: grouped bar charts for Qwen2.5-VL, VLAA-Thinker, and MM-EUREKA. Frequency (%) of responses with an "aha moment": 6.07 / 6.59 / 11.98 (without: 93.96 / 93.41 / 88.02); Accuracy (%) with an "aha moment": 20.17 / 18.98 / 18.68 (without: 22.24 / 23.02 / 27.56).]
Figure 2: The frequency and accuracy of models' responses with and without "aha moment". The results show that the presence of "aha moment" does not necessarily correlate with higher accuracy.

•Based on the above | https://arxiv.org/abs/2505.22334v1 |
insights, we present the first comprehensive study of the impact of supervised fine-tuning as a cold start for reinforcement learning in the multimodal domain, exploring diverse SFT strategies to enhance reasoning abilities. •Empirical results demonstrate that an SFT-based cold start provides a robust foundation for RL scaling. The combination of targeted SFT and subsequent RL yields substantial improvements across challenging multimodal reasoning benchmarks. Using this simple yet effective approach, we train state-of-the-art models at both the 3B and 7B parameter scales. 2 Related Work Recently, the mathematical reasoning abilities of MLLMs have become a central focus of re- search [ 11,12,15–17,23,57,67,70]. In contrast to traditional LLM-based mathematical reason- ing [32,62], which primarily relies on text, multimodal approaches must both process and interpret visual inputs, significantly increasing the complexity of tasks such as geometric problem-solving and chart interpretation [ 5,33]. Several works in this field have sought to collect or synthesize a large scale of math-centric multimodal data with explicit reasoning steps [ 9,35,42,65]. Notably, the recent emer- gence of o1-like reasoning models [ 19] represents an initial step toward activating the slow-thinking capabilities of MLLMs, as demonstrated by several SFT-based methods, such as LLaV A-CoT [ 54], LLaV A-Reasoner [ 66], MAmmoTH-VL [ 15], and Mulberry [ 58]. Moreover, some concurrent works have further explored reinforcement learning approaches, particularly GRPO, in the post-training stage of MLLMs to enhance performance on multimodal reasoning tasks [ 6,10,34,36,41,50,60,69]. These approaches typically follow these strategies: either directly applying supervised fine-tuning (SFT) with long reasoning data or reinforcement learning (RL) to train models from scratch, or first performing SFT before applying RL. In contrast, our work explores a different direction by focusing on how the cold start stage impacts the subsequent RL stage for MLLMs in the unified two-stage post-training (SFT+RL) process. We systematically study various cold start strategies during the SFT phase and empirically demonstrate how they build strong reasoning foundations before applying RL, resulting in more effective multimodal reasoning capabilities. 3Observation: Aha Moment Already Exists but May Not Indicate Advanced Reasoning Ability Recent works claim that reinforcement learning (usually GRPO [ 40]) can trigger reflective thinking patterns, often characterized as an “aha moment” and considered an emergent phenomenon, which showcases the improved reasoning ability of the models [ 34,69]. In our study, we observe that such reflective patterns—indicative of an aha moment—can actually be found in Qwen2.5-VL [ 2] before conducting GRPO. Specifically, we sample the model’s responses 16 times for each query in the MathVision dataset [ 49] and analyze the presence of keywords such as “re-evaluate” and “re-check” in the responses. Interestingly, our analysis reveals that while these reflective expressions already exist, their presence does not necessarily correlate with improved reasoning capabilities. In particular, we conduct 3 experiments on Qwen2.5-VL-7B and other two models tuned using GRPO based on Qwen2.5-VL-7B: MM-EUREKA [ 34] and VLAA-Thinker [ 4]. According to Figure 2, we find that the frequency of “aha moment”-like expressions increases from 6.07 in Qwen2.5-VL to 6.59 in VLAA-Thinker and 11.98 in MM-EUREKA. 
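A minimal sketch of this keyword-based analysis (hypothetical helper names; "re-evaluate" and "re-check" are the only keywords named in the text, so any fuller list would be an assumption), which tags each sampled response and compares accuracy with and without the reflective pattern:

```python
REFLECTION_KEYWORDS = ("re-evaluate", "re-check")  # keywords named above; a real list may be longer

def has_aha_pattern(response: str) -> bool:
    text = response.lower()
    return any(keyword in text for keyword in REFLECTION_KEYWORDS)

def aha_frequency_and_accuracy(samples):
    """samples: list of (response_text, is_correct) pairs from repeated rollouts of one model."""
    samples = list(samples)
    with_aha = [correct for resp, correct in samples if has_aha_pattern(resp)]
    without_aha = [correct for resp, correct in samples if not has_aha_pattern(resp)]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return {
        "freq_with_aha": len(with_aha) / len(samples) if samples else 0.0,
        "acc_with_aha": mean(with_aha),
        "acc_without_aha": mean(without_aha),
    }

# Toy usage with two fake rollouts:
print(aha_frequency_and_accuracy([
    ("Let me re-check the angles ... so angle A is 100.", False),
    ("The angles of the parallelogram give angle A = 130.", True),
]))
```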
However, the associated accuracies when such expressions appear actually drop. | https://arxiv.org/abs/2505.22334v1 |
Specifically, the accuracy is 20.17 for Qwen2.5-VL, but only 18.98 for VLAA-Thinker and 18.68 for MM-EUREKA in these “aha moment” cases. These findings suggest that the so-called “aha moment” may be more of a mirage than a reliable indicator of genuine reasoning ability. While GRPO can increase the frequency of such patterns, they may not correlate with actual enhancement of reasoning performance. In many cases, these reflective patterns are ineffective or even misleading, sometimes introducing hallucinations rather than improving solution quality. We illustrate this issue with a representative example below, where the model does begin to reflect and re-evaluate—but its reasoning remains flawed and hallucinatory. We also provide more case studies in Appendix D.1. Demo: An example of “Aha Moment” which does not indicate effective reflection pattern. Question: In the given parallelogram ABCD, if the sum of angle B and angle D measures 100◦, what is the measure of angle A? Qwen2.5-VL’s Response: To solve this problem, we need to use the properties of a parallelogram. ... Now, assume: angleB=x, angleD= 180◦−x.Given: x+ (180◦−x) = 100◦⇒180◦= 100◦ This is incorrect, so we need to re-evaluate the problem. The correct interpretation should be that the sum of angleBand angle Dis180◦, not100◦. Therefore, the measure of angle Ais:140◦ To address this, we argue that reinforcement learning alone may struggle to induce new and effective reasoning strategies. Instead, we propose a two-stage approach: (1) supervised fine-tuning (SFT) to initialize the model with high-quality, reasoning-rich patterns, followed by (2) reinforcement learning to activate and refine these patterns. 4 Reinforcement Learning with Cold Start In this section, we present our methodology for enhancing multimodal reasoning capabilities via reinforcement learning with cold start. Our method consists of two main stages: (1) a cold start phase where we explore to construct diverse reasoning patterns in Chain-of-Thought [ 51] (CoT) data for supervised fine-tuning, and (2) a reinforcement learning phase to further activate and improve the model’s reasoning capability. 4.1 Cold Start As demonstrated in the previous section, we observe that the current aha-moment like reflection patterns may not be effective, and directly applying RL does not sufficiently enable models to acquire strong reasoning capabilities in multimodal domain. We argue that it is crucial to first conduct supervised fine-tuning (SFT) as a cold start to equip the model with strong reasoning abilities before introducing RL. In our study, we focus on exploring efficient ways to construct multimodal cold-start data for advanced reasoning. Specifically, we leverage synthetic data to generate different types of CoT reasoning patterns via distillation. Distilled-CoT. Synthesizing data using model distillation is a powerful and widely-used tech- nique [ 7,24,25]. Here, we use larger models (Qwen2.5-VL-7B [ 2] and Qwen2.5-VL-32B [ 2]) to generate CoT responses from a seed dataset with ground truth annotations by rejection sam- pling [ 46,47]. These distilled datasets are used for the supervised fine-tuning that serves as a cold start stage. More details of the distillation are shown in Appendix A.2. Reflection-CoT. Recent studies highlight the great potential in solving challenging problems through explicit reflection [ 9,14,39]. To explore this, we consider two | https://arxiv.org/abs/2505.22334v1 |
reflection-based CoT settings.

Reflection-CoT (v1): In the rejection sampling process, we obtain both correct and incorrect responses, denoted as $y^{+}$ and $y^{-}$. We explore a simple two-step reasoning pattern inspired by prior work [9]:

$y^{-}$ + "Wait, perhaps we could consider it from a different perspective. Let's re-evaluate the problem step by step to ensure accuracy." + $y^{+}$

This approach combines the incorrect response with the correct one, and resembles the Best-of-N test-time scaling strategy (with N=2), where the model selects the best response during multiple generations via reflection.

[Figure 3: pipeline diagram — a teacher MLLM distills a seed dataset into an SFT dataset used for the cold-start supervised fine-tuning of the MLLM, after which reinforcement learning on an RL dataset yields the final MLLM (SFT+RL).]
Figure 3: Method overview. Our approach consists of two stages: (1) a cold start phase using supervised fine-tuning with Chain-of-Thought data, and (2) a reinforcement learning phase using GRPO to further enhance reasoning capabilities.

Reflection-CoT (v2): As noted in Section 3, we find that the "aha moment" already exists before RL training. Building on this, we conduct further rejection sampling using Qwen2.5-VL-32B to collect correct responses that exhibit such a reflection pattern.

Caption-CoT. We utilize a classical multimodal CoT pattern [11, 45, 54], where the model first describes an image and then provides an answer. This approach aims to encourage the model to focus on image details before engaging in reasoning. Specifically, we generate a caption for each image using Qwen2.5-VL-7B [2] and concatenate the caption with the previously sampled correct response.

Self-Critic-CoT. Additionally, we adopt the prompting strategy in Wen et al. [53] to collect Self-Critic-CoT via rejection sampling. Specifically, Self-Critic-CoT is an iterative reasoning structure in which models engage in self-comment and refinement of their initial responses. In this process, the model first generates a draft answer, then critically evaluates its own response by providing detailed comments, before producing a refined final output. The prompt used for this process is detailed in Appendix A.1.

Generally, we explore several popular and straightforward methods to construct the CoT datasets for cold start. By conducting SFT on the aforementioned datasets, we aim to establish a robust cold-start foundation for the subsequent RL training stages.

4.2 Reinforcement Learning

After the cold start stage, we employ reinforcement learning using the GRPO algorithm [40], following the strategy in DeepSeek-R1 [14], to further activate the reasoning ability of the models. In particular, GRPO improves computational efficiency by eliminating the need for a separate value model; instead, it directly utilizes group-normalized rewards to estimate advantages. Specifically, for a question $q$ and the correlated image $I$ from the training dataset $Q$, GRPO samples a group of responses $O=\{o_i\}_{i=1}^{G}$ from the old policy $\pi_{\theta_{\text{old}}}$ and then optimizes the policy model by maximizing the following objective:

$$J(\theta) = \mathbb{E}_{(q,I)\sim Q,\ \{o_i\}_{i=1}^{G}\sim \pi_{\theta_{\text{old}}}(O\mid q,I)}\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left(\min\left[\gamma_{i,t}(\theta)\hat{A}_{i,t},\ \mathrm{clip}\big(\gamma_{i,t}(\theta),\,1-\epsilon,\,1+\epsilon\big)\hat{A}_{i,t}\right]-\beta\, D_{\mathrm{KL}}\big[\pi_{\theta}\,\|\,\pi_{\mathrm{ref}}\big]\right)\right],$$

where $\gamma_{i,t}(\theta)=\frac{\pi_{\theta}(o_{i,t}\mid q,\,o_{i,<t})}{\pi_{\theta_{\text{old}}}(o_{i,t}\mid q,\,o_{i,<t})}$, $\pi_{\mathrm{ref}}$ represents the reference model, and the term $D_{\mathrm{KL}}$ introduces a KL divergence constraint to limit how much the model can deviate from this reference. The advantage estimate $\hat{A}_i$ measures how much better the response $o_i$ is compared to the average response, and is computed using a group of rewards $\{r_1, r_2, \ldots, r_G\}$ for the responses in the set $O$:

$$\hat{A}_i=\frac{r_i-\mathrm{mean}(\{r_1, r_2, \ldots, r_G\})}{\mathrm{std}(\{r_1, r_2, \ldots, r_G\})}.$$

Table 1: Performance comparison on different multimodal reasoning benchmarks. Our models achieve state-of-the-art performance at the 3B and 7B scales.

| Models | Size | MathVision | MathVerse | MathVista | We-Math | Average |
|---|---|---|---|---|---|---|
| Closed-Source Models | | | | | | |
| GPT-4o [18] | – | 33.95 | 48.83 | 59.50 | 65.00 | 51.82 |
| Claude-3.5-Sonnet [1] | – | 46.48 | 57.64 | 68.20 | 73.05 | 61.34 |
| Open-Source Multi-Modal Large Reasoning Models | | | | | | |
| QvQ-Preview [2] | 72B | 35.56 | 52.81 | 69.20 | 65.29 | 55.72 |
| Skywork R1V [36] | 38B | 39.31 | 40.91 | 67.50 | 60.06 | 51.95 |
| Open-Source 7B Multi-Modal Models | | | | | | |
| Qwen2.5-VL [2] | 7B | 24.87 | 43.83 | 66.30 | 62.87 | 49.47 |
| MM-Eureka [34] | 7B | 28.36 | 49.52 | 70.40 | 66.03 | 53.58 |
| OpenVLThinker [10] | 7B | 25.30 | 42.79 | 64.10 | 63.91 | 49.03 |
| ThinkLite-VL [50] | 7B | 27.71 | 47.41 | 69.00 | 67.99 | 53.03 |
| VLAA-Thinker [4] | 7B | 26.61 | 47.49 | 66.20 | 66.49 | 51.70 |
| Ours | 7B | 30.26 (+5.39) | 48.58 (+4.75) | 73.40 (+7.10) | 70.40 (+7.53) | 55.66 (+6.19) |
| Open-Source 3B Multi-Modal Models | | | | | | |
| Qwen2.5-VL [2] | 3B | 19.47 | 33.58 | 56.30 | 50.63 | 40.00 |
| LMM-R1 [60] | 3B | 25.30 | 42.79 | 64.10 | 63.91 | 49.03 |
| TBAC-VLR1 [55] | 3B | 25.03 | 38.17 | 59.80 | 63.28 | 46.57 |
| VLAA-Thinker [4] | 3B | 22.41 | 35.44 | 59.40 | 56.03 | 43.32 |
| Ours | 3B | 27.04 (+7.57) | 44.03 (+10.45) | 68.20 (+11.90) | 64.08 (+13.45) | 50.84 (+10.84) |

5 Experiments

To validate the effectiveness of our method, we present our experimental setup and main results in the following sections.

5.1 Experimental Setup

Training Datasets. For the cold start training phase, we curate a diverse dataset of 50k examples from established open-source resources, including Geometry3K [27], GeoQA [5], GeoQA-Plus [3], Geos [38], AI2D [21], TQA [22], FigureQA [20], TabMWP [30], ChartQA [33], IconQA [28], Clevr-Math [26], M3CoT [8], and ScienceQA [29]. These datasets cover a range of tasks, such as chart understanding, scientific question answering, and geometric reasoning. We apply different sampling strategies to construct Chain-of-Thought trajectories following the methodology described in Section 4.1. For the subsequent reinforcement learning phase after cold start, we apply the same dataset as in the cold start phase.

Baselines. We compare our models with a series of advanced multimodal large language models. GPT-4o [18] and Claude-3.5-Sonnet [1] are advanced closed-source MLLMs. QvQ-Preview-72B [2] and Skywork R1V-38B [36] are multimodal large reasoning models specifically designed for R1-like chain-of-thought capabilities. For 3B models, we compare with LMM-R1-3B [60], TBAC-VLR1-3B [55], and VLAA-Thinker-Qwen2.5VL-3B [4]. For models at the 7B scale, our comparisons include MM-Eureka-7B [34], OpenVLThinker-7B [10], ThinkLite-VL-7B [50], and VLAA-Thinker-Qwen2.5VL-7B [4]. These 3B and 7B models are all trained from Qwen2.5-VL and enhanced for multimodal reasoning through various techniques such as fine-tuning, reinforcement learning, and iterative self-improvement for mathematical reasoning.

Table 2: Effective rank and ∆eRank of our models before and after post-training on four multimodal reasoning benchmarks.

| Effective Rank [52] | MathVision | MathVerse | MathVista | We-Math |
|---|---|---|---|---|
| Qwen2.5-VL-3B [2] | 74.30 | 66.25 | 66.43 | 60.21 |
| Ours (3B) | 101.68 | 85.91 | 79.59 | 79.96 |
| Difference of Effective Rank (∆eRank) | +27.38 | +19.66 | +13.16 | +19.75 |
| Qwen2.5-VL-7B [2] | 73.58 | 62.35 | 63.85 | 60.14 |
| Ours (7B) | 90.67 | 77.16 | 75.85 | 73.16 |
| Difference of Effective Rank (∆eRank) | +17.09 | +14.81 | +12.00 | +13.02 |

Evaluation Benchmarks. We evaluate the MLLMs on four prominent | https://arxiv.org/abs/2505.22334v1 |
multimodal mathematical reasoning benchmarks: MathVision [ 48], MathVista [ 31], MathVerse [ 64], and We-Math [ 37]. These benchmarks offer comprehensive evaluations with diverse problem types, including geometry, charts, and tables, featuring multi-subject math problems and meticulously categorized visual math challenges across various knowledge concepts and granularity levels. We provide our evaluation details in Appendix A.3. 5.2 Main Results In our experiments, we conduct reinforcement learning (RL) using GRPO [ 40] with a cold-start strategy. Specifically, we begin with supervised fine-tuning (SFT) on a distilled dataset generated from Qwen2.5-VL-32B [ 2], which serves as a strong teacher model. This distilled data is then used to initialize smaller Qwen2.5-VL variants at both the 7B and 3B scales as the cold start stage. Following this initialization, we apply GRPO to further enhance the models’ multimodal reasoning capabilities. The performance of our models is summarized in Table 1. Our approach yields consistently strong performance across all four multimodal reasoning benchmarks, demonstrating the effectiveness of our training paradigm. In particular, our 7B model delivers an overall average improvement of a +6.19 score over the base model (Qwen2.5-VL-7B). It also achieves state-of-the-art performance among all open-source 7B-scale models, outperforming strong baselines such as MM-Eureka [ 34], VLAA-Thinker [ 4], and ThinkLite-VL [ 50]. Furthermore, our model surpasses many larger models, such as GPT-4o [ 18] and Skywork R1V [ 36]. It is also competitive with QvQ-72B [ 43]. Besides, the 3B variant of our model also sets a new state-of-the-art among open-source 3B multimodal models. It outperforms all existing 3B baselines, including LMM-R1, TBAC-VLR1, and VLAA-Thinker, by significant margins on all four benchmarks. Notably, our 3B model can achieve competitive results on par with, and in some cases surpassing, larger 7B models such as Qwen2.5-VL-7B and VLAA-Thinker-7B. In addition, we also calculate the effective rank and the difference ( ∆eRank ) [52] for models before and after post-training in Table 2. On a fix-sized model, the value of effective rank usually correlates with the amount of knowledge the model comprehends. After post-training (SFT and RL), extra knowledge is injected into the model, which leads to a consistent increase in effective rank on different benchmarks. Overall, these results highlight the effectiveness of our cold-start RL pipeline in enhancing reasoning capabilities. They also demonstrate the potential of scalable training strategies in narrowing the performance gap between small and large multimodal language models. 6 Ablation Studies We conduct various ablation studies to explore the interplay between supervised fine-tuning (SFT) and reinforcement learning (RL). The experiments below are conducted based on Qwen2.5-VL-3B. 6.1 The Effectiveness of Cold Start To better understand the impact of our cold-start strategy, we conduct an ablation study comparing four variants: the base model, RL-only training, SFT-only training (cold start), and the full pipeline 7 Table 3: Ablation study comparing the effectiveness of different training strategies: Supervised Fine-Tuning (SFT) only, Reinforcement Learning (RL) only, and the combined SFT+RL approach on Qwen2.5-VL models at 3B and 7B scales. 
Method MathVision MathVerse MathVista We-Math Average
Qwen2.5-VL-3B [2] 19.47 33.58 56.30 50.63 40.00
+ SFT 25.20 44.39 67.60 60.63 49.46
+ RL 25.10 39.95 66.50 63.62 48.79
+ SFT and RL 27.04 44.03 68.20 64.08 50.84
Qwen2.5-VL-7B [2] 24.87 43.83 66.30 62.87 49.47
+ SFT 28.62 46.90 71.20 67.76 53.62
+ RL 29.80 49.29 73.50 67.82 55.10
+ SFT and RL 30.26 48.58 73.40 70.40 55.66

Table 4: Ablation study on different cold-start strategies for Qwen2.5-VL-3B. We find that the cold-start performance correlates well with the final performance.
Strategies MathVision MathVerse MathVista We-Math Average
None (Base model) 19.47 33.58 56.30 50.63 40.00
+ RL 25.10 39.95 66.50 63.62 48.79
Cold Start (32B-Distilled-CoT) 25.20 44.39 67.60 60.63 49.46
+ RL 27.04 44.03 68.20 64.08 50.84
Cold Start (7B-Distilled-CoT) 24.40 41.95 64.30 59.14 47.45
+ RL 25.86 43.65 66.40 64.13 50.01
Cold Start (Caption-CoT) 22.83 40.74 62.70 58.97 46.31
+ RL 25.26 42.23 65.10 63.16 48.94
Cold Start (Reflection-CoT v1) 23.98 40.43 65.00 60.17 47.39
+ RL 24.21 41.98 66.10 62.59 48.72
Cold Start (Self-Critic-CoT) 23.22 40.91 61.40 57.76 45.87
+ RL 25.46 41.72 66.90 64.08 49.54

The results are presented in Table 3. In particular, for 3B models, SFT provides a substantial boost in performance over the base model, improving the average score from 40.00 to 49.46. While RL alone also improves performance to 48.79, it is slightly less effective than SFT. This suggests that RL without an advanced reasoning prior is less effective at guiding the model toward structured problem-solving. When SFT is followed by RL, the model achieves a new peak of 50.84, demonstrating that the cold-start stage equips the model with strong reasoning patterns, which are further refined by RL. The 7B models follow a similar trend. Starting from a stronger base (49.47), both SFT (+4.15) and RL (+5.63) lead to notable improvements. The best performance is achieved through the combination of SFT and RL, which yields an average score of 55.66, outperforming either individual component. These results collectively highlight that cold-start SFT plays a crucial role in unlocking the full potential of reinforcement learning for multimodal reasoning. SFT with high-quality CoT data represents a more direct and effective approach to enhancing a model's fundamental reasoning capabilities than relying solely on RL. When combined, SFT and RL offer a complementary and scalable path to enhance multimodal reasoning.

6.2 Different Types of CoT Data in SFT
We explore various cold-start strategies beyond direct distillation from a 32B teacher model (Qwen2.5-VL-32B), focusing specifically on the 3B student model. These alternative strategies are detailed in Section 4.1 and are constructed via rejection sampling using Qwen2.5-VL-7B. As shown in Table 4, the choice of cold-start strategy significantly influences final model performance. Among all strategies, using CoT data distilled from a strong 32B teacher model yields the best results, achieving an average score of 49.46 after SFT and 50.84 after subsequent RL. Direct distillation from Qwen2.5-VL-7B also provides substantial improvements over the base model, but is slightly less effective than the 32B teacher, suggesting that the quality and scale of the teacher model are important for constructing high-quality cold-start data. Other strategies, such as Caption-CoT, Reflection-CoT, and Self-Critic-CoT,
also lead to notable gains compared to the base model, but their improvements are generally smaller than those achieved by simple teacher distillation. For example, Caption-CoT and Reflection-CoT (v1) achieve average scores of 46.31 and 47.39 after SFT, respectively, while Self-Critic-CoT yields 45.87. Nevertheless, all these strategies benefit from additional RL, with performance consistently increasing after the RL stage. In addition, we observe a strong positive correlation between the performance of the cold-start stage (SFT) and the final model performance after conducting RL. This suggests that selecting an effective cold-start strategy is crucial for maximizing the overall benefits. In practice, investing effort in constructing strong SFT data may yield greater performance gains than relying on RL alone to enhance the model's reasoning capabilities.

Figure 4: Comparison of model performance when trained on data with "aha moment" patterns (Reflection-CoT v2) versus randomly selected 32B-distilled data (panels: MathVision, MathVerse, MathVista, We-Math, Average; bars: Base Model, Data with Aha Moment, Randomly Selected Data). Models trained on randomly selected data consistently outperform those trained on "aha moment" data, suggesting that these reflective patterns do not necessarily correlate with advanced reasoning capabilities.

Table 5: Impact of data quality on model performance. Results show that while rejection sampling (correct data) yields the best performance, models trained on unjudged or even incorrect data still outperform the base model.
Strategies MathVision MathVerse MathVista We-Math Average
None (Base model) 19.47 33.58 56.30 50.63 40.00
+ RL 25.10 39.95 66.50 63.62 48.79
Cold Start (Correct Data) 25.20 44.39 67.60 60.63 49.46
+ RL 27.04 44.03 68.20 64.08 50.84
Cold Start (Unjudged Data) 25.59 42.08 65.80 60.06 48.38
+ RL 27.83 42.44 67.10 62.64 50.00
Cold Start (Wrong Data) 24.41 37.77 63.30 55.40 45.22
+ RL 24.44 40.38 64.70 63.51 48.26

6.3 Revisiting "Aha Moment"
As we observe in Section 3, the so-called "aha moment" in MLLMs does not necessarily correspond to genuinely useful reasoning patterns. To further investigate this, we conduct a targeted experiment. As described in Section 4.1, we build a dataset by applying rejection sampling to Qwen2.5-VL-32B, retaining only correct responses that exhibit the "aha moment" pattern. This yields 10K samples, referred to as Reflection-CoT (v2). For comparison, we also randomly sample 10K instances from the 32B-distilled dataset. We then perform SFT on each dataset, followed by GRPO. Results in Figure 4 reveal that the model trained on "aha moment" data actually underperforms the model trained on randomly selected data. This suggests that the presence of the "aha moment" in the responses does not necessarily translate into improved reasoning performance.

6.4 Data Quality in SFT
We investigate the impact of data quality on model performance by examining different sampling strategies for SFT data collection: (1) Rejection Sampling: selecting only responses that yield correct answers (our default approach). (2) Wrong-Only Sampling: deliberately selecting responses that lead to incorrect answers. (3) Unjudged Sampling: randomly selecting responses without considering correctness. As shown in Table 5, the highest performance is achieved with rejection sampling, where only correct data is used.
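The three collection strategies can be made concrete with a short sketch. This is an illustrative implementation under assumptions of our own: generate_cot and check_answer are hypothetical stand-ins for the teacher model's sampler and the answer verifier, neither of which is specified here, and the 24-attempt cap follows the rejection-sampling setup described in Appendix A.2.

```python
# Illustrative sketch (not the paper's code) of the three SFT data-collection strategies
# compared in Table 5. `generate_cot` and `check_answer` are hypothetical stand-ins for
# the teacher model's sampler and the answer verifier.
from typing import Callable, Dict, List

def collect_sft_data(
    questions: List[Dict[str, str]],            # each item: {"question": ..., "answer": ...}
    generate_cot: Callable[[str], str],         # samples one CoT response from the teacher
    check_answer: Callable[[str, str], bool],   # judges the final answer against the gold answer
    strategy: str = "rejection",                # "rejection" | "wrong_only" | "unjudged"
    max_tries: int = 24,                        # attempt cap reported in Appendix A.2
) -> List[Dict[str, str]]:
    dataset = []
    for item in questions:
        kept = None
        for _ in range(max_tries):
            response = generate_cot(item["question"])
            correct = check_answer(response, item["answer"])
            if strategy == "unjudged":                      # keep the first sample regardless of correctness
                kept = response
            elif strategy == "rejection" and correct:       # keep only correct CoTs (default)
                kept = response
            elif strategy == "wrong_only" and not correct:  # keep only incorrect CoTs
                kept = response
            if kept is not None:
                break
        if kept is not None:
            dataset.append({"question": item["question"], "cot": kept})
    return dataset
```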
Interestingly, models trained on unjudged data—where correctness is not considered—still demonstrate notable improvements in | https://arxiv.org/abs/2505.22334v1 |
reasoning ability. Even when trained exclusively on incorrect data, the model outperforms the base model after the cold start phase and continues to benefit from subsequent RL. These results suggest that the structural patterns present in Chain-of-Thought reasoning can be learned independently of solution correctness, highlighting the importance of reasoning format in addition to answer accuracy. 7 Conclusion In this paper, we demonstrate that “aha moment” patterns already exist in MLLMs before RL training but may not necessarily correlate with improved reasoning capabilities. We propose a two-stage approach to enhance multimodal reasoning: first conducting supervised fine-tuning (SFT) with high-quality Chain-of-Thought data to establish a strong foundation, followed by reinforcement learning (RL) to further refine these capabilities. Our extensive experiments show that this combined approach consistently outperforms both SFT-only and RL-only methods, achieving state-of-the-art performance among open-source MLLMs at both 3B and 7B scales. Acknowledgement This project is supported by the National Natural Science Foundation of China (No. 62406192), Open- ing Project of the State Key Laboratory of General Artificial Intelligence (No. SKLAGI2024OP12), Tencent WeChat Rhino-Bird Focused Research Program, and Doubao LLM Fund. References [1]Anthropic. The claude 3 model family: Opus, sonnet, haiku. https://www-cdn.anthropic.com/ de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf , 2024. Preprint. [2]Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5-vl technical report. arXiv preprint arXiv:2502.13923 , 2025. [3]Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In Proceedings of the 29th international conference on computational linguistics , pages 1511–1520, 2022. [4]Hardy Chen, Haoqin Tu, Fali Wang, Hui Liu, Xianfeng Tang, Xinya Du, Yuyin Zhou, and Cihang Xie. Sft or rl? an early investigation into training r1-like reasoning large vision-language models. https://github.com/UCSC-VLAA/VLAA-Thinking , 2025. [5]Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric Xing, and Liang Lin. Geoqa: A geometric question answering benchmark towards multimodal numerical reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 , pages 513–523, 2021. [6]Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V , 2025. Accessed: 2025-02-02. [7]Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. In European Conference on Computer Vision , pages 370–387. Springer, 2024. [8]Qiguang Chen, Libo Qin, Jin Zhang, Zhi Chen, Xiao Xu, and Wanxiang Che. M3cot: A novel benchmark for multi-domain multi-step multi-modal chain-of-thought. arXiv preprint arXiv:2405.16473 , 2024. [9]Kanzhi Cheng, Yantao Li, Fangzhi Xu, Jianbing Zhang, Hao Zhou, and Yang Liu. Vision-language models can self-improve reasoning via reflection. arXiv preprint arXiv:2411.00855 , 2024. [10] Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. 
arXiv preprint arXiv:2503.17352 , 2025. [11] Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, and Ziwei Liu. Insight-v: Exploring long-chain visual reasoning with multimodal large language models. arXiv preprint arXiv:2411.14432 , 2024. 10 [12] Dawei Gao, | https://arxiv.org/abs/2505.22334v1 |
Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou. Text-to-sql empowered by large language models: A benchmark evaluation. arXiv preprint arXiv:2308.15363 , 2023. [13] Google. Introducing gemini 2.0: our new ai model for the agentic era, 2024. URL https: //blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/ #ceo-message . [14] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [15] Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237 , 2024. [16] Xiaotian Han, Yiren Jian, Xuefeng Hu, Haogeng Liu, Yiqi Wang, Qihang Fan, Yuang Ai, Huaibo Huang, Ran He, Zhenheng Yang, et al. Infimm-webmath-40b: Advancing multimodal pre-training for enhanced mathematical reasoning. In The 4th Workshop on Mathematical Reasoning and AI at NeurIPS’24 , 2024. [17] Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403 , 2024. [18] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276 , 2024. [19] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Hel- yar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. [20] Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. Figureqa: An annotated figure dataset for visual reasoning. arXiv preprint arXiv:1710.07300 , 2017. [21] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 , pages 235–251. Springer, 2016. [22] Daesik Kim, Seonhoon Kim, and Nojun Kwak. Textbook question answering with multi-modal context graph understanding and self-supervised open-set comprehension. arXiv preprint arXiv:1811.00232 , 2018. [23] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326 , 2024. [24] Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G Patil, Matei Zaharia, et al. Llms can easily learn to reason from demonstrations structure, not content, is what matters! arXiv preprint arXiv:2502.07374 , 2025. [25] Yuetai Li, Xiang Yue, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Bill Yuchen Lin, Bhaskar Ramasubra- manian, and Radha Poovendran. Small models struggle to learn from strong reasoners. arXiv preprint arXiv:2502.12143 , 2025. [26] Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358 , 2022. [27] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. 
Inter-gps: Interpretable
geometry problem solving with formal language and symbolic reasoning. arXiv preprint arXiv:2105.04165 , 2021. [28] Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. arXiv preprint arXiv:2110.13214 , 2021. [29] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems , 35:2507–2521, 2022. 11 [30] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610 , 2022. [31] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255 , 2023. [32] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583 , 2023. [33] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244 , 2022. [34] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, Ping Luo, Yu Qiao, Qiaosheng Zhang, and Wenqi Shao. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning, 2025. URL https://github.com/ModalMinds/MM-EUREKA . [35] Shuai Peng, Di Fu, Liangcai Gao, Xiuqin Zhong, Hongguang Fu, and Zhi Tang. Multimath: Bridging visual and mathematical reasoning for large language models. arXiv preprint arXiv:2409.00147 , 2024. [36] Yi Peng, Xiaokun Wang, Yichen Wei, Jiangbo Pei, Weijie Qiu, Ai Jian, Yunzhuo Hao, Jiachun Pan, Tianyidan Xie, Li Ge, et al. Skywork r1v: Pioneering multimodal reasoning with chain-of-thought. arXiv preprint arXiv:2504.05599 , 2025. [37] Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284 , 2024. [38] Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 conference on empirical methods in natural language processing , pages 1466–1476, 2015. [39] Darsh J Shah, Peter Rushton, Somanshu Singla, Mohit Parmar, Kurt Smith, Yash Vanjani, Ashish Vaswani, Adarsh Chaluvaraju, Andrew Hojel, Andrew Ma, et al. Rethinking reflection in pre-training. arXiv preprint arXiv:2504.04022 , 2025. [40] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [41] Haozhan Shen, Zilun Zhang, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. 
Vlm-r1: A stable and generalizable r1-style large vision-language model. https://github.com/om-ai-lab/VLM-R1 , | https://arxiv.org/abs/2505.22334v1 |
2025. Accessed: 2025-02-15. [42] Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294 , 2024. [43] Qwen Team. Qvq: To see the world with wisdom, December 2024. URL https://qwenlm.github.io/ blog/qvq-72b-preview/ . [44] Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https: //qwenlm.github.io/blog/qwq-32b-preview/ . [45] Omkar Thawakar, Dinura Dissanayake, Ketan More, Ritesh Thawkar, Ahmed Heakl, Noor Ahsan, Yuhao Li, Mohammed Zumri, Jean Lahoud, Rao Muhammad Anwer, et al. Llamav-o1: Rethinking step-by-step visual reasoning in llms. arXiv preprint arXiv:2501.06186 , 2025. [46] Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, and Junxian He. Dart-math: Difficulty-aware rejection tuning for mathematical problem-solving. Advances in Neural Information Processing Systems , 37:7821–7846, 2024. [47] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. 12 [48] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hong- sheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. Advances in Neural Information Processing Systems , 37:95095–95169, 2025. [49] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 9426–9439, 2024. [50] Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, and Lijuan Wang. Sota with less: Mcts-guided sample selection for data-efficient visual reasoning self-improvement. arXiv preprint arXiv:2504.07934 , 2025. [51] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems , 35:24824–24837, 2022. [52] Lai Wei, Zhiquan Tan, Chenghai Li, Jindong Wang, and Weiran Huang. Large language model evaluation via matrix entropy. arXiv preprint arXiv:2401.17139 , 2024. [53] Pengcheng Wen, Jiaming Ji, Chi-Min Chan, Juntao Dai, Donghai Hong, Yaodong Yang, Sirui Han, and Yike Guo. Thinkpatterns-21k: A systematic study on the impact of thinking patterns in llms. arXiv preprint arXiv:2503.12918 , 2025. [54] Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step. CoRR , abs/2411.10440, 2024. [55] Junzhe Xu and Yuyang yin. Tbac-vlr1-3b-preview, 2025. URL https://huggingface.co/ TencentBAC/TBAC-VLR1-3B-preview . [56] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [57] Zhen Yang, Jinhao Chen, Zhengxiao Du, Wenmeng Yu, Weihan Wang, Wenyi Hong, Zhihuan Jiang, Bin Xu, Yuxiao Dong, and Jie Tang. Mathglm-vision: Solving mathematical problems with multi-modal large language model. arXiv preprint arXiv:2409.13729 , 2024. 
[58] Huanjin Yao, Jiaxing Huang, Wenhao Wu, Jingyi Zhang, Yibo Wang, Shunyu Liu, Yingjie Wang, Yuxin Song, Haocheng Feng, | https://arxiv.org/abs/2505.22334v1 |
Li Shen, et al. Mulberry: Empowering mllm with o1-like reasoning and reflection via collective monte carlo tree search. arXiv preprint arXiv:2412.18319 , 2024. [59] Zheng Yaowei, Lu Junting, Wang Shenzhi, Feng Zhangchi, Kuang Dongdong, and Xiong Yuwen. Easyr1: An efficient, scalable, multi-modality rl training framework. https://github.com/hiyouga/EasyR1 , 2025. [60] Peng Yingzhe, Zhang Gongrui, Zhang Miaosen, You Zhiyuan, Liu Jie, Zhu Qipeng, Yang Kai, Xu Xingzhong, Geng Xin, and Yang Xu. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl, 2025. [61] En Yu, Kangheng Lin, Liang Zhao, Jisheng Yin, Yana Wei, Yuang Peng, Haoran Wei, Jianjian Sun, Chunrui Han, Zheng Ge, et al. Perception-r1: Pioneering perception policy with reinforcement learning. arXiv preprint arXiv:2504.07954 , 2025. [62] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284 , 2023. [63] Di Zhang, Jianbo Wu, Jingdi Lei, Tong Che, Jiatong Li, Tong Xie, Xiaoshui Huang, Shufei Zhang, Marco Pavone, Yuqiang Li, et al. Llama-berry: Pairwise optimization for o1-like olympiad-level mathematical reasoning. arXiv preprint arXiv:2410.02884 , 2024. [64] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision , pages 169–186. Springer, 2024. [65] Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, et al. Mavis: Mathematical visual instruction tuning with an automatic data engine. arXiv preprint arXiv:2407.08739 , 2024. 13 [66] Ruohong Zhang, Bowen Zhang, Yanghao Li, Haotian Zhang, Zhiqing Sun, Zhe Gan, Yinfei Yang, Ruoming Pang, and Yiming Yang. Improve vision language model chain-of-thought reasoning. arXiv preprint arXiv:2410.16198 , 2024. [67] Yu Zhang, Kehai Chen, Xuefeng Bai, Zhao Kang, Quanjiang Guo, and Min Zhang. Question-guided knowledge graph re-scoring and injection for knowledge graph question answering. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 8972–8985, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.524. URL https://aclanthology.org/2024. findings-emnlp.524/ . [68] Yuze Zhao, Jintao Huang, Jinghan Hu, Xingjun Wang, Yunlin Mao, Daoze Zhang, Zeyinzi Jiang, Zhikai Wu, Baole Ai, Ang Wang, Wenmeng Zhou, and Yingda Chen. Swift:a scalable lightweight infrastructure for fine-tuning, 2024. URL https://arxiv.org/abs/2408.05517 . [69] Hengguang Zhou, Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero’s "aha moment" in visual reasoning on a 2b non-sft model, 2025. URL https://arxiv.org/abs/2503. 05132 . [70] Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640 , 2024. 14 Appendix A More Implementation Details A.1 Prompts We provide the prompt used to synthesize Self-Critic-CoT [53] via rejection sampling. When analyzing any query or task, please follow the structure below: 1. Draft Response: Generate an initial response. 2. 
Critical Comments: Analyze your draft response by considering: • Potential weaknesses or gaps | https://arxiv.org/abs/2505.22334v1 |
• Logical flaws or inconsistencies
• Missing perspectives or alternatives
• Areas for improvement
• Suggestions for a better version
• Steering toward the given answer
The critical comments should:
• Be specific and actionable
• Reference particular parts of the draft
• Suggest concrete improvements
• Consider different angles or approaches
• Guide towards a more comprehensive solution
Output Format:
• Draft Response: Your initial complete response to the instruction.
• Critical Comments: Your analysis of the draft response, highlighting areas for improvement and suggesting specific enhancements.
• Final Answer: Put your final answer within \boxed{}.

A.2 Rejection Sampling
To construct the distillation datasets from Qwen2.5-VL-7B and Qwen2.5-VL-32B, we employ a rejection sampling technique [46]. We try at most 24 times to obtain one correct response. We observe that this approach results in approximately the same amount of data (both 52K samples) for Qwen2.5-VL-7B and Qwen2.5-VL-32B.

A.3 Benchmarks
We provide some details about the benchmarks we use to evaluate the models' reasoning ability.
• MathVision [48] is a challenging benchmark containing 3040 mathematical problems with visual contexts from real-world math competitions across 12 grades. It covers 16 subjects over 5 difficulty levels, including specialized topics like Analytic Geometry, Combinatorial Geometry, and Topology.
• MathVista [31] is a comprehensive benchmark for evaluating mathematical reasoning in visual contexts. It contains 1000 questions featuring diverse problem types including geometry, charts, and tables.
• MathVerse [64] is an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. The test set contains 3940 multi-subject math problems with diagrams from publicly available sources, focusing on Plane Geometry and Solid Geometry.
• We-Math [37] meticulously collects and categorizes 1740 visual math problems in the test set, spanning 67 hierarchical knowledge concepts and 5 layers of knowledge granularity.
For all benchmarks, we prompt the models to place their final answers within a designated box format. We then employ Qwen2.5-32B-Instruct [56] to evaluate answer correctness by comparing the extracted responses with ground-truth answers, which often contain complex mathematical expressions. Note that our reported benchmark scores may differ from those in the original papers due to variations in evaluation protocols.

A.4 Training Details
We conduct post-training using Qwen2.5-VL-3B and Qwen2.5-VL-7B [2]. In particular, we employ the ms-swift [68] framework for the cold-start training (SFT), using 3 epochs with a learning rate of 1 × 10^-5. We adopt the EasyR1 [59] framework for GRPO training and mainly follow its default hyperparameters. We set the training episodes to 2 with a learning rate of 1 × 10^-6. During GRPO training, we adopt an accuracy reward, which assigns rewards solely based on the correctness of the generated response: a correct final answer receives a reward of +1, while an incorrect one receives a reward of 0.

B Compute Resources
We conduct our experiments using NVIDIA H800-80G and A800-40G GPUs. The experimental
time using 8 A800 GPUs for training Qwen2.5-VL-7B with GRPO is around 12 hours.

C Limitations
While our study provides comprehensive insights into the interplay between supervised fine-tuning (SFT) and reinforcement learning (RL) for multimodal reasoning, several limitations remain. First, we primarily evaluate models at the 3B and 7B parameter scales due to limited computation resources. The generalizability of our findings to larger models (e.g., 30B, 70B, or beyond) remains to be validated. Our approach adopts GRPO [40] as the RL optimization algorithm, which is currently the most popular choice. We do not systematically compare alternative RL methods, which we leave as future work. In addition, the identification of "aha moment" patterns is based on keyword matching and heuristic rules, which may introduce subjectivity or incompleteness.

D Case Studies
We provide several detailed case studies of our observations and curated CoT datasets as follows.

D.1 "Aha Moment" in MLLM
In the first case, the correct answer should be 130°, but the model fails to arrive at this solution. The associated reflection pattern exhibits hallucinations and proves ineffective. In contrast, in the second case, the model successfully answers the question, and the reflection pattern contributes meaningfully to the reasoning process.

The First Case.
Question: In the given parallelogram ABCD, if the sum of angle B and angle D measures 100°, what is the measure of angle A?
Model's response: To solve this problem, we need to use the properties of a parallelogram. The properties are: 1. Opposite sides are parallel. 2. Opposite angles are equal. 3. Consecutive angles are supplementary (they add up to 180°). Given: the sum of angle B and angle D is 100°. Since AD∥BC and AB∥CD, angles B and D are consecutive angles. Therefore, they are supplementary, meaning they add up to 180°. So, we can write: ∠B + ∠D = 180°. Given: ∠B + ∠D = 100°. This implies that the given sum (100°) is incorrect for the sum of angles B and D in a parallelogram, which should be 180°. However, let's proceed with the given information. Since angle B and angle D are consecutive angles, we can find the measure of angle A by using the fact that opposite angles are equal in a parallelogram. Therefore, angle A is equal to angle C. Now, assume: ∠B = x, ∠D = 180° − x. Given: x + (180° − x) = 100° ⇒ 180° = 100°. This is incorrect, so we need to re-evaluate the problem. The correct interpretation should be that the sum of angle B and angle D is 180°, not 100°. Therefore, the measure of angle A is: 140°.

The Second Case.
Question: In the given diagram, quadrilateral ABCD is inscribed in circle O. The diameter BE is part of circle O and AE is connected to it. If the measure of angle BCD is twice the measure of angle BAD, then what is the degree measure of angle DAE?
Model's response: To solve the problem, we need to use some properties of cyclic quadrilaterals and the relationships between angles in a circle. 1. Identify the given information: • Quadrilateral ABCD is inscribed in circle O. • Diameter BE of circle O means that BE is a straight line passing through the center of the circle. • AE is connected to
B and E. • ∠BCD = 2 × ∠BAD. 2. Use the property of cyclic quadrilaterals: In a cyclic quadrilateral, the sum of the opposite angles is 180°. Therefore, ∠BAD + ∠BCD = 180°. 3. Express ∠BCD in terms of ∠BAD: Given ∠BCD = 2 × ∠BAD, substitute this into the equation: ∠BAD + 2 × ∠BAD = 180°, 3 × ∠BAD = 180°, ∠BAD = 60°, ∠BCD = 2 × 60° = 120°. 4. Determine ∠ADE: Since BE is a diameter, ∠ADE is a right angle because the angle subtended by a diameter in a semicircle is 90°: ∠ADE = 90°. 5. Determine ∠DAE: ∠DAE is part of triangle ADE, where ∠ADE = 90°. The sum of the angles in triangle ADE is 180°: ∠DAE + ∠ADE + ∠AED = 180°, ∠DAE + 90° + ∠AED = 180°, ∠DAE + ∠AED = 90°. 6. Determine ∠AED: ∠AED is the same as ∠BCD because they subtend the same arc CD: ∠AED = 120°. 7. Solve for ∠DAE: Substitute ∠AED = 120° into the equation: ∠DAE + 120° = 90°, ∠DAE = −30°. Since ∠DAE cannot be negative, we must re-evaluate the problem. The correct interpretation is that ∠AED = 60° and not 120°, because ∠AED does not subtend the same arc as ∠BCD. Instead, the triangle should be analyzed differently, considering interior angles. Thus, the correct value is: 30°.

D.2 Distilled Data from Different Models
The main difference between the Qwen2.5-VL-7B distilled CoT and the Qwen2.5-VL-32B distilled CoT lies in their level of detail and organization. The Qwen2.5-VL-32B distilled CoT is more comprehensive and better structured, offering richer, more informative reasoning steps compared to the more concise and less detailed Qwen2.5-VL-7B distilled CoT.

The Distilled CoT Data from Qwen2.5-VL-7B and Qwen2.5-VL-32B.
Question: In the accompanying diagram, ABC is an equilateral triangle with a perimeter of 30. What is the length of altitude h of this triangle? Choices: 5√2, 5√3, 10√2, 10√3
Qwen2.5-VL-7B's response: To find the length of the altitude h of an equilateral triangle with a perimeter of 30, we can follow these steps: 1. Calculate the side length of the equilateral triangle: • The perimeter of an equilateral triangle is the sum of the lengths of its three sides. • Given the perimeter is 30, we can find the length of one side by dividing the perimeter by 3: side length = 30/3 = 10. 2. Use the properties of an equilateral triangle: • In an equilateral triangle, the altitude also bisects the base, creating two 30-60-90 right triangles. • In a 30-60-90 triangle, the ratio of the sides opposite the 30°, 60°, and 90° angles is 1 : √3 : 2. 3. Determine the length of the altitude h: • The altitude h is the side opposite the 60° angle in the right triangle. • The side opposite the 30° angle (which is half the base) is 5 (since the base is 10 and it is bisected). • Using the ratio 1 : √3 : 2, the altitude h is: h = 5√3. Therefore, the length of the altitude h is: 5√3.
Qwen2.5-VL-32B's response: Step-by-Step Solution: 1. Understand the Problem: • We are given an equilateral triangle △ABC with a perimeter
of 30. • We need to find the length of the altitude h of this triangle. 2. Find the Side Length of the Triangle: • An equilateral triangle has all three sides of equal length. • Let the side length of the triangle be s. • The perimeter of the triangle is given by: Perimeter = s + s + s = 3s. • We are given that the perimeter is 30: 3s = 30. • Solving for s: s = 30/3 = 10. • Therefore, the side length of the triangle is s = 10. 3. Use the Formula for the Altitude of an Equilateral Triangle: • The altitude h of an equilateral triangle with side length s is given by the formula: h = (√3/2)s. • Substitute s = 10 into the formula: h = (√3/2) · 10. • Simplify the expression: h = 5√3. 4. Verify the Result: • The formula for the altitude of an equilateral triangle is derived from the Pythagorean theorem. • In an equilateral triangle, the altitude bisects the base, creating two 30-60-90 right triangles. • In a 30-60-90 triangle, the sides are in the ratio 1 : √3 : 2. • The altitude corresponds to the side opposite the 60-degree angle, which is (√3/2) times the hypotenuse (the side of the equilateral triangle). • This confirms that the formula h = (√3/2)s is correct. 5. Select the Correct Answer: • The length of the altitude h is 5√3. 5√3
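To make the post-training recipe above more tangible, the following sketch shows the accuracy reward described in Appendix A.4 and the group-normalized advantage used by GRPO (the equation given just before Section 5). It is a simplified illustration rather than the EasyR1 implementation: the boxed-answer parser handles only flat \boxed{...} contents, whereas the paper uses Qwen2.5-32B-Instruct to judge complex expressions.

```python
# Minimal sketch (not the EasyR1 implementation) of the accuracy reward from Appendix A.4
# and the group-normalized GRPO advantage A_i = (r_i - mean(r)) / std(r).
import re
from typing import List

def extract_boxed_answer(response: str) -> str:
    """Return the content of the last \\boxed{...} in a response (simplified parser)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1].strip() if matches else ""

def accuracy_reward(response: str, ground_truth: str) -> float:
    """+1 for a correct boxed final answer, 0 otherwise (exact string match here)."""
    return 1.0 if extract_boxed_answer(response) == ground_truth.strip() else 0.0

def grpo_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """Group-normalized advantage computed within one sampled group of responses."""
    g = len(rewards)
    mean = sum(rewards) / g
    std = (sum((r - mean) ** 2 for r in rewards) / g) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Toy group of four sampled responses to the parallelogram question (correct answer: 130).
responses = [
    "... so angle A = \\boxed{130} degrees",
    "... therefore the answer is \\boxed{140}",
    "... the answer is \\boxed{100}",
    "no boxed answer given",
]
rewards = [accuracy_reward(r, "130") for r in responses]
print(rewards)                   # [1.0, 0.0, 0.0, 0.0]
print(grpo_advantages(rewards))  # only the correct sample receives a positive advantage
```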
arXiv:2505.22338v1 [cs.CL] 28 May 2025

TEXT2GRAD: REINFORCEMENT LEARNING FROM NATURAL LANGUAGE FEEDBACK
Hanyang Wang1,*, Lu Wang2, Chaoyun Zhang2, Tianjun Mao3, Si Qin2, Qingwei Lin2, Saravan Rajmohan2, Dongmei Zhang2
1University of Chicago, 2Microsoft, 3Fudan University
*Work done during an internship at Microsoft
Correspondence to: Lu Wang, wlu@microsoft.com

ABSTRACT
Traditional RLHF optimizes language models with coarse, scalar rewards that mask the fine-grained reasons behind success or failure, leading to slow and opaque learning. Recent work augments RL with textual critiques through prompting or reflection, improving interpretability but leaving model parameters untouched. We introduce TEXT2GRAD, a reinforcement-learning paradigm that turns free-form textual feedback into span-level gradients. Given human (or programmatic) critiques, TEXT2GRAD aligns each feedback phrase with the relevant token spans, converts these alignments into differentiable reward signals, and performs gradient updates that directly refine the offending portions of the model's policy. This yields precise, feedback-conditioned adjustments instead of global nudges. TEXT2GRAD is realized through three components: (1) a high-quality feedback-annotation pipeline that pairs critiques with token spans; (2) a fine-grained reward model that predicts span-level rewards on answers while generating explanatory critiques; and (3) a span-level policy optimizer that back-propagates natural-language gradients. Across summarization, code generation, and question answering, TEXT2GRAD consistently surpasses scalar-reward RL and prompt-only baselines, providing both higher task metrics and richer interpretability. Our results demonstrate that natural-language feedback, when converted to gradients, is a powerful signal for fine-grained policy optimization. The code for our method is available at https://github.com/microsoft/Text2Grad .

1 Introduction
Figure 1: Comparison of PPO and TEXT2GRAD. (The figure contrasts traditional PPO, which updates the policy LM from a single scalar reward, with TEXT2GRAD, which updates the policy LM from span-level rewards derived from textual feedback on the sampled summary.)

Free-form natural language feedback is abundant in real-world applications [1]. Users leave suggestions in reviews, developers comment on code pull requests, and customers critique responses from virtual assistants. Unlike scalar ratings or preference scores, this form of feedback is inherently rich and expressive. It not only identifies which parts of an output are correct or problematic but also explains why, providing detailed signals that can guide model improvement. Despite its ubiquity and usefulness, most existing learning paradigms fail to fully utilize such feedback. Reinforcement learning from human feedback (RLHF) has become a dominant method for aligning large language models (LLMs) with human preferences [2, 3, 4, 5, 6].
RLHF typically converts feedback into scalar values derived from preference comparisons, and updates the model using policy optimiza- tion algorithms such as Proximal Policy Optimization (PPO) [ 7] and Direct Preference Optimization (DPO) [ 5]. While | https://arxiv.org/abs/2505.22338v1 |
these methods have led to impressive improvements in model helpfulness and safety, they also introduce limitations. By reducing rich, contextual critiques to a single scalar reward, RLHF discards token-level information about what was right or wrong, and where. This makes credit assignment imprecise, slows convergence, and limits interpretability [8, 9, 10]. An alternative line of research maintains feedback in its natural language form. Methods such as ReAct [ 11] and Reflexion [ 12] prompt the model to reflect on its outputs [ 13], generate critiques, and use them to self-correct in subsequent steps [ 14]. These approaches are inspired by how humans operate in open-ended tasks through reasoning, explanation, and dialogue, rather than numeric reward [ 15,16,17]. Natural language feedback in this context improves transparency and sometimes leads to better task performance. However, these methods do not update the model parameters, meaning the model does not internalize the feedback it receives. As a result, the same mistakes must be corrected repeatedly, and feedback remains ephemeral [18, 19, 20]. In this paper, we propose TEXT2GRAD, a novel framework that transforms free form textual feedback into actionable gradients for policy optimization. As shown in Figure 1, unlike prior work that either compresses feedback into scalar rewards or applies textual critiques only at inference time, TEXT2GRAD brings feedback into the training loop. Given a human or programmatic critique, our method aligns feedback clauses with relevant output token spans, converts these alignments into span-level reward signals, and computes a natural language gradient. This gradient is then used to perform policy updates that precisely adjust the parts of the model responsible for the error. The result is more targeted, efficient, and interpretable learning. TEXT2GRAD is built on a full pipeline for learning from text. First, we construct a high quality annotation pipeline that uses GPT-4o to label model outputs with both scalar scores and span-level critiques, following recent work in automated feedback generation [ 21,22]. Second, we train a dual-headed reward model inspired by generative reward modeling [23] that jointly predicts span-level reward distributions and generates corresponding textual rationales. Third, we apply span-level policy optimization using a variant of Proximal Policy Optimization that integrates these fine-grained reward signals, drawing on advances in token-aware credit assignment [24] and text-based gradients [25]. We evaluate TEXT2GRAD on summarization [ 26], code generation [ 27], and open-domain question answering [ 28]. In all tasks, our method outperforms scalar-reward RLHF and prompt-based reflection baselines, showing stronger alignment with human preferences, higher sample efficiency, and more interpretable learning dynamics. These results suggest that natural language feedback can be more than an interpretability tool—it can be converted into principled gradients to train more capable and aligned models. Overall, this paper makes the following contributions: •We introduce the problem of learning from natural language feedback via gradient-based optimization and present TEXT2GRAD as the first complete framework to address it. •We develop a scalable annotation pipeline and a dual-headed reward model that together produce span-level rewards and explanatory critiques, enabling interpretable fine-grained supervision. 
• We demonstrate that TEXT2GRAD outperforms strong scalar-reward and prompt-based baselines across summarization,
code generation, and question answering benchmarks. TEXT2GRAD demonstrates that natural language feedback, when properly aligned and grounded, can serve as a direct training signal rather than just auxiliary guidance, opening a new path for building language models that learn from human-like supervision.

2 Related Work
RLHF with scalar rewards. Reinforcement learning from human feedback replaces supervised labels with a reward model trained on pairwise human preferences [29, 3]. The reward is a single scalar, and policy optimization methods such as Proximal Policy Optimization and Direct Preference Optimization update the language model toward higher scores [7, 5]. This recipe has advanced instruction following, safety, and summarization; a 1.3B InstructGPT model aligned in this way outperformed 175B GPT-3 on adherence and toxicity [3, 4, 2]. Subsequent work studies reward hacking and data noise [30, 31, 32]. Despite those successes, scalar rewards collapse multidimensional critiques into one number, obscure where an error occurs, and demand careful regularization such as Kullback-Leibler penalties to remain stable [10, 9].

Natural language feedback at inference time. A complementary line of research keeps feedback in natural language but applies it only while the model is running. ReAct interleaves chain-of-thought reasoning with tool use to refine answers in question answering and text games [11]. Reflexion stores self-generated critiques between attempts and improves coding and decision tasks [12]. Language Feedback Training incorporates human-written refinements during supervised fine-tuning [3]. Surveys categorize the many emerging feedback formats [18, 22]. These methods lift interpretability and sometimes quality, yet the model weights stay frozen, so lessons are not retained and error corrections must be rediscovered each time [19, 20]. TEXT2GRAD draws inspiration from both threads yet differs in a crucial way: it trains a reward model that generates interpretable textual critiques and uniquely leverages natural language gradients in token-level PPO to drive fast, interpretable policy improvements.

3 Method
This section details TEXT2GRAD, a novel framework for Reinforcement Learning from Natural Language Feedback. We begin by motivating and defining the concept of the natural language gradient, then outline the system architecture. Next, we address two core challenges, constructing a reward model that delivers actionable feedback and translating this feedback into precise policy updates, to demonstrate how natural language feedback can be directly integrated into gradient-based optimization.

3.1 Natural Language Gradient: Definition and Motivation
Traditional policy gradient methods optimize an expected scalar return $J(\theta) = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\left[R(y)\right]$, where $R(y)$ is a sequence-level reward. Such scalar signals obscure which tokens contribute to success or failure, limiting diagnostic insight. To address this, we introduce the Natural Language Gradient (NL-Gradient), which transforms textual critiques into token-level gradient signals.

Definition 1 (Natural Language Gradient). Given a generated sequence $y = (y_1, \ldots, y_T)$ and its textual critique c, let $\{\delta_t\}_{t=1}^{T}$ denote pseudo-rewards derived from aligning critique content to tokens. The NL-Gradient is defined by
$$\nabla_{\mathrm{NL}}(c \to y) = \sum_{t=1}^{T} \delta_t \, \nabla_\theta \log \pi_\theta(y_t \mid x, y_{<t}).$$
Here, $\delta_t$ encodes the critique's local intensity on token $y_t$, enabling: (1) Fine-Grained Guidance: pseudo-rewards $\delta_t$ highlight specific tokens needing improvement.
(2) Interpretability: Each update step is grounded in human-readable feedback. (3) Transferability: The model learns a mapping | https://arxiv.org/abs/2505.22338v1 |
from text to gradient signals, facilitating generalization across tasks.

3.2 Overview of TEXT2GRAD
The central objective of TEXT2GRAD is to construct a Natural Language Gradient that directly drives policy updates. This requires solving two key challenges: (1) converting free-form textual critiques, which humans can easily highlight, into fine-grained numerical feedback, and (2) leveraging these numerical signals to compute token-level advantages and update the policy. To address these challenges, as shown in Figure 2, TEXT2GRAD comprises three steps: Dual-Feedback Reward Annotation, which uses GPT-4o to produce high-quality paired critiques and scores; Reward Model Training, which learns to map a query-response pair into text critiques and calibrated numerical rewards; and NL-Gradient Policy Optimization, which leverages per-token advantages and applies NL-Gradient PPO updates. Together, these phases realize end-to-end Natural Language Gradient descent for LLMs.

3.3 Reward Labeling
Natural language gradient optimization requires dense, interpretable feedback that can be mapped to token-level learning signals. To support this, we develop a general reward annotation framework that produces both free-form critiques and structured span-level numerical feedback. This dual-feedback design is applicable across diverse tasks and supports the construction of the token-level pseudo-rewards required for fine-grained policy optimization.

Dual-Feedback Annotation. Given a prompt x and a generated response $y = (y_1, \ldots, y_T)$, we aim to annotate each sample with a natural language critique c, describing strengths or weaknesses of the response in free text, and a structured span-level reward map A(y), where each span is assigned a label from {positive, neutral, negative}.

Figure 2: An overview of TEXT2GRAD.

In practice, we prompt a strong LLM (e.g., GPT-4o) to output both feedback modalities. For example, in a summarization task, the model may generate a textual critique such as: "The summary omits key information about the character's concern that the manuscript may be rejected." followed by a structured JSON object assigning sentiment values to spans in the summary:
{ "Good spans": ["a first time author"], "Poor spans": ["was hopeful about the submission"] }
Span-level feedback is particularly useful for aligning language-level reasoning with output structure. We focus annotation efforts on non-neutral spans, as these provide the most useful learning signal while reducing labeling overhead.

Reasoning-Augmented Annotation. For datasets lacking human-written textual feedback, we employ a multi-step prompting strategy to elicit structured feedback from the LLM. Given a response y, the model is guided to: (1) reason about the response quality step by step; (2) output a critique c based on this reasoning; (3) produce a span-level JSON map A(y) associating specific subphrases with sentiment labels. Formally, the reward labeler outputs $R_{\mathrm{LLM}}(x, y) = (c, A(y))$, where $A(y): s_k \mapsto \ell_k$ maps span $s_k$ to label $\ell_k \in \{\text{positive}, \text{neutral}, \text{negative}\}$. This dual annotation process allows the model to derive semantically grounded, structured supervision even in the absence of explicit references. The full prompt used for reasoning-augmented annotation is provided in Appendix C.
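To illustrate the dual-feedback format above, the following is a minimal sketch of how a critique plus a span-label JSON object could be parsed into the (c, A(y)) structure. The field names follow the "Good spans" / "Poor spans" example shown earlier; the parsing code itself is illustrative rather than the authors' pipeline.

```python
# Illustrative parser (not the authors' pipeline) for the dual-feedback format:
# a free-form critique plus a JSON object with "Good spans" / "Poor spans".
import json
from typing import Dict, Tuple

def parse_dual_feedback(critique: str, span_json: str) -> Tuple[str, Dict[str, str]]:
    """Return (c, A(y)): the critique and a span -> label map.
    Spans not listed are treated as neutral downstream."""
    raw = json.loads(span_json)
    span_map: Dict[str, str] = {}
    for span in raw.get("Good spans", []):
        span_map[span] = "positive"
    for span in raw.get("Poor spans", []):
        span_map[span] = "negative"
    return critique, span_map

critique = ("The summary omits key information about the character's concern "
            "that the manuscript may be rejected.")
span_json = '{"Good spans": ["a first time author"], "Poor spans": ["was hopeful about the submission"]}'
c, A = parse_dual_feedback(critique, span_json)
print(A)  # {'a first time author': 'positive', 'was hopeful about the submission': 'negative'}
```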
Token-Level Reward Mapping. Although feedback is annotated at the span level, policy optimization requires token-level rewards. We convert each labeled span $s_k$ into token-aligned supervision by assigning a uniform pseudo-reward $\delta_t \in \{-1, 0, +1\}$ to each token:
$$\delta_t = \begin{cases} +1, & \text{if } t \in s_k \text{ and } A(y)[s_k] = \text{positive}, \\ -1, & \text{if } t \in s_k \text{ and } A(y)[s_k] = \text{negative}, \\ 0, & \text{otherwise}. \end{cases}$$
To reduce labeling cost while retaining informativeness, we adopt a class-prioritized strategy: only positive and negative spans are explicitly labeled, while neutral spans are left unannotated and default to $\delta_t = 0$. This yields a token-level reward vector $\delta = (\delta_1, \ldots, \delta_T)$, which supports token-wise advantage estimation and construction of the NL-Gradient (see Section 3.5). This component enables scalable, interpretable, and task-general supervision from natural language feedback.

3.4 Reward Model Learning
To support natural language gradient optimization, we train a reward model $R_\phi$ that jointly generates free-form critiques and structured token-level feedback in a unified autoregressive format. Instead of predicting scalar scores, we frame reward modeling as a text generation task, producing both natural language evaluations and span-level labels as output sequences.

Model Objective. Given a prompt x and model response $y = (y_1, \ldots, y_T)$, the reward model outputs a sequence $z = [c; A(y)]$, where c is a critique and A(y) is a JSON-formatted map labeling spans in y as positive, negative, or neutral. We model this as conditional language generation, $p_\phi(z \mid x, y) = \prod_{t=1}^{|z|} p_\phi(z_t \mid z_{<t}, x, y)$, and optimize via maximum likelihood with a cross-entropy loss:
$$\mathcal{L}_R(\phi) = -\mathbb{E}_{(x, y, z) \in \mathcal{D}_R}\left[\log p_\phi(z \mid x, y)\right].$$
This formulation provides three advantages: (1) flexibility across tasks via textual supervision; (2) fine-grained gradient flow through tokenized outputs; and (3) interpretable feedback combining explanation and token-level reward in one model. Each training instance is serialized as [x; y; z], and the model is fine-tuned using teacher forcing under a standard causal LM objective. This unified, text-based approach simplifies the pipeline while enabling both structured and natural language feedback to drive token-level learning in TEXT2GRAD.

3.5 NL-Gradient Policy Optimization
Traditional RL methods rely on sequence-level scalar rewards, which obscure token-level credit assignment and limit precision. This is especially problematic in tasks like summarization and code generation, where only specific parts of the output may be incorrect. To address this, TEXT2GRAD uses dense token-level pseudo-rewards $\{\delta_t\}$ derived from structured textual feedback to enable fine-grained advantage estimation:
$$A_t = \sum_{k=t}^{T} \gamma^{k-t} r^{\mathrm{total},A}_k - V_\psi(x, y_{<t}), \quad \text{where } r^{\mathrm{total},A}_k = \delta_k + r^{\mathrm{KL}}_k.$$
Given a response y, we query the trained reward model $R_\phi$ to generate a natural language critique and span-level reward map, which is parsed into token-wise rewards $\{\delta_t\}_{t=1}^{T}$. These are used to construct the Natural Language Gradient (NL-Gradient), $g_{\mathrm{NL}} = \sum_{t=1}^{T} \delta_t \cdot \nabla_\theta \log \pi_\theta(y_t \mid x, y_{<t})$, providing localized learning signals aligned with feedback. We then compute token-level advantages using GAE and integrate them into the PPO objective:
$$\mathcal{L}_{\mathrm{PPO}}(\theta) = \mathbb{E}_t\left[\min\left(\rho_t A_t,\ \operatorname{clip}(\rho_t, 1-\epsilon, 1+\epsilon) A_t\right)\right] - \beta\, H(\pi_\theta(\cdot \mid x, y_{<t})),$$
where $\rho_t$ is the importance ratio, H is the entropy bonus, and $\epsilon$ is the clipping threshold that stabilizes updates by constraining large policy shifts. By transforming natural language feedback into token-level gradients, TEXT2GRAD enables interpretable, precise, and efficient policy optimization.
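As a concrete illustration of the span-to-token mapping and the δ-weighted update above, here is a minimal PyTorch-style sketch. It is a simplified rendering under assumptions of our own: spans are located by exact substring match over character offsets, whitespace splitting stands in for a real tokenizer, and the surrogate loss omits the value baseline, KL term, and PPO clipping; it is not the authors' implementation.

```python
# Minimal sketch of (i) mapping labeled spans to per-token pseudo-rewards delta_t and
# (ii) a delta-weighted log-probability surrogate whose gradient matches the NL-Gradient.
# Simplified illustration only: exact substring matching, whitespace "tokens", no baseline/KL/clipping.
import torch
from typing import Dict, List, Tuple

def spans_to_token_rewards(text: str,
                           span_labels: Dict[str, str],
                           token_offsets: List[Tuple[int, int]]) -> torch.Tensor:
    """Return a tensor of delta_t in {-1, 0, +1}, one entry per token."""
    delta = torch.zeros(len(token_offsets))
    for span, label in span_labels.items():
        start = text.find(span)
        if start < 0:
            continue                        # span not found; leave those tokens neutral
        end = start + len(span)
        value = 1.0 if label == "positive" else -1.0
        for i, (ts, te) in enumerate(token_offsets):
            if ts < end and te > start:     # token overlaps the labeled span
                delta[i] = value
    return delta

def nl_gradient_loss(token_logprobs: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Surrogate whose gradient is -sum_t delta_t * grad log pi(y_t | x, y_<t)."""
    return -(delta * token_logprobs).sum()

# Toy example with whitespace "tokens" and placeholder log-probabilities.
text = "I'm a first time author and was hopeful about the submission"
span_labels = {"a first time author": "positive", "was hopeful about the submission": "negative"}
offsets, pos = [], 0
for tok in text.split():
    start = text.index(tok, pos)
    offsets.append((start, start + len(tok)))
    pos = start + len(tok)
logprobs = torch.full((len(offsets),), -1.0, requires_grad=True)
delta = spans_to_token_rewards(text, span_labels, offsets)
loss = nl_gradient_loss(logprobs, delta)
loss.backward()                             # non-zero gradients only on labeled tokens
print(delta.tolist())
```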
3.6 Theoretical Analysis: Discriminative Power of Token-Level Rewards
Our theoretical analysis shows that token-level rewards derived from textual feedback lead to sharper and more discriminative advantage estimates than end-of-sequence rewards. Under our formulation, the advantage at timestep t is computed as $A^{A}_t = \sum_{k=t}^{T} (\gamma\lambda)^{k-t} \delta_k$, where $\delta_k$ are pseudo-rewards aligned to tokens via natural language critiques. In contrast, end-of-sequence rewards yield $A^{B}_t = (\gamma\lambda)^{T-t} \sum_{k=t}^{T} \delta_k$, discounting all feedback uniformly. The difference between these
settings is given by $\Delta A^{A}_t - \Delta A^{B}_t = \sum_{k=t}^{T-1} (\gamma\lambda)^{k-t} \Delta\delta_k$, which amplifies early feedback differences. For typical settings where $\gamma\lambda \approx 0.95$, a token-level reward at step $k = T - 20$ is weighted nearly $0.95^{-20} \approx 2.8$ times more than it would be under end-of-sequence supervision, showing that natural-language-guided token-level feedback is nearly 3× more effective for early credit assignment. This yields more informative gradients and improves the policy's ability to localize and correct errors in long-form outputs. The full derivation and comparison are provided in Appendix A.

Table 1: Reward model performance comparison.
Dataset, Positive Token (Prec., Rec.), Negative Token (Prec., Rec.), Win-Rate (W:T:L), Human-anno Acc.
SLF5K 0.58 0.63 0.58 0.43 62:9:29 86%
SLF5K (w/o CoT) 0.63 0.46 0.53 0.40 – –
UltraFeedback 0.66 0.43 0.46 0.22 53:9:38 82%
UltraFeedback (w/o CoT) 0.61 0.59 0.40 0.35 – –
KodCode 0.64 0.68 0.84 0.71 72:7:21 94%
KodCode (w/o CoT) 0.62 0.61 0.75 0.78 – –

4 Experiments
We evaluate TEXT2GRAD on summarization, code generation, and question answering to test its ability to transform natural language feedback into fine-grained policy updates. Our experiments demonstrate that TEXT2GRAD outperforms scalar-reward baselines such as PPO, with improved sample efficiency, faster convergence, and better accuracy.

4.1 Datasets Overview
SLF5K [26]: A summarization dataset with 5,000 Reddit posts, human-written summaries, and feedback. We use all 5,000 samples for SFT, reward modeling, and policy training, with 500 held out for evaluation.
KodCode [28]: A code generation benchmark with 447K question-solution-test triplets across 12 domains. We sample 9K GPT-4o completions (with ChatGPT-3.5 negatives) to train the reward model, and use KodCode-Light-RL-10k for policy optimization.
UltraFeedback [27]: A QA dataset with 64K prompts and 256K completions from 17 models. Following Huang et al. [33], we split the data into 30% SFT, 50% reward modeling, and 20% RL.

4.2 Reward Model Evaluation
A central component of TEXT2GRAD is the generative reward model, trained to emulate the evaluative reasoning of advanced LLMs (i.e., GPT-4o) by producing structured, token-level feedback.

Experimental Setup. We fine-tune Llama3.1-8B-Instruct [34] to serve as the reward model across all tasks. The model is trained to output both a natural language critique and a span-level reward map, using supervision generated by GPT-4o. To ensure high-quality labels, we use a chain-of-thought (CoT) prompting strategy [15, 35] in which GPT-4o first reasons through the correctness of a model response, then articulates strengths and weaknesses, and finally highlights token spans as positive or negative. This structured annotation improves feedback precision and interpretability, enabling richer training signals than scalar-only supervision.

Main Results. Table 1 presents the precision and recall for token-level feedback identification, as well as the win rate of span-level reward signals in pairwise comparisons, with and without CoT reasoning, alongside human-annotated alignment accuracy. Across all datasets, the CoT-based reward model consistently outperforms the ablated variant without textual reasoning. On SLF5K, our model achieved a 62% win rate over the non-CoT variant and aligned with human annotations at 86% accuracy. Although precision for positive spans slightly decreased (58% vs. 63%), recall improved significantly (63% vs. 46%), indicating better coverage and less overfitting to obvious phrases.
Similar trends were observed on UltraFeedback and KodCode, with | https://arxiv.org/abs/2505.22338v1 |
particularly strong span-level grounding in the code domain (KodCode win rate: 72%). These results confirm that natural language feedback, when structured through reasoning, enables more accurate and discriminative reward modeling, forming the basis for token-level learning in TEXT2GRAD. The pairwise-comparison prompt is listed in Appendix E. In addition, we evaluate the reliability of our span-level reward predictions by comparing them with human-annotated results across three datasets, using 50 sampled cases from each. Our method achieves an alignment rate of over 82% with human judgments, indicating that the span-level rewards are both reliable and effective for identifying good or bad spans in model outputs, and are well-suited for guiding policy updates. Additional reward model evaluation results are listed in Appendix C.

4.3 SLF5K [26]: Summarization

We evaluate TEXT2GRAD on the SLF5K dataset [26], which involves generating high-quality summaries of Reddit posts that closely align with human-written references. This task provides natural language feedback and span-level annotations, making it well-suited for evaluating the effectiveness of token-level reward modeling. Additional hyperparameters are provided in Appendix D.

Table 2: Performance on SLF5K dataset. All models use Llama-3.1-8B Instruct. SFT = Supervised Fine-Tuning; SFT+Reflection adds reward-guided reflection. TEXT2GRAD is our approach.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | BERTScore F1 |
|---|---|---|---|---|---|
| ChatGPT-3.5 | 0.155 | 0.059 | 0.108 | 0.020 | 0.844 |
| ChatGPT-4o | 0.296 | 0.066 | 0.203 | 0.030 | 0.886 |
| 8B-SFT | 0.285 | 0.078 | 0.195 | 0.032 | 0.875 |
| 8B-SFT+Reflection | 0.329 | 0.087 | 0.225 | 0.041 | 0.888 |
| 8B-PPO | 0.365 | 0.132 | 0.262 | 0.075 | 0.893 |
| TEXT2GRAD-8B | 0.400 | 0.155 | 0.291 | 0.094 | 0.902 |

Experimental Setup We use Llama3.1-8B-Instruct [34] as the base policy model. It is first fine-tuned using supervised learning on SLF5K to control output length and content coverage, and subsequently optimized using our NL-Gradient method. We compare TEXT2GRAD against several baselines: (1) PPO [7] trained with scalar rewards, (2) supervised fine-tuning (SFT), and (3) SFT augmented with reward-guided reflection strategies [12, 36]. We also include GPT-3.5 and GPT-4o outputs as reference points. Evaluation metrics include ROUGE [37], BLEU [38], BERTScore [39], and LLM-as-a-Judge [40]; a sketch of how the automatic metrics can be computed is given at the end of this subsection.

Figure 3: Combined figure for SLF5K dataset analysis. (a) Reward curve for the SLF5K dataset, comparing smoothed (EMA, span=250) PPO sequence-level reward and TEXT2GRAD token-level reward over training samples, with the early-stop point marked. (b) GPT-4 judge comparison of different methods on the SLF5K dataset (win:tie:loss, in %): PPO vs. SFT 46.0:11.8:42.2, TEXT2GRAD vs. SFT 59.2:8.2:32.6, TEXT2GRAD vs. PPO 49.6:12.8:37.6.

Main Results TEXT2GRAD consistently outperforms all baselines, as shown in Table 2. It achieves the highest scores across all automatic metrics, including a substantial +25.3% BLEU improvement over PPO and a +3.3 ROUGE-L gain over the strongest reflection baseline. These results demonstrate the benefits of fine-grained token-level reward modeling enabled by structured natural language feedback. Human preference evaluation using GPT-4-as-a-Judge (Figure 3b) shows that TEXT2GRAD achieves a 12% win-rate improvement over PPO.
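For reference, the automatic metrics reported in Table 2 can be reproduced with standard open-source packages. The snippet below is a minimal sketch assuming the rouge-score, sacrebleu, and bert-score packages; it is not the exact evaluation script used for the paper, and the example strings are illustrative.

```python
# pip install rouge-score sacrebleu bert-score
from rouge_score import rouge_scorer
import sacrebleu
from bert_score import score as bertscore

reference = "TL;DR: the poster wants advice on splitting rent fairly with roommates."
candidate = "The poster asks how to split rent fairly between roommates."

# ROUGE-1/2/L F-measures, as reported in Table 2.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = {name: s.fmeasure for name, s in scorer.score(reference, candidate).items()}

# Sentence-level BLEU; sacrebleu returns a 0-100 score, so divide by 100
# to match the scale used in the table.
bleu = sacrebleu.sentence_bleu(candidate, [reference]).score / 100.0

# BERTScore F1 (returns tensors of per-example precision, recall, F1).
_, _, f1 = bertscore([candidate], [reference], lang="en")

print(rouge, round(bleu, 3), round(float(f1[0]), 3))
```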
This confirms that its summaries are not only better aligned with references but also more coherent and informative from a qualitative perspective. Additionally, the reward curve in Figure 3a shows that TEXT2GRAD converges significantly faster, reaching optimal performance | https://arxiv.org/abs/2505.22338v1 |
within 75% of training steps compared to PPO's 97%. These results highlight the efficiency of token-level feedback in accelerating training and guiding interpretable policy updates.

4.4 KodCode [28]: Code Generation

We evaluate TEXT2GRAD on the KodCode dataset [28], which focuses on generating correct and efficient Python solutions across 12 diverse problem domains. This task highlights the importance of span-level feedback in structured text generation, where subtle errors can invalidate the entire output.

Experimental Setup We use Llama3.1-8B-Instruct [34] as the policy model. To train the reward model, we sample 10,000 prompt–completion pairs from the supervised dataset. GPT-4o solutions serve as high-quality references, while GPT-3.5 completions provide challenging negatives to construct pairwise training examples. Annotations include textual critiques and span-level labels derived from them. We compare TEXT2GRAD against standard PPO [7] and several strong baselines, including pre-trained models (Llama-3.2-Instruct, CodeLlama-34B, and Gemini Pro). Evaluation is conducted using pass@1 accuracy on HumanEval [41], MBPP [42], and their enhanced counterparts (HumanEval+ and MBPP+ [43]), which include additional test cases for robustness evaluation.

Table 3: Performance comparison on code benchmarks. Results show pass@1 scores (%). HumanEval+ and MBPP+ include additional test cases beyond the base benchmarks.

| Model | Size | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
|---|---|---|---|---|---|---|
| Pre-Trained Models | | | | | | |
| Llama-3.2-Instruct | 3B | 48.2 | 43.9 | 61.9 | 51.3 | 51.3 |
| Llama-3.1-Instruct | 8B | 64.0 | 58.5 | 66.7 | 55.0 | 61.1 |
| CodeLlama | 34B | 51.8 | 43.9 | 69.3 | 56.3 | 55.3 |
| Gemini Pro | – | 63.4 | 55.5 | 72.9 | 57.9 | 62.4 |
| Fine-tuned Models (Llama3.1-8B-Instruct) | | | | | | |
| PPO | 8B | 64.6 | 61.0 | 68.5 | 55.8 | 62.5 |
| TEXT2GRAD (w/o CoT) | 8B | 63.8 | 57.3 | 62.2 | 53.4 | 59.2 |
| TEXT2GRAD | 8B | 67.7 | 61.6 | 73.3 | 61.6 | 66.1 |

Main Results Table 3 shows that TEXT2GRAD outperforms all fine-tuned and pre-trained baselines across both standard and enhanced benchmarks. Compared to PPO, TEXT2GRAD achieves a +5.8 point improvement on MBPP+ and a +3.6 point gain on HumanEval+, demonstrating its ability to generalize to challenging and previously unseen test cases. The ablated version of TEXT2GRAD, without natural language feedback, underperforms by 6.9 points on average, highlighting the critical role of structured critiques in token-level optimization. These results confirm that TEXT2GRAD effectively leverages textual feedback to correct fine-grained coding errors, producing semantically robust programs that generalize across both clean and adversarial evaluation settings.

4.5 UltraFeedback [27]: Open-Domain Question Answering

To evaluate TEXT2GRAD on general-purpose alignment and reasoning, we test it on the UltraFeedback dataset [27], which comprises diverse prompts spanning multiple domains and difficulty levels. This task assesses how well models trained with natural language feedback generalize to open-ended questions, factual correctness, and multi-turn coherence.

Experimental Setup We use Llama3.1-8B-Instruct as the policy backbone. For evaluation, we benchmark against standard PPO and report performance on three widely adopted QA alignment metrics: (1) AlpacaEval 2.0 [44], which measures instruction-following via GPT-4-based pairwise comparisons, (2) ARC-Challenge [45], which tests science and commonsense reasoning, and (3) MT-Bench [46], which evaluates multi-turn conversation quality.

Main Results As shown in Table 4, TEXT2GRAD consistently improves over both the base SFT model and PPO across all metrics.
On AlpacaEval 2.0, TEXT2GRAD achieves a 12.1-point gain over the base model and a 2.3-point improvement over PPO, indicating stronger instruction alignment
and preference satisfaction. On ARC-Challenge, TEXT2GRAD shows improved reasoning (+3.9 vs. base, +1.7 vs. PPO), while MT-Bench results highlight better multi-turn dialogue performance.

Table 4: Performance comparison across benchmarks for Llama3-8B-Instruct trained with UltraFeedback.

| Model | AlpacaEval 2.0 | ARC-C | MT-Bench |
|---|---|---|---|
| GPT-4 | 30.2 | 96.4 | 7.93 |
| GPT-3.5 | 22.7 | 85.2 | 6.91 |
| Llama3-8B | 22.6 | 80.5 | 6.87 |
| Llama3-8B PPO | 32.4 | 82.7 | 7.43 |
| TEXT2GRAD (w/o CoT) | 28.6 | 83.1 | 7.49 |
| TEXT2GRAD | 34.7 | 84.4 | 7.58 |

Our ablation study clarifies the significance of structured feedback by examining the impact of excluding CoT reasoning during the annotation phase, where feedback is provided directly as span-level scores without prior natural language explanations. Results show that training without CoT reasoning leads to a consistent decline in performance across all metrics, with a notable drop on AlpacaEval (-6.1 points). This underscores the critical role of natural language explanations in generating effective token-level supervision. These results reinforce that NL-Gradient optimization, guided by explicit feedback, enhances both alignment and reasoning.

4.6 Case Study

Figure 4: A case study from the code generation scenario comparing PPO vs. TEXT2GRAD. The policy model's buggy match_parens patch receives only a single scalar score (-2) from the scalar reward model, whereas the natural language reward model pinpoints the faulty span and explains the missing cross-concatenation checks.

Figure 4 shows how TEXT2GRAD corrects a faulty implementation of match_parens while standard PPO fails. The policy LM first produces a buggy patch. A scalar reward model gives PPO a single negative score (-2), leaving the optimizer without guidance on where the error resides. After several updates, it still ignores the two cross-concatenation checks required by the hidden tests. TEXT2GRAD proceeds differently. The natural language reward model highlights the exact faulty span for char in lst[0] ... and explains that the code "fails to check lst[0] + lst[1] and lst[1] + lst[0]." This critique is aligned with the offending tokens and converted into negative rewards for that span and positive rewards for the rest (a minimal sketch of this span-to-token mapping follows the conclusion below). A single NL-Gradient update rewrites only the highlighted lines. The resulting function passes all unit tests. This example underscores the advantages of TEXT2GRAD. Additional qualitative results appear in Appendix F.

5 Conclusion

We presented TEXT2GRAD, a new framework for learning from natural language feedback by converting free-form textual critiques into span-level reward signals and actionable gradients. Unlike traditional RLHF approaches that rely on scalar rewards or inference-time prompting strategies, TEXT2GRAD directly incorporates feedback into the training process through token-aware policy updates. This enables precise credit assignment and more interpretable learning dynamics. Experimental results across summarization, code generation, and question answering demonstrate that TEXT2GRAD consistently outperforms scalar-reward PPO and prompt-based baselines in both alignment quality and sample efficiency. Overall, TEXT2GRAD opens a new direction for fine-grained, feedback-driven optimization of language models, moving beyond scalar supervision toward more human-like, interpretable, and effective learning.
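The span-to-token reward mapping used throughout the paper (and in the case study above) can be summarized in a few lines. The sketch below assumes the reward model's output has already been parsed into lists of positive and negative span strings and uses a generic fast HuggingFace tokenizer; the checkpoint name and helper names are illustrative, not the exact implementation.

```python
from typing import Dict, List
from transformers import AutoTokenizer

# Any fast tokenizer with offset mappings works; the checkpoint is illustrative.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def span_rewards(response: str, spans: Dict[str, List[str]]) -> List[float]:
    """Map positive/negative critique spans onto per-token pseudo-rewards.

    spans = {"positive": [...], "negative": [...]}, where each span is an
    exact substring of `response` (as required by the annotation prompts).
    Unannotated tokens default to 0, mirroring Algorithm 2 in Appendix K.
    """
    enc = tokenizer(response, return_offsets_mapping=True, add_special_tokens=False)
    offsets = enc["offset_mapping"]          # (char_start, char_end) per token
    rewards = [0.0] * len(offsets)
    for label, value in (("positive", +1.0), ("negative", -1.0)):
        for span in spans.get(label, []):
            start = response.find(span)
            if start < 0:
                continue                      # span not found verbatim; skip it
            end = start + len(span)
            for i, (s, e) in enumerate(offsets):
                if s < end and e > start:    # token overlaps the labeled span
                    rewards[i] = value
    return rewards

# Toy usage with a paraphrase of the match_parens critique from Figure 4.
resp = "for char in lst[0]:\n    ...\nreturn 'No'"
print(span_rewards(resp, {"negative": ["for char in lst[0]:"]}))
```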
References [1]Chaoyun Zhang, Zicheng Ma, Yuhao | https://arxiv.org/abs/2505.22338v1 |
Wu, Shilin He, Si Qin, Minghua Ma, Xiaoting Qin, Yu Kang, Yuyi Liang, Xiaoyu Gou, et al. Allhands: Ask me anything on large-scale verbatim feedback via large language models. arXiv preprint arXiv:2403.15157 , 2024. [2]Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea V oss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in neural information processing systems , 33:3008–3021, 2020. [3]Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems , 35:27730–27744, 2022. [4] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 , 2022. 9 [5]Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems , 36:53728–53741, 2023. [6]Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [7] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. [8]Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217 , 2023. [9]Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. Advances in Neural Information Processing Systems , 36:59008–59033, 2023. [10] Sebastian Raschka. Build a Large Language Model (From Scratch) . Simon and Schuster, 2024. [11] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR) , 2023. [12] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems , 36:8634–8652, 2023. [13] Chaoyun Zhang, Liqun Li, Shilin He, Xu Zhang, Bo Qiao, Si Qin, Minghua Ma, Yu Kang, Qingwei Lin, Saravan Rajmohan, et al. Ufo: A ui-focused agent for windows os interaction. arXiv preprint arXiv:2402.07939 , 2024. [14] Chaoyun Zhang, He Huang, Chiming Ni, Jian Mu, Si Qin, Shilin He, Lu Wang, Fangkai Yang, Pu Zhao, Chao Du, et al. Ufo2: The desktop agentos. arXiv preprint arXiv:2504.14603 , 2025. [15] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. 
[16] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, | https://arxiv.org/abs/2505.22338v1 |
Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 , 2021. [17] Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Guyue Liu, Qingwei Lin, et al. Large language model-brained gui agents: A survey. arXiv preprint arXiv:2411.18279 , 2024. [18] Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. Bridging the gap: A survey on integrating (human) feedback for natural language generation. Transactions of the Association for Computational Linguistics , 11:1643–1668, 2023. [19] Alexander Pan, Erik Jones, Meena Jagadeesan, and Jacob Steinhardt. Feedback loops with language models drive in-context reward hacking. arXiv preprint arXiv:2402.06627 , 2024. [20] Archit Sharma, Sedrick Scott Keh, Eric Mitchell, Chelsea Finn, Kushal Arora, and Thomas Kollar. A critical evaluation of ai feedback for aligning large language models. Advances in Neural Information Processing Systems , 37:29166–29190, 2024. [21] Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, et al. Rlaif vs. rlhf: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267 , 2023. [22] Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas V odrahalli, Siyu He, Daniel Scott Smith, Yian Yin, et al. Can large language models provide useful feedback on research papers? a large-scale empirical analysis. NEJM AI , 1(8):AIoa2400196, 2024. [23] Dakota Mahan, Duy Van Phung, Rafael Rafailov, Chase Blagden, Nathan Lile, Louis Castricato, Jan-Philipp Fränken, Chelsea Finn, and Alon Albalak. Generative reward models. arXiv preprint arXiv:2410.12832 , 2024. [24] Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, and Ji-Rong Wen. Improving large language models via fine-grained reinforcement learning with minimum editing constraint. arXiv preprint arXiv:2401.06081 , 2024. 10 [25] Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, and James Zou. Textgrad: Automatic" differentiation" via text. arXiv preprint arXiv:2406.07496 , 2024. [26] Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback at scale. arXiv preprint arXiv:2303.16755 , 2023. [27] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback. 2023. [28] Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, and Radha Poovendran. Kodcode: A diverse, challenging, and verifiable synthetic dataset for coding. arXiv preprint arXiv:2503.02951 , 2025. [29] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems , 30, 2017. [30] Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080 , 2024. [31] Nathan Lambert. Reinforcement learning from human feedback. 
arXiv preprint arXiv:2504.12501 , 2025. [32] | https://arxiv.org/abs/2505.22338v1 |
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525 , 2023. [33] Chenghua Huang, Zhizhen Fan, Lu Wang, Fangkai Yang, Pu Zhao, Zeqi Lin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, and Qi Zhang. Self-evolved reward learning for llms. arXiv preprint arXiv:2411.00418 , 2024. [34] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv e-prints , pages arXiv–2407, 2024. [35] Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qingwei Lin, and Dongmei Zhang. Everything of thoughts: Defying the law of penrose triangle for thought generation. In Findings of the Association for Computational Linguistics ACL 2024 , pages 1638–1662, 2024. [36] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems , 36:46534–46594, 2023. [37] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out , pages 74–81, 2004. [38] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics , pages 311–318, 2002. [39] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675 , 2019. [40] Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, et al. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594 , 2024. [41] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 , 2021. [42] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 , 2021. [43] Zhaojian Yu, Yilun Zhao, Arman Cohan, and Xiao-Ping Zhang. Humaneval pro and mbpp pro: Evaluating large language models on self-invoking code generation. arXiv preprint arXiv:2412.21199 , 2024. [44] Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475 , 2024. [45] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457 , 2018. [46] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems , 36:46595–46623, 2023. 
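Appendix A below formalizes why dense token-level rewards sharpen credit assignment relative to a single end-of-sequence reward. The following minimal numerical sketch illustrates the same amplification effect under simple assumptions (a single nonzero pseudo-reward, γλ = 0.95, value and KL terms held constant); the sequence length and reward placement are illustrative.

```python
# Compare advantage estimates at t = 0 for two reward placements with the same
# total reward: Setting A puts the reward at token k (token-level), Setting B
# puts it at the final token T (end-of-sequence).
gamma_lambda = 0.95
T = 100          # sequence length (illustrative)
k = T - 20       # the reward difference occurs 20 tokens before the end

def advantage_at(t: int, reward_position: int, reward: float = 1.0) -> float:
    """A_t = sum_{j >= t} (gamma*lambda)^(j - t) * delta_j with a single
    nonzero pseudo-reward at `reward_position` (value/KL terms held constant)."""
    return (gamma_lambda ** (reward_position - t)) * reward

adv_token_level = advantage_at(0, k)               # Setting A
adv_end_of_seq = advantage_at(0, T)                # Setting B
amplification = adv_token_level / adv_end_of_seq   # equals (gamma*lambda)^-(T-k)

print(f"amplification factor ~ {amplification:.2f}")   # ~2.8 for a 20-token gap
```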
A Discriminative Power of Token-Level Rewards

A key design choice in our method is to
provide dense, token-level feedback rather than sparse, end-of-sequence rewards. Intuitively, localized reward signals allow the policy to attribute credit or blame more precisely to specific parts of the output. In this section, we formalize this intuition and show how token-level rewards lead to sharper and more discriminative advantage estimates, thereby improving policy learning. Background. In reinforcement learning, policy updates are guided by the advantage function, which measures how much better (or worse) an action is compared to the policy’s expected value. Using Generalized Advantage Estimation (GAE), the advantage at timestep tis computed from the temporal-difference (TD) errors: At=T−tX l=0(γλ)lδt+l,where δt=rt+γV(st+1)−V(st), andVis the value function, γis the discount factor, and λis the GAE parameter. Comparing Token-Level vs. End-of-Sequence Reward Settings. We define two settings for reward assignment: Setting A: Token-Level Rewards. Each token may receive its own feedback: •rtoken,A t̸= 0for many t∈[1, T] • Total reward: rtotal,A t =rtoken,A t +rKL t Setting B: End-of-Sequence Reward. Only the final token is rewarded: •rtoken,B t = 0for all t < T •rtoken,B T̸= 0; total reward: rtotal,B t =rtoken,B t +rKL t Letτ1andτ2be two trajectories, where τ1is qualitatively better than τ2. Define ∆rt=rtoken,A t (τ1)−rtoken,A t (τ2), and assume all KL terms and value functions are held constant for simplicity (the general case follows similarly). Advantage Difference Across Trajectories. The advantage difference under each setting is: ∆AA t=TX k=t(γλ)k−t∆rk,∆AB t= (γλ)T−tTX k=t∆rk. Even ifPT k=t∆rkis the same in both cases (i.e., the same total reward difference), ∆AA t>∆AB twhenever any ∆rk>0fork < T , because: (γλ)k−t>(γλ)T−t,for all k < T. This means the earlier the reward difference occurs in the sequence, the more strongly it is emphasized in Setting A relative to Setting B. Amplification of Early Signal. To quantify this difference, define the amplification factor: α(k, T) =(γλ)k−t (γλ)T−t= (γλ)−(T−k). For a typical value of γλ= 0.95and a gap of T−k= 20 steps (i.e., the difference occurs 20 tokens before the final token), we have: α(k, T)≈0.95−20≈2.8, meaning that in Setting A, the advantage function weights early reward differences nearly 3× more than in Setting B. This analysis confirms that token-level feedback improves the discriminative power of the advantage signal: even if the total reward difference is the same, Setting A assigns more importance to earlier deviations in quality. This sharper signal allows the policy to learn localized corrections—e.g., improving grammar or factual consistency in specific parts of a summary—rather than attributing success or failure to the entire sequence. As a result, our method enables faster convergence and better fine-tuning, especially on open-ended tasks where quality varies across tokens. 12 B GPT-4o CoT thinking Annotation Prompt B.1 SLF5K Listing 1: SLF5K GPT-4o Annotation Prompt 1Please critique the following summary of a post and provide feedback in the specified JSON format : 2 3--- 4 5** Original Post :** 6{ post } 7 8** Generated Summary :** 9{ generated_summary } 10 11--- 12 13** Definitions :** 14- ** good_spans **: 0-2 phrases from the summary that greatly improve its quality by accurately and concisely capturing the original post ’s core meaning or key | https://arxiv.org/abs/2505.22338v1 |
details , as explained in ’textual_feedback ’. Empty if none apply . 15- ** poor_spans **: 0-2 phrases from the summary that noticeably harm its quality due to inaccuracy , redundancy , poor wording , or being less important and replaceable with more critical content , as explained in ’textual_feedback ’. Empty if none apply . 16 17--- 18 19** Instructions :** 201. Identify the summary ’s most essential strengths that reflect the original post accurately and its most critical weaknesses that misrepresent or confuse it. 212. Select 0-2 of the most significant phrases for ’good_spans ’ and ’poor_spans ’, keeping them concise and impactful , with brief justifications . Include none if no phrases stand out. 223. Ensure ’good_spans ’ and ’poor_spans ’ are directly supported by the analysis in ’textual_feedback ’. 23 24--- 25 26** Chain of Thought :** 27First , carefully analyze both the original post and the generated summary : 281. What are the key points of the original post ? 292. Which of these key points are accurately captured in the summary ? 303. What important information is missing from the summary ? 314. Are there any inaccuracies or misrepresentations in the summary ? 325. Which specific phrases in the summary represent its strongest elements ? 336. Which specific phrases in the summary represent its weakest elements ? 34 35Based on this analysis , formulate your textual feedback and identify the good and poor spans . 13 36 37--- 38 39** Output Format :** 40Provide a concise , one - paragraph critique and the GOOD / POOR spans in this JSON structure : 41‘‘‘json 42{ 43 " textual_feedback ": " Your critique here summarizing key strengths and weaknesses in one paragraph .", 44 " good_spans ": [" phrase1 ", " phrase2 "], // 0-2 concise phrases from the generated summary , tied to textual_feedback , or [] if none 45 " poor_spans ": [" phrase1 ", " phrase2 "] // 0-2 concise phrases from the generated summary , tied to textual_feedback , or [] if none 46} 47‘‘‘ 48 49Focus on precision : include only the most impactful phrases of the generated summary , avoiding excessive or minor details . B.2 UltraFeedback Listing 2: UltraFeedback GPT-4o Annotation Prompt 1<CritiquePrompt > 2 3 <Instructions > Critique a response to a user input and provide feedback in JSON format :</ Instructions > 4 5 6 <EvaluationCriteria > 7 <Criterion name =" Accuracy "> Does it correctly address the input ?</ Criterion > 8 <Criterion name =" Relevance "> Does it stay on topic ?</ Criterion > 9 <Criterion name =" Clarity ">Is it easy to understand ?</ Criterion > 10 <Criterion name =" Completeness "> Does it cover the input ’s core needs ?</ Criterion > 11 </ EvaluationCriteria > 12 13 14 <SpanGuidelines > 15 <GoodSpans > 16 <Description > Phrases from the response that best capture its strengths (e.g., accurate , relevant , clear ). Select only the most essential and impactful phrases , directly tied to ’textual_feedback ’. </ Description > 17 </ GoodSpans > 18 | https://arxiv.org/abs/2505.22338v1 |
<PoorSpans > 19 <Description > Phrases from the response that best highlight its weaknesses (e.g., inaccurate , irrelevant , vague ). Select only the most essential and impactful phrases , directly tied to ’textual_feedback ’. </ Description > 14 20 </ PoorSpans > 21 <Requirement > Spans must be exact quotes from the response .</ Requirement > 22 </ SpanGuidelines > 23 24 25 <ReflectionProcess > 26 <Step >First , carefully analyze the user input to understand the core question or request .</ Step > 27 <Step >Next , examine the generated response against each evaluation criterion .</ Step > 28 <Step >For each criterion , identify specific strengths and weaknesses with supporting evidence from the response .</ Step > 29 <Step > Consider how well the response addresses the user ’s explicit and implicit needs .</ Step > 30 <Step > Finally , synthesize your analysis into a coherent critique that highlights the most important points .</ Step > 31 </ ReflectionProcess > 32 33 34 <Separator >---</ Separator > 35 36 37 <UserInput >{ entry [’prompt ’]} </ UserInput > 38 <GeneratedResponse >{ entry [’ response ’]} </ GeneratedResponse > 39 40 41 <Separator >---</ Separator > 42 43 44 <OutputFormat > 45 <Description > Provide the critique in the following JSON structure :</ Description > 46 <JSONExample > 47 {{ 48 " textual_feedback ": "One - paragraph critique summarizing strengths and weaknesses , tied to spans .", 49 " good_spans ": [" phrase1 ", " phrase2 ", ...] , // Impactful phrases from <GeneratedResponse >, or [] if none 50 " poor_spans ": [" phrase1 ", " phrase2 ", ...] // Impactful phrases from <GeneratedResponse >, or [] if none 51 }} 52 </ JSONExample > 53 </ OutputFormat > 54 55</ CritiquePrompt > 15 B.3 KodCode Listing 3: KodCode GPT-4o Annotation Prompt 1Analyze the following code solution for the given problem : 2 3Problem Description : 4’’’ 5{ problem } 6’’’ 7 8Submitted Code : 9’’’ 10{ solution } 11’’’ 12 13Test Results : 14Passed : { passed } 15 16{%- if not passed -%} 17Test Question : 18{ test_question } 19 20Error Output : 21{ stdout } 22{%- endif -%} 23 24Please analyze the code and identify the following in JSON format : 25 261. Identify any error - causing code segments directly from the submitted solution . 272. Provide detailed feedback on the code ’s functionality , issues , and improvement suggestions . 28 - First , understand what the code is trying to accomplish 29 - Analyze the algorithm and approach used 30 - Identify any logical errors or inefficiencies 31 - Consider edge cases and potential improvements 323. Point out any code segments from the solution that work but could be improved . 33 34Return your analysis in this JSON structure : 35‘‘‘json 36{ 37 " Code Feedback ": " Provide a detailed explanation of the code ’s functionality , any potential issues , and suggestions for improvement . Use markdown formatting for better readability .", 38 " wrong_code ": [" Extract ONLY the | https://arxiv.org/abs/2505.22338v1 |
problematic code segments FROM THE SUBMITTED SOLUTION that cause failures . Must be exact quotes . Leave empty [] if none found ."] , 39 " improvement_code ": [" Extract ONLY the working but improvable code segments FROM THE SUBMITTED SOLUTION . Must be exact quotes . Leave empty [] if none needed ."] 40} 41‘‘‘ 42Note : For ’wrong_code ’ and ’improvement_code ’, only include direct quotes from the submitted code above , not suggested fixes .""" 16 C Additional Reward Model Performance Results Table 5: Span prediction quality on UltraFeedback dataset. GT/Pred shows the count of ground truth and predicted spans, with E-P indicating exact matches and partial matches. OUI (Overlap Unit Index) measures boundary precision. Dataset GOOD Spans POOR Spans UltraFeedback 1150/736 (292-237) 812/400 (101-104) OUI 0.40 0.27 Table 6: Code suggestion quality on KodCode dataset. Exact Match shows the proportion of suggestions that perfectly match ground truth. RougeL measures lexical overlap between predicted and reference code segments. Code Type Exact Match RougeL Improvement 0.47 0.64 Wrong 0.55 0.59 Table 7: Additional metrics for all models on the SLF5K dataset, including perplexity and BERTScore components. Human perplexity: 37.375. Model Perplexity BERTScore Precision Recall ChatGPT-3.5 27.288 0.806 0.884 ChatGPT-4o 53.242 0.879 0.894 8B 19.248 0.848 0.894 8B-SFT 44.103 0.865 0.885 8B-SFT+Reflection 34.823 0.880 0.897 8B-PPO 28.472 0.892 0.895 TEXT2GRAD-8B 25.423 0.903 0.902 17 D Hyperparamesters of NL-Gradient PPO Optimization We report the full training hyperparameters used for NL-Gradient PPO optimization across three datasets: SLF5K, UltraFeedback, and KodCode. All experiments use the Llama 3 or Llama 3.1 8B Instruct variant as both the base policy and reward models. For the SLF5K dataset (Table 8), we adopt a linear learning rate scheduler with a base learning rate of 1×10−6. The model is trained using a batch size of 12 and gradient accumulation over 12 steps. PPO is run for 4 epochs per batch with mini-batch size of 1. The KL penalty is enforced with a full KL term, an initial coefficient of 0.2, and a target KL divergence of 6. Adaptive KL control is enabled, and training is conducted over 4 full epochs. For the UltraFeedback dataset (Table 9), we use a cosine learning rate scheduler with the same learning rate of 1×10−6. The training is conducted with a batch size of 8, gradient accumulation over 8 steps, and 4 PPO epochs per batch. The KL penalty remains full, but with a lower initial KL coefficient of 0.05 and a tighter target KL of 3. The training is distributed across 8 GPUs using DeepSpeed ZeRO-3 parallelism for efficiency and scalability. For the KodCode dataset (Table 10), we apply a more conservative learning rate of 5×10−7with a cosine scheduler. The batch and gradient accumulation sizes match UltraFeedback, but the target KL is stricter at 1, with an initial KL coefficient of 0.07. The model is again trained with 4 PPO epochs per batch and a full KL penalty. Training is conducted on 8 GPUs using DeepSpeed ZeRO-1 parallelism. These configurations were selected to balance stability and efficiency, while ensuring sufficient capacity for fine-grained | https://arxiv.org/abs/2505.22338v1 |
span-level optimization guided by natural language feedback. Table 8: Hyperparameters for NL-Gradient PPO Training on SLF5K Hyperparameter Value Base Policy Model Llama 3.1 8B Instruct Base Reward Model Llama 3.1 8B Instruct Learning Rate 1×10−6 LR Scheduler Linear Batch Size 12 Mini-batch Size 1 PPO Epochs 4 Gradient Accumulation Steps 12 Initial KL Coefficient 0.2 Target KL 6 KL Penalty Full Adaptive KL Control True Training Epochs 4 18 Table 9: Hyperparameters for NL-Gradient PPO Training on UltraFeedback Hyperparameter Value Base Policy Model Llama 3 8B Instruct Base Reward Model Llama 3.1 8B Instruct Learning Rate 1×10−6 LR Scheduler Cosine Batch Size 8 Mini-batch Size 1 PPO Epochs 4 Gradient Accumulation Steps 8 Initial KL Coefficient 0.05 Target KL 3 KL Penalty Full Adaptive KL Control True Training Epochs 4 Dataset Ultrafeedback Number of GPUs 8 Parallelism DeepSpeed ZeRO-3 Table 10: Hyperparameters for NL-Gradient PPO Training on KodCode Hyperparameter Value Base Policy Model Llama 3.1 8B Instruct Base Reward Model Llama 3.1 8B Instruct Learning Rate 5×10−7 LR Scheduler Cosine Batch Size 8 Mini-batch Size 1 PPO Epochs 4 Gradient Accumulation Steps 8 Initial KL Coefficient 0.07 Target KL 1 KL Penalty Full Adaptive KL Control True Training Epochs 4 Dataset KodCode Number of GPUs 8 Parallelism DeepSpeed ZeRO-1 19 E GPT-4o Judge CoT Influence Annotation Prompt E.1 SLF5K Evaluation Prompt The following prompt template was used to evaluate model responses on the SLF5K dataset. To prevent position bias in the evaluation, the order of model responses (analysis_1 and analysis_2) was randomly shuffled for each comparison: Listing 4: SLF5K Evaluation Prompt 1Compare and evaluate two different summaries of the same query . You must respond in valid JSON format . 2 3Original Query : 4{ query } 5 6{ analysis_1_label }: 7{ response_1 } 8 9{ analysis_2_label }: 10{ response_2 } 11 12Evaluation Criteria : 131. Accuracy (0 -10): 14 - Does it capture the main points correctly ? 15 - Is it faithful to the original content ? 16 - Are there any factual errors ? 17 182. Completeness (0 -10): 19 - Are all key points included ? 20 - Is any important information missing ? 21 - Does it cover the core message ? 22 233. Conciseness (0 -10): 24 - Is it clear and to the point ? 25 - Does it avoid unnecessary details ? 26 - Is the language efficient ? 27 284. Coherence (0 -10): 29 - Is the summary well - organized ? 30 - Does it flow logically ? 31 - Is it easy to understand ? 32 33Compare both summaries and evaluate them . Respond ONLY with a JSON object in this exact format : 34{ 35 "{ score_key_1 }": { 36 " strengths ": [" specific strength 1", " specific strength 2", ...] , 37 " weaknesses ": [" specific weakness 1", " specific weakness 2", ...] 38 " score ": <overall score between 0-10>, 39 " accuracy ": <score between 0-10>, 40 " completeness ": <score between 0-10>, 41 " conciseness ": <score between 0-10>, 42 " coherence ": <score | https://arxiv.org/abs/2505.22338v1 |
between 0-10>, 43 }, 44 "{ score_key_2 }": { 20 45 " strengths ": [" specific strength 1", " specific strength 2", ...] , 46 " weaknesses ": [" specific weakness 1", " specific weakness 2", ...] 47 " score ": <overall score between 0-10>, 48 " accuracy ": <score between 0-10>, 49 " completeness ": <score between 0-10>, 50 " conciseness ": <score between 0-10>, 51 " coherence ": <score between 0-10>, 52 } 53} E.2 KodCode Evaluation Prompt The following prompt template was used to evaluate the quality of code span selections for the KodCode dataset, which resulted in the win-rate metrics (72.17 : 7.01 : 20.82) comparing Chain-of-Thought (CoT) feedback quality: Listing 5: KodCode Evaluation Prompt 1Evaluate the precision and specificity of code span selections in two different analyses . 2 3Problem : 4{ problem } 5 6Solution Code : 7{ solution } 8 9{ analysis_1_label }: 10Selected spans : { spans_1 } 11Suggestions : { improve_1 } 12 13{ analysis_2_label }: 14Selected spans : { spans_2 } 15Suggestions : { improve_2 } 16 17Please evaluate the quality of span selections in JSON format , focusing on precision and minimality : 18{ 19 "{ score_key_1 }": { 20 " score ": (0 -10 score for span selection precision ), 21 " Reason ": " Explain the reason for the score " 22 }, 23 "{ score_key_2 }": { 24 " score ": (0 -10 score for span selection precision ), 25 " Reason ": " Explain the reason for the score " 26 }, 27 " comparison ": " Explain which analysis has more precise and minimal span selections " 28} 29 30Guidelines for span evaluation : 311. Each span should capture ONLY the specific problematic code , nothing more 21 322. General or overly broad selections ( like entire functions ) are penalized 333. Spans should not include irrelevant surrounding code 344. Multiple small precise spans are better than one large span 355. Spans must directly relate to the identified issue E.3 UltraFeedback Evaluation Prompt The following prompt template was used to evaluate the precision and specificity of text span selections for the UltraFeedback dataset: Listing 6: UltraFeedback Evaluation Prompt 1""" Evaluate the precision and specificity of text span selections in two different analyses . 2 3Problem : 4{ problem } 5 6Response : 7{ solution } 8 9{ analysis_1_label }: 10Problematic spans : { spans_1 } 11Good spans : { improve_1 } 12 13{ analysis_2_label }: 14Problematic spans : { spans_2 } 15Good spans : { improve_2 } 16 17Please evaluate the quality of span selections in JSON format , focusing on precision and minimality : 18{ 19 "{ score_key_1 }": { 20 " score ": (0 -10 score for span selection precision ), 21 " Reason ": " Explain the reason for the score " 22 }, 23 "{ score_key_2 }": { 24 " score ": (0 -10 score for span selection precision ), 25 " Reason ": " Explain the reason for the score " 26 }, 27 " comparison ": " Explain which analysis | https://arxiv.org/abs/2505.22338v1 |
has more precise span selections while avoiding unnecessary or overly broad spans " 28} 29 30Guidelines for span evaluation : 311. Each problematic span should capture ONLY the specific problematic text , nothing more 322. General or overly broad selections are penalized 333. Spans should not include irrelevant surrounding text 344. Multiple small precise spans are better than one large span 355. Spans must directly relate to the identified issue 366. Good spans should highlight the best parts of the response 22 F Case Studies on HumanEval We present three case studies from the HumanEval benchmark to demonstrate the effectiveness of our approach. F.1 Special Factorial Problem Statement 1def special_factorial (n): 2 """ The Brazilian factorial is defined as: 3 brazilian_factorial (n) = n! * (n -1)! * (n -2)! * ... * 1! 4 where n > 0 5 6 For example : 7 >>> special_factorial (4) 8 288 9 10 The function will receive an integer as input and should return the special 11 factorial of this integer . 12 """ Instruct Model Solution 1def special_factorial (n): 2 if n < 0: 3 raise ValueError (" Input must be a non - negative integer ") 4 5 result = 1 6 for i in range (1, n + 1): 7 result *= math . factorial (i) 8 9 return result PPO Model Solution 1def special_factorial (n): 2 result = 1 3 for i in range (1, n + 1): 4 result *= i 5 return result Our Approach 1import math 2def special_factorial (n): 3 if not isinstance (n, int) or n <= 0: 4 raise ValueError (" Input must be a positive integer .") 5 6 result = 1 7 for i in range (1, n + 1): 8 result *= math . factorial (i) 23 9 10 return result F.2 Prime Fibonacci Numbers Problem Statement 1def prime_fib (n: int ): 2 """ 3 prime_fib returns n-th number that is a Fibonacci number and it ’s also prime . 4 >>> prime_fib (1) 5 2 6 >>> prime_fib (2) 7 3 8 >>> prime_fib (3) 9 5 10 >>> prime_fib (4) 11 13 12 >>> prime_fib (5) 13 89 14 """ Instruct Model Solution 1def is_prime (num: int ) -> bool : 2 if num < 2: 3 return False 4 for i in range (2, int( num ** 0.5) + 1): 5 if num % i == 0: 6 return False 7 return True 8def prime_fib (n: int ) -> int: 9 a, b = 0, 1 10 count = 0 11 while True : 12 a, b = b, a + b 13 if is_prime (b): 14 count += 1 15 if count == n: 16 return b PPO Model Solution 1def prime_fib (n: int ): 2 a, b = 0, 1 3 count = 0 4 while True : 5 a, b = b, a + b 6 if is_prime (b): 7 count += 1 8 if count == n: 24 9 return b Our Approach 1def is_prime (num: int ) -> bool : 2 if num | https://arxiv.org/abs/2505.22338v1 |
< 2: 3 return False 4 for i in range (2, int( num ** 0.5) + 1): 5 if num % i == 0: 6 return False 7 return True 8def prime_fib (n: int ) -> int: 9 a, b = 0, 1 10 count = 0 11 while True : 12 a, b = b, a + b 13 if is_prime (b): 14 count += 1 15 if count == n: 16 return b F.3 File Name Validation Problem Statement 1def file_name_check ( file_name ): 2 """ Create a function which takes a string representing a file ’s name , and returns 3 ’Yes ’ if the the file ’s name is valid , and returns ’No ’ otherwise . 4 A file ’s name is considered to be valid if and only if all the following conditions 5 are met: 6 - There should not be more than three digits (’0’-’9’) in the file ’s name . 7 - The file ’s name contains exactly one dot ’.’ 8 - The substring before the dot should not be empty , and it starts with a letter from 9 the latin alphapet (’a ’-’z’ and ’A ’-’Z ’). 10 - The substring after the dot should be one of these : [’txt ’, ’exe ’, ’dll ’] 11 Examples : 12 file_name_check (" example .txt ") # => ’Yes ’ 13 file_name_check ("1 example .dll ") # => ’No ’ (the name should start with a latin alphapet letter ) 14 """ Instruct Model Solution 1def file_name_check ( file_name ): 2 # First , split the file ’s name by the dot 3 name , extension = file_name . rsplit (’.’, 1) 25 4 5 # Check if the extension is valid 6 valid_extensions = [’txt ’, ’exe ’, ’dll ’] 7 if extension not in valid_extensions : 8 return ’No ’ 9 10 # Check if the substring before the dot is not empty 11 if not name : 12 return ’No ’ 13 14 # Check if the substring before the dot starts with a letter from the latin alphabet 15 if not name [0]. isalpha (): 16 return ’No ’ 17 18 # Check if there are more than three digits in the file ’s name 19 if sum(c. isdigit () for c in file_name ) > 3: 20 return ’No ’ 21 22 # If all conditions are met , return ’Yes ’ 23 return ’Yes ’ PPO Model Solution 1def file_name_check ( file_name ): 2 # Split the file name into two parts : the name and the extension 3 name , extension = file_name . split (’.’) 4 5 # Check if the name starts with a latin alphapet letter 6 if not name [0]. isalpha (): 7 return ’No ’ 8 9 # Check if there are more than three digits in the name 10 if sum(c. isdigit () for c in name ) > 3: 11 return ’No ’ 12 13 # Check if the extension is valid 14 if extension not in | https://arxiv.org/abs/2505.22338v1 |
[’txt ’, ’exe ’, ’dll ’]: 15 return ’No ’ 16 17 # Check if the file name contains exactly one dot 18 if file_name . count (’.’) != 1: 19 return ’No ’ 20 21 # If all checks pass , return ’Yes ’ 22 return ’Yes ’ Our Approach 1def file_name_check ( file_name ): 2 """ 3 Checks if a file name is valid according to the given conditions . 4 26 5 A file ’s name is considered to be valid if and only if all the following conditions 6 are met: 7 - There should not be more than three digits (’0’-’9’) in the file ’s name . 8 - The file ’s name contains exactly one dot ’.’. 9 - The substring before the dot should not be empty , and it starts with a letter from 10 the latin alphabet (’a ’-’z’ and ’A ’-’Z ’). 11 - The substring after the dot should be one of these : [’txt ’, ’exe ’, ’dll ’]. 12 13 Args : 14 file_name ( str): The name of the file to check . 15 16 Returns : 17 str : ’Yes ’ if the file name is valid , ’No ’ otherwise . 18 """ 19 20 # Check if the file name contains exactly one dot 21 if file_name . count (’.’) != 1: 22 return ’No ’ 23 24 # Split the file name into two parts : before and after the dot 25 before_dot , after_dot = file_name . split (’.’) 26 27 # Check if the substring before the dot is not empty and starts with a letter 28 if not before_dot or not before_dot [0]. isalpha (): 29 return ’No ’ 30 31 # Check if the substring before the dot contains more than three digits 32 if sum(c. isdigit () for c in before_dot ) > 3: 33 return ’No ’ 34 35 # Check if the substring after the dot is one of the allowed extensions 36 if after_dot not in [’txt ’, ’exe ’, ’dll ’]: 37 return ’No ’ 38 39 # If all conditions are met , return ’Yes ’ 40 return ’Yes ’ G Limitations Despite its effectiveness, TEXT2GRAD has two primary limitations. First, its performance depends on the quality of the reward model. While our model achieves moderate alignment with reference critiques, further improvements in critique generation and span-level accuracy could enhance downstream policy optimization—particularly in tasks requiring nuanced feedback. Second, generating and applying token-level rewards introduces computational overhead compared to scalar reward methods, which may impact scalability in large-scale deployments. In future work, we aim to improve reward model precision and efficiency, and to extend our framework to broader generation settings, including open-ended tasks where fine-grained feedback is harder to define. 27 5 10 15 20 25 30 35 40 Training Samples (Thousands)0.20.30.40.50.6Sequence-level Reward (PPO)Early StopPPO Text2Grad 0.900.920.940.960.98 Token-level Reward (Text2Grad)KodCode(a) Training dynamics on KodCode dataset 10 20 30 40 Training Samples (Thousands)0.000.050.100.150.20Sequence-level Reward (PPO)Early StopPPO Text2Grad 0.19500.19750.20000.20250.20500.20750.21000.2125 Token-level Reward (Text2Grad)Early StopUltraFeedback (b) Training dynamics | https://arxiv.org/abs/2505.22338v1 |
on UltraFeedback dataset Figure 5: Comparative analysis of training dynamics between TEXT2GRAD and standard PPO. The results demonstrate thatTEXT2GRAD (red line) achieves more stable and consistent learning progress, while standard PPO (blue line) shows significant volatility and unstable oscillations throughout the training process. H Training Dynamics on KodCode and UltraFeedback I Hyperparameters for Natural Language Reward Model Training Parameter Value Base Model Llama-3.1-8B-Instruct Hardware 8× NVIDIA V100 (32GB) Parallelization DeepSpeed ZeRO-3 Batch Size 1 per GPU Learning Rate 1e-5 Epochs 3 Prompt Max Length 950 tokens Max Sequence Length 1250 tokens LoRA Rank (r) 16 LoRA Alpha 32 LoRA Target Modules q_proj, v_proj LoRA Dropout 0.1 Optimizer AdamW Adam β1, β2 0.9, 0.999 Adam ϵ 1e-8 Weight Decay 3e-7 Gradient Clipping 1.0 FP16 Enabled Table 11: Hyperparameters for Natural Language Reward Model Training 28 J Ablation Study for SLF5K To demonstrate the effectiveness of our proposed components, we conduct ablation studies on the SLF5K dataset. Figure 6 shows the performance comparison when removing the Chain-of-Thought (CoT) reasoning process and the span-level reward signals from our framework. 0 20 40 60 80 100 Percentage (%)PPO vs SFTTEXT2GRAD (w/o CoT) vs SFTTEXT2GRAD (w/ CoT) vs PPO 46.0% 11.8% 42.2%52.0% 10.8% 37.2%48.4% 11.8% 39.8%First Model Win Ties Second Model Win Figure 6: Ablation study results on SLF5K dataset. The graph shows the win rate comparison between our full model and the variant without CoT reasoning and span-level reward signals during reward model training. The results demonstrate that these components are crucial for achieving optimal performance in natural language gradient policy optimization. K Pseudocode for the Text2Grad Framework Algorithm 1 TEXT2GRAD: Reinforcement Learning from Natural Language Feedback (Overall Framework) Input: Set of prompts for policy training. Output: Optimized policy πθ. Phase 1: Dual-Feedback Reward Annotation (Described in Section 3.3) 1:Initialize dataset for reward model training DR← ∅. 2:Generate initial responses yifor a set of prompts xi(e.g., using a base policy). 3:for all prompt xiand its corresponding response yido 4: (ci,A(yi),δi)←GenerateDualFeedback (xi, yi) ▷See Algorithm 2 5: Letzi←[ci;A(yi)] ▷ ciis critique, A(yi)is span-JSON 6: Add(xi, yi, zi)toDR. 7:end for Phase 2: Reward Model Training (Described in Section 3.4) 8:Rϕ←TrainRewardModel (DR) ▷See Algorithm 3 Phase 3: NL-Gradient Policy Optimization (Described in Section 3.5) 9:Initialize policy πθ(e.g., with a pre-trained LLM) and value function Vψ. 10:πθ←OptimizePolicyWithNLGradient (πθ, Rϕ, Vψ) ▷See Algorithm 4 11:return Optimized policy πθ. 29 Algorithm 2 Dual-Feedback Reward Annotation (Section 3.3) 1:procedure GENERATE DUALFEEDBACK (x, y) 2: Input: Prompt x, generated response y= (y1, . . . , y T). 3: Output: Natural language critique c, structured span-level reward map A(y), token-level pseudo-rewards δ. // Dual-Feedback Annotation using a strong LLM (e.g., GPT-4o) 4: ifhuman-written feedback is lacking (Reasoning-Augmented Annotation) then 5: Guide LLM to: 6: (1) Reason about the quality of response ystep-by-step. 7: (2) Output a critique cbased on this reasoning. 8: (3) Produce a span-level JSON map A(y)associating spans sk⊂ywith labels ℓk∈ {positive ,neutral ,negative }. 9: else 10: Prompt LLM to output critique cand span-level JSON map A(y). 
11: end if
12: ▷ Formally, R_LLM(x, y) = (c, A(y)), where A(y) maps each span s_k to its label ℓ_k.
// Token-Level
Reward Mapping 13: Initialize token-level pseudo-rewards δ= (δ1, . . . , δ T)with zeros. 14: for all labeled span skinA(y)do 15: Letℓk=A(y)[sk]. 16: ifℓk=positive then 17: for all token index tsuch that yt∈skdo 18: δt←+1. 19: end for 20: else if ℓk=negative then 21: for all token index tsuch that yt∈skdo 22: δt← −1. 23: end for 24: end if ▷neutral spans are typically unannotated and default to δt= 0. 25: end for 26: return c,A(y),δ. 27:end procedure Algorithm 3 Reward Model Training (Section 3.4) 1:procedure TRAIN REWARD MODEL (DR) 2: Input: Dataset DR={(xi, yi, zi)}N i=1, where zi= [ci;A(yi)]. 3: Output: Trained reward model Rϕ. 4: Initialize reward model parameters ϕ. 5: The reward model Rϕis trained to predict zgiven x, y:pϕ(z|x, y) =Q|z| j=1pϕ(zj|z<j, x, y). 6: Define the loss function: LR(ϕ) =−E(x,y,z )∈DR[logpϕ(z|x, y)]. 7: Train Rϕby minimizing LR(ϕ)onDRusing teacher forcing and a standard causal LM objective. 8: return Trained reward model Rϕ. 9:end procedure 30 Algorithm 4 NL-Gradient Policy Optimization (Section 3.5) 1:procedure OPTIMIZE POLICY WITHNLG RADIENT (πθinit, Rϕ, Vψinit) 2: Input: Initial policy πθinit, trained reward model Rϕ, initial value function Vψinit. 3: Hyperparameters: Learning rates, PPO clipping ϵ, entropy bonus β, GAE γ, λ. 4: Output: Optimized policy πθ. 5: Initialize policy πθ←πθinit, value function Vψ←Vψinit. 6: foreach iteration iter= 1, . . . , MaxIterations do 7: Letπθold←πθ. 8: Initialize a batch of rollouts B ← ∅ . 9: foreach sample s= 1, . . . , NumSamplesPerIteration do 10: Sample prompt x. 11: Generate response y= (y1, . . . , y T)∼πθold(· |x). 12: Generate feedback z′= [c′;A′(y)]∼Rϕ(z′|x, y). 13: ParseA′(y)to get token-level pseudo-rewards δ′= (δ′ 1, . . . , δ′ T)(using lines 11-20 of Alg. 2). 14: Fort= 1, . . . , T :rtotal,A t ←δ′ t+rKL t ▷ rKL tis an optional KL-penalty term. 15: Compute advantages A1, . . . , A T. For t=T . . . 1: 16: At=PT k=tγk−trtotal,A k−Vψ(x, y<t). (Or use GAE: At=PT−t−1 l=0(γλ)l(rtotal,A t+l+ γVψ(x, y<t+l+1)−Vψ(x, y<t+l))) 17: Add(x, y, δ′,A,rtotal,A)toB. 18: end for 19: foreach epoch e= 1, . . . , NumEpochsPPO do 20: for all (x, y, δ′,A,rtotal,A)inBdo 21: Fort= 1, . . . , T : 22: ρt(θ) =πθ(yt|x,y<t) πθold(yt|x,y<t). 23: LCLIP t(θ) = min ( ρt(θ)At,clip(ρt(θ),1−ϵ,1 +ϵ)At). 24: LVF t(ψ) = (Vψ(x, y<t)−(PT k=tγk−trtotal,A k))2.▷Value target is discounted sum of rewards. 25: LENT t(θ) =H(πθ(· |x, y<t)). 26: end for 27: LPPO(θ) =EB,t LCLIP t(θ)−βLENT t(θ) . 28: LVF(ψ) =EB,t LVF t(ψ) . 29: Update policy parameters: θ←optimizer_step (θ,∇θLPPO(θ)). 30: Update value function parameters: ψ←optimizer_step (ψ,∇ψLVF(ψ)). 31: end for 32: end for 33: return Optimized policy πθ. 34:end procedure 31 | https://arxiv.org/abs/2505.22338v1 |
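As a companion to Algorithm 4, the following is a compact sketch of the per-token PPO update driven by span-derived pseudo-rewards, written with PyTorch tensors. The hyperparameter values and tensor shapes are illustrative, and the KL penalty and value-loss bookkeeping of the full algorithm are omitted for brevity.

```python
import torch

def gae_advantages(rewards: torch.Tensor, values: torch.Tensor,
                   gamma: float = 1.0, lam: float = 0.95) -> torch.Tensor:
    """Generalized Advantage Estimation over one response of length T.
    rewards: per-token pseudo-rewards (plus any KL penalty), shape (T,)
    values:  value estimates V(s_t), shape (T + 1,) with a terminal bootstrap of 0."""
    T = rewards.shape[0]
    adv = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

def ppo_clipped_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
                     advantages: torch.Tensor, clip_eps: float = 0.2) -> torch.Tensor:
    """Token-level PPO clipped objective (returned as a loss to minimize)."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage: a 6-token response whose middle span was marked negative and
# whose final token was marked positive by the reward model.
rewards = torch.tensor([0.0, 0.0, -1.0, -1.0, 0.0, 1.0])
values = torch.zeros(7)                      # V(s_t) for t = 0..T (terminal = 0)
adv = gae_advantages(rewards, values)
logp_old = torch.full((6,), -2.0)
logp_new = torch.full((6,), -1.9, requires_grad=True)
loss = ppo_clipped_loss(logp_new, logp_old, adv)
loss.backward()
```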
1 Empowering Intelligent Low-altitude Economy with Large AI Model Deployment Zhonghao Lyu, Member, IEEE , Yulan Gao, Member, IEEE , Junting Chen, Member, IEEE , Hongyang Du, Member, IEEE , Jie Xu, Fellow, IEEE , Kaibin Huang, Fellow, IEEE , and Dong In Kim, Life Fellow, IEEE Abstract —Low-altitude economy (LAE) represents an emerg- ing economic paradigm that redefines commercial and social aerial activities. Large artificial intelligence models (LAIMs) offer transformative potential to further enhance the intelli- gence of LAE services. However, deploying LAIMs in LAE poses several challenges, including the significant gap between their computational/storage demands and the limited onboard resources of LAE entities, the mismatch between lab-trained LAIMs and dynamic physical environments, and the inefficiencies of traditional decoupled designs for sensing, communication, and computation. To address these issues, we first propose a hierarchical system architecture tailored for LAIM deployment and present representative LAE application scenarios. Next, we explore key enabling techniques that facilitate the mutual co- evolution of LAIMs and low-altitude systems, and introduce a task-oriented execution pipeline for scalable and adaptive service delivery. Then, the proposed framework is validated through real- world case studies. Finally, we outline open challenges to inspire future research. Index Terms —Low-altitude economy, large AI model (LAIM), real-world implementation. I. I NTRODUCTION The low-altitude economy (LAE) is rapidly emerging as a critical engine of global industrial innovation and economic growth. As a foundational element of next-generation dig- ital infrastructure, LAE is enabling transformative services in logistics, transportation, and public safety. For instance, China’s LAE market is expected to exceed 1 trillion RMB by 2026 with a 33.8 %annual growth rate. Germany has investigated over 13 billion USD to air mobility since 2019, while Amazon’s Prime Air MK30 drones have been designed for corss-country package delivery in 2024 [1]. These global developments highlight the urgent need for scalable, intelli- gent, and secure low-altitude systems. In parallel, the rise of large artificial intelligence models (LAIMs) has redefined the capabilities of machine percep- tion, reasoning, and decision-making beyond traditional AI Z. Lyu and Y . Gao are with the Department of Information Science and En- gineering, KTH Royal Institute of Technology, Sweden (e-mail: lzhon@kth.se, yulang@kth.se). J. Chen and J. Xu are with the School of Science and Engineering, the Shenzhen Future Network of Intelligence Institute, and the Guangdong Provincial Key Laboratory of Future Networks of Intelligence, The Chinese University of Hong Kong (Shenzhen), China (e-mail: juntingc@cuhk.edu.cn, xujie@cuhk.edu.cn). H. Du and K. Huang are with the Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong SAR, China (e-mail: duhy@eee.hku.hk, huangkb@hku.hk). D. I. Kim is with the Department of Electrical and Computer Engineering, Sungkyunkwan University, South Korea (email: dongin@skku.edu).systems, offering a powerful foundation for intelligent LAE services. Their strong generalization capabilities and cross- domain adaptability make them well-suited for various tasks, such as aerial monitoring [2], robust communication [3], [4], and multi-agent coordination [5]. 
Despite their promise, deploying LAIMs in LAE introduces several challenges: 1) Resource-aware deployment: Generally, the massive model size and high computation intensity far exceed the onboard battery, storage, and processing capabilities of intelligent | https://arxiv.org/abs/2505.22343v1 |
aerial agents (IAAs) and ground nodes. This makes direct deployment of full-scale LAIMs infeasible. To tackle this issue, recent efforts have explored scalable LAIM deployment strategies, including efficient pre-training and fine-tuning [6], joint task-offloading and resource management [7], and lightweight model inference [8]. Moreover, full-scale LAIMs can be hosted on edge/cloud servers with sufficient computing power, while IAAs interact with them for prediction and inference via prompt engineering [9]. 2) Environment-aware adaptation: Low-altitude system design must adapt to highly complex and dynamic real-world environments. This includes handling three-dimensional (3D) airspace geometry, operation constraints, electromagnetic interference, and seamless integration with terrestrial and satellite networks. Conventional offline designs built on idealized simulation environments fail to capture such dynamics, necessitating environment-aware IAA deployment and real-time resource management [10]. Furthermore, the performance of LAIMs pre-trained on general-purpose datasets may degrade when faced with mission-specific tasks and evolving real-world scenarios. To ensure effectiveness, LAIMs must be continuously adapted through feedback-driven refinement informed by real-world interactions and task-specific data. 3) Co-design of sensing, communication, and computation (SCC): To meet heterogeneous quality-of-service (QoS) requirements in LAE, it is essential to co-design SCC in a task-oriented paradigm. Traditional decoupled designs often lead to resource inefficiencies, as the three functionalities are often closely coupled and may compete for constrained resources in low-altitude systems [11]. A task-oriented integrated SCC (ISCC) framework could ensure task-relevant sensing, semantic-aware transmission, and adaptive computation tailored to specific LAIM tasks. These challenges call for unified system architectures, environment-aware system design methodologies, and real-world prototypes to enable efficient deployment of LAIMs to empower intelligent LAE services, which are still under-explored in the existing literature. (Figure 1. Architecture and application scenarios of LAIM-empowered intelligent LAE services.) Motivated by this gap, this paper investigates the integration of LAIMs into low-altitude systems. The main contributions are summarized as follows: •First, we propose a hierarchical system architecture for deploying LAIMs in low-altitude systems, and classify representative LAE application scenarios. Also, open issues are outlined to motivate future research. •Second, we explore the mutual benefits between LAIMs and LAE, including key enabling techniques and real-world interaction loops that continuously refine LAIM performance. Moreover, we propose a task-oriented execution pipeline for LAIM-empowered LAE. •Finally, we present real-world case studies, i.e., LAIM-guided uncrewed aerial vehicle (UAV) placement and resource-aware LAIM deployment across air-ground platforms. These studies are grounded in real-world data and testbeds, demonstrating the feasibility of our framework under practical low-altitude environments. II. SYSTEM ARCHITECTURE AND APPLICATION SCENARIOS OF LAIM-EMPOWERED LAE LAIM-empowered LAE requires a fundamental shift in architectural design for intelligent service delivery. | https://arxiv.org/abs/2505.22343v1 |
Unlike traditional mobile edge computing (MEC) systems that rely on static terrestrial infrastructure, LAE necessitates an aerial-native and mission-adaptive system architecture capable of supporting distributed AI execution across dynamic 3D network topologies with fast-moving IAAs. To address these unique demands, we propose a three-tier hierarchical framework, as shown in Fig. 1, consisting of a ground layer, an aerial layer, and a cloud layer, which enables scalable and | https://arxiv.org/abs/2505.22343v1 |
adaptive LAIM-empowered intelligence across LAE services. A. Low-altitude System Architecture with LAIM Integration The ground layer comprises heterogeneous terrestrial nodes, such as sensors, mobile user equipments, base stations (BSs), and edge servers, which are responsible for ubiquitous data collection and preliminary processing. While sensors and mo- bile user equipments are typically constrained by limited bat- tery, memory, and processing resources, making them suitable only for lightweight LAIM components like ultra-compact encoders for early-stage feature extraction, BSs and edge servers offer stronger processing and communication capabili- ties. This hierarchical architecture enables the transmission of semantically compressed embeddings or task-relevant raw data to higher-tier nodes, reducing bandwidth consumption while preserving task-critical information. The aerial layer consists of IAAs, such as UA Vs, which serve as multifunctional entities integrating sensing, comput- ing, communication, and service execution. Unlike traditional ground infrastructures, IAAs offer a unique aerial perspective, enabling the capture of comprehensive 3D environmental views and situational awareness that is difficult to obtain from the ground. Their high mobility allows for rapid repo- sitioning and agile adaptation, supporting flexible network 3 topology reconfiguration, on-demand coverage extension, and load balancing in response to dynamic service demands or environmental changes. For instance, in logistics or disaster response scenarios, IAAs act as aerial users to simultaneously perceive the environment, process data in real time, and maintain reliable low-latency communications with centralized controllers. When deployed as aerial BSs, they extend network coverage in hotspots or emergency zones, improving connec- tivity and service continuity. Each IAA is embedded with segmented LAIM modules and edge computing capabilities to execute latency-sensitive tasks locally, while offloading compute-intensive tasks to the cloud for collaborative process- ing. Moreover, in cooperative tasks, IAAs form flying ad hoc networks through dynamic routing, adaptive beamforming, and mesh reconfiguration, offering robust inter-agent connectivity and resilience under high mobility, interference, or partial network failure, far beyond what static ground infrastructures can achieve. The cloud layer consists of centralized infrastructure, in- cluding terrestrial BSs, cloud data centers, and low-orbit satel- lites. It acts as the global coordinator for LAIM-empowered LAE services, supporting distributed model training, inference, and cross-layer resource management. It typically hosts var- ious full-scale LAIMs and partitions their execution across different layers based on environment-aware and task-specific QoS requirements, such as task urgency, channel conditions, and resource availability. This hierarchical orchestration en- ables flexible LAIM deployment for large-scale LAE applica- tions in real-world. B. Typical Application Scenarios Typical LAE applications can be broadly categorized into three classes based on their dominant functional demands, i.e., communication-centric, sensing-centric, and computation- centric scenarios. 1) Communication-centric Applications: The primary chal- lenge is to achieve intelligent, reliable, and low-latency communication in dynamic and resource-limited low-altitude environments. 
Unlike traditional UA V-assisted communica- tion networks, LAIM-enabled IAAs not only serve as aerial BSs, extending wireless coverage in disaster zones, tempo- rary hotspots, or infrastructure-limited rural regions, but also perform real-time scheduling and dynamic spectrum access. Moreover, IAAs can form self-organized mesh networks be- yond static relaying, to support robust multi-hop communica- tions, ensuring continuity between mobile agents and discon- nected infrastructure segments. Additionally, IAAs leverage multi-sensor fusion | https://arxiv.org/abs/2505.22343v1 |
for precise wireless channel measurement and modeling, enabling predictive channel estimation, 3D beamforming, and interference mitigation, thereby providing a resilient communication backbone for LAE networks. 2) Sensing-centric Applications: Sensing-centric scenarios focus on air-ground collaborative and high-resolution environ- mental awareness, achieving by the enhanced perception ca- pabilities of IAAs equipped with multi-modal sensors, such as LiDAR, cameras, and millimeter-wave radar. Unlike traditional UA Vs, LAIM-empowered IAAs are capable of dynamic sens- ing adaptation, which autonomously modify sensing modesand trajectories in response to real-time environmental feed- back. Representative use cases include precision agriculture, urban pollution assessment, and disaster response, where pe- riodic scanning, wide-area coverage, and multi-angle imaging are essential to detect crop stress, evaluate air quality, or rescue survivors in post-disaster areas. Another critical application is obstacle detection and avoidance during aerial naviga- tion for transportation and delivery missions, particularly in low-visibility or rapidly changing environments, where IAAs continuously observe their surroundings to identify potential hazards such as trees, power lines, or other flying objects, enabling timely and safe trajectory adjustments. 3) Computation-centric Applications: Computation-centric applications position LAE entities (i.e., ground, aerial, and cloud nodes) as distributed computation infrastructures, sup- porting a wide range of AI-driven tasks. Specifically, for delay- sensitive tasks, such as anomaly detection in smart grids or real-time UA V path replanning, IAAs act as mobile edge computing nodes, by processing data close to the source for fast computation responses. Beyond real-time services, LAE also supports computation-intensive applications that require large-scale data analysis, predictive reasoning, and global optimization. Examples include low-altitude channel and envi- ronment map reconstruction, traffic forecasting, and airspace coordination through system-level optimization. Nevertheless, continuous LAIM fine-tuning via real-world interactions is necessary for narrowing the performance gap between lab training and field deployment. To summarize, Fig. 1 presents representative use cases of LAIM-empowered LAE, including smart manufacturing, smart grid, intelligent vehicular networks, smart logistics, and preci- sion agriculture, while highlighting the corresponding sensing, communication, and computation functionalities involved in each scenario. III. M UTUAL BENEFITS OF DEPLOYING LAIM S IN LAE This section discusses the mutual benefits of integrating LAIMs into LAE, and presents a task-oriented execution pipeline to facilitate this integration. A. Key Enabling Techniques for LAIM-empowered LAE Deploying LAIMs in dynamic and resource-constrained low-altitude environments requires tight integration of SCC functionalities to ensure real-time, mission-adaptive intelli- gence. We identify three core enabling techniques that jointly empower LAIM deployment: integrated sensing and commu- nication (ISAC), integrated communication and computation (ICC), and ISCC. 1) ISAC for LAIM-empowered LAE: In low-altitude sys- tems, multi-modal sensors are widely deployed on both IAAs and ground nodes. ISAC frameworks enable these entities to use shared spectrum and hardware resources for both environmental sensing and data transmission by leveraging dual-purpose waveform design [12]. 
This design paradigm is especially beneficial in low-altitude scenarios where spectrum availability and hardware capacity are constrained. Beyond resource reuse, ISAC enables environment-aware channel modeling and real-time channel state information (CSI) acquisition by jointly estimating the channel parameters and perceiving the physical environment [13]. (Figure 2. Execution pipeline for LAIM-empowered LAE services.) Moreover, network-level ISAC further enables | https://arxiv.org/abs/2505.22343v1 |
adaptive 3D beamforming, allowing IAAs and edge nodes to sustain high-quality communication links and perform high-precision sensing collaboratively, even in highly dynamic or low-visibility environments. 2) ICC for LAIM-empowered LAE: ICC focuses on the co- design of communication and computation to ensure efficient transmission and processing of task-relevant information with reduced latency. Two foundational enablers of ICC in LAIM- empowered LAE are semantic communications and over-the- air computation (AirComp). Semantic communications aim to transmit only compressed task-relevant features or intermedi- ate inference results instead of raw data. For example, [14] pro- posed an interest-based image segment transmission method, which can help to extract only the relevant part of UA V-sensed images to reduce bandwidth consumption and latency. Air- Comp exploits the superposition property of wireless multiple- access channels to enable simultaneous communication and functional aggregation of distributed data. This is particularly advantageous for collaborative LAE tasks needing efficient information aggregation, such as UA V swarming, distributed sensing, and consensus building. Together, these techniques enable a compute-while-communicate paradigm, alleviating communication bottlenecks and enhancing the responsiveness of latency-sensitive LAIM-empowered services in resource- constrained low-altitude environments. 3) ISCC for LAIM-empowered LAE: ISCC represents an advanced convergence of ISAC and ICC, enabling the task- driven co-optimization of SCC resources. Traditionally de- signed in isolation, these functions are inherently coupled and exhibit complex relationships in LAIM-empowered low- altitude systems, from collaboration to competition over lim- ited resources. The overall service quality depends on their joint effectiveness, but their contributions to task performanceare uneven. Therefore, their individual optimization, e.g., maximizing sensing resolution or transmission rate, can lead to degraded task performance and inefficiencies. To address it, a task-oriented ISCC design paradigm is essential, which co-optimizes SCC for mission-level goals such as LAIM training/inference accuracy, latency, energy efficiency, and reliable primary and secondary control. In this paradigm, sensing prioritizes acquiring task-relevant in- formation, communication focuses on transmitting semantic features or intermediate results, and computation is adaptively offloaded across heterogeneous entities aligned with task- specific QoS demands [15]. Additionally, jointly optimiz- ing IAA positioning with SCC resource allocation further enhances system flexibility and supports adaptation to the dynamic low-altitude environments. •ISCC for distributed LAIM training in LAE: ISCC provides the architectural backbone for LAIM-based services in LAE, supporting continual model refine- ment driven by real-world feedback. Emerging tech- niques, such as parameter-efficient fine-tuning (e.g., low- rank adaptation (LoRA) and prompt-tuning), mixture- of-experts (MoE), and distributed learning (e.g., feder- ated and split learning) are particularly suitable for life- long LAIM adaptation and naturally aligned with ISCC principles in low-altitude environments. For example, to fine-tune a pre-trained LAIM for traffic monitoring, split learning can be used to distribute model segments and training tasks across ground devices, IAAs, and the cloud, leveraging heterogeneous computation resources. 
Under this setup, only gradient/model information needs to be exchanged, reducing communication overhead and enhancing privacy compared to transmitting raw training data. LoRA can accelerate this fine-tuning process, while federated learning enables semantic heads to generalize across multi-modal and multi-task settings via periodic 5 model aggregation. Throughout this process, a task-aware ISCC framework coordinates model partitioning, node selection, | https://arxiv.org/abs/2505.22343v1 |
resource allocation, and IAA positioning to accelerate the convergence speed. •ISCC for efficient LAIM inference in LAE: ISCC also supports efficient inference by enabling flexible parti- tioning and execution of LAIMs across ground, aerial, and cloud nodes based on real-time QoS requirements, which is commonly referred to as split or co-inference. Enhanced by expert-path routing, model components such as attention heads or decision layers, can be selectively executed depending on the capabilities of each node. Moreover, to enable lightweight deployment for fast inference, various model compression techniques, such as pruning, quantization, sparse activation, and knowledge distillation are applied. However, these must be guided by QoS-aware strategies and theoretical performance guar- antees to ensure a balance among inference accuracy, latency, and energy consumption. For example, high- reasoning-demand tasks (e.g., multi-view 3D scene re- construction) may require retaining more model capacity, even at higher energy and delay costs. ISCC coordinates model compression, task offloading, IAA placement, and resource allocation, enabling resource-efficient LAIM in- ference tailored to evolving service demands in dynamic low-altitude environments. B. LAE for Continual LAIM Evolution While LAIMs serve as powerful enablers of intelligent LAE services, the relationship between them is inherently bidirectional. LAE is not merely a consumer of LAIMs, it could also help to advance LAIMs. Unlike static datasets or simulation environments, low-altitude systems continuously generate vast volumes of real-world, multi-modal, spatio- temporal data under diverse physical constraints, enabling task-oriented fine-tuning of pre-trained LAIMs. 1) Real-World Data for Robust Representation Learning: The multi-modal and spatially distributed sensing data pro- duced in low-altitude environments often contain rich contex- tual correlations, varying noise patterns, and real-world out- liers. These characteristics are critical for learning robust and generalizable representations. Continuous exposure to such diverse data fosters improved feature extraction, contextual adaptation, and multi-modal data fusion, allowing LAIMs to refine digital twin-based world models and semantic knowl- edge bases. 2) Reinforcement Learning (RL) in Real-world Loops: LAE offers unique training environments for RL-type LAIMs, where intelligent agents interact directly with the physical world, execute tasks, observe feedback, and iteratively im- prove policies. The complexity of LAE tasks introduces high- dimensional state-action spaces and diverse reward mecha- nisms, which are ideal for training scalable RL architectures. These systems can be further enhanced using human-in- the-loop feedback, such as expert demonstrations, to guide policy optimization. Through continual interaction with real- world environments, LAIMs can develop reusable policies that generalize to unseen conditions.3) Multi-Task LAIM Adaptation: The heterogeneity of LAE tasks necessitates the deployment of multi-task LAIMs for cross-domain reasoning. A single IAA may be expected to perform tasks from visual inspection to path planning, which pose significant challenges to traditional single-task models. To meet these demands, LAIMs are evolving toward rapid multi-task adaptation, adopting parameter-efficient designs, such as shared multi-head attention, cross-modal encoders, and task-specific output modules. 
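As one concrete illustration of the parameter-efficient designs mentioned above, the sketch below adds a LoRA-style low-rank adapter to a frozen linear layer; the class name, rank, and layer dimensions are assumptions for illustration rather than a prescribed LAIM architecture.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B (A x)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # keep the pre-trained weight frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T) @ self.lora_b.T

# Only the two low-rank factors are trainable and need to be exchanged during fine-tuning.
layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 512 = 8192 parameters, vs. 262,656 in the frozen base layer

Because only a few thousand parameters per adapted layer are trainable, an IAA or ground node can store and exchange the adapter updates cheaply, which is what makes such designs attractive for split and federated fine-tuning over bandwidth-limited air-ground links.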
Moreover, low-altitude systems inherently support continual LAIM learning, where task fail- ures naturally trigger on-the-fly adjustment and retraining. C. Execution Pipeline The execution of LAIM-empowered LAE services follows a task-oriented pipeline comprising both offline and online stages, which is summarized as follows. 1) Offline System Initialization: The pipeline | https://arxiv.org/abs/2505.22343v1 |
begins with an offline initialization phase grounded in a digital twin or virtual world model. At this stage, IAA positioning, resource alloca- tion, cooperation strategies, and task scheduling are jointly optimized under task-oriented objectives and QoS constraints. This optimization problem can be solved via a range of techniques, including convex/non-convex optimizations, RL, and generative AI. The outcome provides a coarse-grained initial system setups. However, due to inevitable mismatches between simulation and real-world conditions, this offline plan requires subsequent refinement through online adaptation. 2) Multi-modal Data Sensing: Once deployed, LAE nodes begin collecting environmental data using multi-modal sensors mounted on both ground terminals and IAAs. The sensing process is task-oriented, employing spatially selective and temporally adaptive strategies, such as continuous or event- triggered sampling, based on region-of-interest characteristics and mission requirements. The collected data, ranging from raw sensing data to semantic features, form the foundation for downstream LAIM inference and control tasks. 3) Information Transmission: To synchronize information and support cooperative intelligence, sensing and control sig- nals are transmitted across the LAE networks. This involves both horizontal exchange within each layer (e.g., among IAAs or ground terminals) to build task and environment consensus, and vertical exchange across layers (e.g., IAA-to-cloud) for collaborative task execution. Depending on task urgency, node topology, and available resources, the system dynamically chooses between bit-level and semantic-level transmission modes to improve communication efficiency. 4) Downstream Task Execution and Control: LAIM tasks are executed in a distributed and collaborative manner across LAE entities. Execution is partitioned based on task complex- ity and latency sensitivity. For example, lightweight seman- tic heads may run on ground nodes, transformer encoders on IAAs, and global decision modules on the cloud. This paradigm supports both real-time responsiveness for latency- critical operations and high-accuracy performance for compu- tationally intensive tasks requiring centralized processing. 5) Closed-loop Control Adaptation: Following task execu- tion, the system continuously monitors environmental feed- back, such as task outcomes, signal quality variations, and 6 Figure 3. Real-world implementation with the channel measurement prototype in low-altitude scenarios. unexpected disturbances. These feedback signals are used to update resource allocation, adjust IAA deployment strate- gies, and reconfigure task execution pipelines in real time. Through this closed-loop control mechanism, the system sup- ports timely performance evaluation, strategy adjustment, and LAIM refinement under dynamic and uncertain low-altitude environments. IV. C ASE STUDY We present case studies to validate the proposed framework, focusing on LAIM-guided UA V deployment and resource- aware LAIM deployment across LAE platforms. A. Real-world Implementation Details We conduct signal mapping experiments using a low- altitude channel measurement prototype over the campus of the Chinese University of Hong Kong, Shenzhen. The goal is to characterize the synchronization signal block (SSB) beam coverage from a commercial China Mobile BS operating at 4.9 GHz. 
A DJI M300 RTK UA V equipped with a radio scanner is used to collect reference signal received power (RSRP) measurements across seven SSB beams. The BS antenna features a 20° mechanical uptilt, which targets aerial coverage. The UA V flies at a fixed altitude of 98 meters above the BS, maintaining a speed | https://arxiv.org/abs/2505.22343v1 |
of 10 m/s and executing a line- by-line scanning trajectory. We scan an area of approximately 634×301 meters, with a spatial resolution of 1 meter between samples. RSRP values are sampled at 10 Hz from the PBCH-XSS RSRP metric, while RTK-based positioning ensured centimeter-level localization accuracy. The collecteddata provides a fine-grained signal quality measurements under realistic low-altitude environments as shown in Fig. 3. B. LAIM-guided UAV Deployment We evaluate the potential of LAIMs to guide UA V po- sitioning in low-altitude environments based on real-world RSRP measurements. The task objective and system setups are specified in the prompts as shown in Fig. 4. We propose a prompt-driven workflow to interact with the LAIM. The input includes the real-world RSRP values and a task description specifying the objective, system constraints, and relevant parameters. Then, the LAIM autonomously inter- prets the spatial distribution of signal strength and generates UA V coordinates along with estimated communication rates. For comparison, we consider a conventional optimization- based benchmark scheme using a location-dependent line-of- sight (LoS) channel model. The corresponding non-convex sum-rate maximization problem is solved using successive convex approximation. Fig. 4 shows that LAIM-based solutions significantly out- perform the traditional optimization-based schemes, with per- formance gaps increasing with the number of UA Vs. This is due to the mismatch between the simplified LoS channel models assumed in the traditional optimization-based scheme and the complex real-world wireless environments. Optimizing UA V placement based on idealized LoS assumptions can lead to positions that suffer from strong interference. In contrast, LAIMs can directly interpret real-world channel measure- ments, enabling more interference-aware and environment- adaptive deployment decisions. These results highlight the practicality of LAIM-driven IAA deployment in complex low- altitude environments. 7 Figure 4. LAIM-guided UA V deployment in LAE. C. Resource-aware LAIM Deployment To enable the deployment of LAIMs on resource-limited LAE platforms, we propose a pruning-aware co-inference scheme guided by real-world RSRP measurements. A pre- trained LAIM is first pruned and then partitioned into on- IAA and on-cloud sub-models for collaborative execution. To balance trade-offs among inference quality, delay, and energy consumption, we jointly optimize the pruning ratio, transmit power, and computation frequency. For each of the three objectives, i.e., maximizing inference quality, minimizing delay, and minimizing energy consumption, the remaining two QoS metrics are treated as constraints. We experiment with the BART model on the CNN/DailyMail summarization task using magnitude-based pruning. Furthermore, we compare three inference paradigms: 1) On-cloud inference, where the full LAIM is executed on the cloud server; 2) On-IAA inference, where the full inference task is performed locally on IAAs; 3) Co-inference, where the model is partitioned between IAAs and the cloud for collaborative execution. Fig. 5 shows that the co-inference paradigm consistently achieves the best performance. The result validates that the proposed design could well balance the trade-offs between LAIM inference quality, delay, and energy consumption. 
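For readers unfamiliar with the pruning step used in this case study, the toy sketch below applies magnitude-based pruning at a chosen pruning ratio before a model is partitioned for co-inference; the global-threshold strategy and the helper name are assumptions and do not reproduce the exact procedure used in the experiments.

import torch

def magnitude_prune_(model: torch.nn.Module, ratio: float) -> None:
    """Zero out the `ratio` fraction of weights with the smallest magnitude (global threshold)."""
    weights = [p for p in model.parameters() if p.dim() > 1]       # skip biases / norms
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_mags, ratio)
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())                  # in-place masking

# Example: prune 30% of a small feed-forward block before splitting it across IAA and cloud.
block = torch.nn.Sequential(torch.nn.Linear(256, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 256))
magnitude_prune_(block, ratio=0.3)
zeros = sum((p == 0).sum().item() for p in block.parameters() if p.dim() > 1)
total = sum(p.numel() for p in block.parameters() if p.dim() > 1)
print(f"achieved weight sparsity: {zeros / total:.2f}")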
Jointly optimizing the model structure and resource allocation allows for efficient LAIM deployment, tailored to low-altitude system resource availability and real-world task demands. V. CONCLUSIONS AND FUTURE DIRECTIONS By presenting a hierarchical system architecture, a task-oriented | https://arxiv.org/abs/2505.22343v1 |
execution pipeline, and real-world case studies, this work investigated efficient LAIM deployment in dynamicand resource-constrained low-altitude environments. It en- ables a new air-ground intelligence paradigm for LAE, where real-time perception, high-quality communication, adaptive decision-making, and continual LAIM refinement converge. This transformation unlocks broad opportunities, from smart manufacturing to disaster response, driving future digital in- novation and inspiring further research, as outlined below. 1) Fusion of Airspace Knowledge Graphs and LAIMs: En- hancing airspace intelligence in LAE requires reconciling strict regulatory constraints with the operational flexibility needed by LAIM-driven IAAs. Traditional knowledge graph (KG)- based systems offer structured representations of airspace rules. However, they struggle to adapt to rapid environment dynamics, limiting their responsiveness in tasks like obsta- cle avoidance. Meanwhile, current LAIMs lack the domain- specific reasoning necessary to accurately interpret complex airspace regulations. This gap leads to unreliable or non- compliant behaviors in high-risk scenarios. Bridging this gap demands new frameworks that fuse real-time sensing with KG- grounded LAIM reasoning, empowering UA Vs to understand and act on complex regulations. 2) Multi-agent Gaming: Cooperative decision-making in multi-agent LAE systems remains an open challenge, con- strained by conflicting commercial interests, data confiden- tiality, and regulatory requirements. In competitive airspace scenarios, agents, such as IAAs from different companies, must make decisions under partial observability and strategic uncertainty. However, data-sharing is often limited by pri- vacy or commercial concerns. These issues create a natural setting for game-theoretic interactions, where coordination and competition co-exist. Moreover, execution delays and the 8 Figure 5. Resource-aware LAIM deployment across LAE platforms. opacity of LAIM-driven decisions further hinder synchroniza- tion and regulatory trust. Advancing multi-agent collaboration requires new mechanisms that balance distributed autonomy with global policy compliance under strategic constraints. 3) Sustainable LAIM Deployment in LAE: The growing computational needs of LAIMs significantly exceed the limited and unstable energy resources available to aerial platforms. Wireless power transfer (WPT) from terrestrial nodes may not work because of its much lower end-to-end efficiency for long-range scenarios. Instead, the satellite-based long- range WPT would be feasible, given that a large-scale array antenna can deliver the solar power harvested at the low earth orbit (LEO)/geostationary equatorial orbit (GEO) stations. Furthermore, the mismatch between intermittent renewable energy availability and the continuous power demands of AI workloads calls for energy-aware model design, task-adaptive execution, and robust power management strategies. 4) Open Ecosystem Development: Establishing an open and collaborative ecosystem for intelligent LAE services is significantly challenging, due to the gap between proprietary technologies and the need for industry-wide interoperability. Incompatible communication protocols across manufacturers hinder seamless coordination among heterogeneous devices. Meanwhile, the rapid evolution of AI models often outpaces the hardware upgrade cycles, leading to compatibility issues. 
Additionally, data silos driven by commercial interests restrict knowledge sharing and collective progress. Future LAE systems should embrace open standards and architectures that promote both technical interoperability and ecosystem-wide innovation. 5) Security Issues: IAAs with limited operational power are more likely to be exposed to intentional jamming attacks, because of the low-power transmissions used for uploading sensor data and for the backhaul links to terrestrial BSs. Consequently, it is important to | https://arxiv.org/abs/2505.22343v1 |
ensure secure and reliable operation againstjamming attacks, where the primary and secondary control links should be designed in a layered and integrated manner under the proposed hierarchical network infrastructure and task-oriented ISCC design paradigm. 6) Standardization Alignment: As the LAE increasingly relies on IAAs and non-terrestrial networks (NTNs), align- ment with ongoing 3GPP standardization efforts becomes critical. Future sixth-generation (6G) of wireless systems are expected to natively support integrated terrestrial, aerial, and satellite communication. This requires unified architecture designs, interoperability protocols, and standardized interfaces for heterogeneous LAE-NTN deployments. Key challenges in- clude delay-tolerant protocols for high-altitude relays, dynamic spectrum access, and service continuity across multi-layered infrastructures. REFERENCES [1] Y . Yang, Y . Chen, J. Wang, G. Sun, and D. Niyato, “Embodied AI-empowered low altitude economy: Integrated sensing, communi- cations, computation, and control (ISC3),” 2024. [Online]. Available: https://arxiv.org/pdf/2412.19996 [2] J. Wang, H. Du, Y . Liu, G. Sun, D. Niyato, S. Mao, D. I. Kim, and X. Shen, “Generative AI based secure wireless sensing for ISAC Networks,” IEEE Trans. Inf. Forensics Secur. , early access. [3] S. Javaid, R. Khalil, N. Saeed, B. He, and M. Alouini, “Leveraging large language models for integrated satellite-aerial-terrestrial networks: Recent advances and future directions,” IEEE Open J. Commun. Soc. , vol. 6, pp. 399-432, 2025. [4] C. Zhao, J. Wang, R. Zhang, D. Niyato, G. Sun, H. Du, D. Kim, and A. Jamalipour, “Generative AI-enabled wireless communications for robust low-altitude economy networking,” 2025. [Online]. Available: https://arxiv.org/pdf/2502.18118 [5] L. Zhou, W. Feng, Z. Chen, T. Ruan, S. Leng, H. Yang, Y . Fu, and T. Quek, “Cooperative generative AI for UA V-based scenarios: An intelligent cooperative framework,” IEEE Veh. Technol. Mag. , early access. [6] Z. Lyu, Y . Li, G. Zhu, J. Xu, H. Vincent Poor, and S. Cui, “Rethinking resource management in edge learning: A joint pre-training and fine- tuning design paradigm,” IEEE Trans. Wireless Commun. , vol. 24, no. 2, pp. 1584-1601, Feb. 2025. [7] Y . Gao, Z. Ye, and H. Yu, “Cost-efficient computation offloading in SAGIN: A deep reinforcement learning and perception-aided approach,” IEEE J. Sel. Areas Commun. , vol. 42, no. 12, pp. 3462-3476, Dec. 2024. [8] Z. Lyu, M. Xiao, J. Xu, M. Skoglund, and M. Renzo, “The larger the merrier? Efficient large AI model inference in wireless edge networks,” 2025. [Online]. Available: https://arxiv.org/pdf/2505.09214 [9] H. Li, M. Xiao, K. Wang, D. I. Kim, and M. Debbah, “Large language model based multi-objective optimization for integrated sensing and communications in UA V networks,” IEEE Wireless Commun. Letters , vol. 14, pp. 979-983, Apr. 2025. [10] Y . Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Gao, D. Gesbert, S. Cui, and R. Zhang, “A tutorial on environment-aware communications via channel knowledge map for 6G,” IEEE Commun. Surv. Tutor , vol. 26, no. 3, pp. 1478-1519, thirdquarter 2024. [11] D. Wen, Y . Zhou, X. Li, Y . Shi, K. Huang, and K. B. Letaief, “A survey on integrated sensing, communication, and computation,” IEEE Commun. Surv. Tutor , early access. [12] Z. Lyu, G. Zhu, and J. Xu, | https://arxiv.org/abs/2505.22343v1 |
“Joint maneuver and beamforming design for UA V-enabled integrated sensing and communication,” IEEE Trans. Wireless Commun. , vol. 22, no. 4, pp. 2424-2440, Apr. 2023. [13] W. Liu and J. Chen, “UA V-aided radio map construction exploiting environment semantics,” IEEE Trans. Wireless Commun. , vol. 22, no. 9, pp. 6341-6355, Sept. 2023. [14] Y . Cheng, D. Niyato, H. Du, C. Miao, and D. I. Kim, “Goal-oriented semantic communication with message-sharing NOMA for metaverse,” IEEE Wireless Commun. , vol. 31, no. 3, pp. 309 - 315, Jun. 2024. [15] G. Zhu, Z. Lyu, X. Jiao, P. Liu, M. Chen, J. Xu, S. Cui, and P. Zhang, “Pushing AI to wireless network edge: An overview on integrated sensing, communication, and computation towards 6G,” Sci. China Inf. Sci. , vol. 66, no. 130301, pp. 1-19, Feb. 2023. | https://arxiv.org/abs/2505.22343v1 |
arXiv:2505.22349v1 [cs.DB] 28 May 2025ChatPD: An LLM-driven Paper-Dataset Networking System Anjie Xu Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education Beijing, China School of Computer Science, Peking University Beijing, China anjiexu@stu.pku.edu.cnRuiqing Ding Key Laboratory of Process Optimization and Intelligent Decision-making, Ministry of Education Anhui, China School of Management, Hefei University of Technology Anhui, China ruiqingding@hfut.edu.cnLeye Wang∗ Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education Beijing, China School of Computer Science, Peking University Beijing, China leyewang@pku.edu.cn ABSTRACT Scientific research heavily depends on suitable datasets for method validation, but existing academic platforms with dataset manage- ment like PapersWithCode suffer from inefficiencies in their manual workflow. To overcome this bottleneck, we present a system, called ChatPD, that utilizes Large Language Models (LLMs) to automate dataset information extraction from academic papers and construct a structured paper-dataset network. Our system consists of three key modules: paper collection ,dataset information extraction , and dataset entity resolution to construct paper-dataset networks. Specif- ically, we propose a Graph Completion and Inference strategy to map dataset descriptions to their corresponding entities. Through extensive experiments, we demonstrate that ChatPD not only out- performs the existing platform PapersWithCode in dataset usage extraction but also achieves about 90% precision and recall in entity resolution tasks. Moreover, we have deployed ChatPD to contin- uously extract which datasets are used in papers, and provide a dataset discovery service, such as task-specific dataset queries and similar dataset recommendations. We open source ChatPD and the current paper-dataset network on this GitHub repository. CCS CONCEPTS •Information systems →Data management systems ;Digital libraries and archives . KEYWORDS Dataset Discovery; Information Extraction; Entity Resolution ACM Reference Format: Anjie Xu, Ruiqing Ding, and Leye Wang. 2025. ChatPD: An LLM-driven Paper-Dataset Networking System. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (KDD ’25), August ∗Corresponding author. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. KDD ’25, August 3–7, 2025, Toronto, ON, Canada ©2025 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-1454-2/2025/08. . . $15.00 https://doi.org/10.1145/3711896.37372023–7, 2025, Toronto, ON, Canada. ACM, New York, NY, USA, 11 pages. https: //doi.org/10.1145/3711896.3737202 1 INTRODUCTION In scientific research, datasets play a crucial role in method valida- tion, model training, and result evaluation. 
Currently, research in many fields relies heavily on datasets, such as disease prediction in the medical field [18] and climate forecasting in meteorology [44]. These studies often involve diverse datasets that span multiple disciplines. While researchers may be familiar with datasets within their own field, they may | https://arxiv.org/abs/2505.22349v1 |
not be aware of datasets from other disciplines that could be beneficial to their research, which makes the process of finding a suitable dataset challenging. Despite the maturity of information retrieval technologies in the text domain, numerous challenges persist in the realm of data retrieval [19, 26]. These challenges stem from the inherent complexity of datasets, which often come in diverse formats such as images, videos, and structured tables, making traditional text-based retrieval methods insufficient. Moreover, the lack of comprehensive and standardized metadata provided by data publishers creates further barriers to locating relevant datasets. As a result, users frequently struggle to find suitable datasets for their research [8]. To make dataset discovery more efficient, we aim to profile the usage of datasets in academic papers and construct a structured paper-dataset network. This network can provide a better understanding of dataset impact, foster reproducibility, and improve dataset discoverability for future research. While some academic platforms, like Google Dataset Search [7] and PapersWithCode (PwC) [39], have made progress in linking papers to datasets, they still rely heavily on manual annotation or rule-based methods, which are time-consuming and error-prone. Furthermore, existing methods frequently fail to capture fine-grained dataset attributes critical to researchers – including data types, size, and specific usage contexts – limiting their utility in comprehensive research analysis. In this paper, we introduce ChatPD, a novel system that leverages Large Language Models (LLMs) to automate the construction of a Paper-Dataset network. We design a dataset information template based on aspects that researchers usually focus on when studying datasets [25], and incorporate LLMs to analyze academic papers and extract dataset-related information. While LLMs generate large amounts of textual output at low cost, to integrate this data with existing academic platforms like PwC, we develop an algorithm based on graph completion and inference to map textual descriptions of datasets to the corresponding dataset entities in the dataset database, tailored to the characteristics of our data. Through our system, we obtain a high-quality paper-dataset network with rich metadata information about datasets, which can be used for dataset discovery and recommendation. Finally, we deploy ChatPD as a practical dataset discovery service on https://chatpd-web.github.io/chatpd-web, supporting regular construction of AI-related paper-dataset networks on arXiv. In summary, our work has the following contributions: 1. We propose ChatPD, an LLM-driven system designed to automatically construct a paper-dataset network. The system is deployed as an online service that supports dataset-related queries, recommendations, and additional functionalities. 2. We comprehensively evaluate the reliability of ChatPD from the perspective of dataset information extraction and entity resolution. For dataset information extraction, ChatPD achieves a precision of ∼0.99, significantly surpassing PwC's result of ∼0.83. In entity resolution, ChatPD attains an F1 score of ∼0.88, outperforming state-of-the-art entity resolution algorithms [28, 59], which achieve only ∼0.68. 3. | https://arxiv.org/abs/2505.22349v1 |
By collecting papers on arXiv cs.AI from 2018 to 2024, we have built a continuously evolving | https://arxiv.org/abs/2505.22349v1 |
paper-dataset network, which currently includes 60,126 papers, 4,224 dataset entities, and 137,004 paper-dataset usage records. Notably, the network constructed by ChatPD includes 444 new datasets not covered in PwC, demonstrat- ing the superiority of its automated dataset collection strategy over the manual annotation-based approach employed by PwC. We open source ChatPD and the collected paper-dataset network on GitHub: https://github.com/ChatPD-web/ChatPD . 2 BACKGROUND AND RELATED WORK Constructing a network that connects papers and datasets to facil- itate dataset discovery poses two primary challenges. Firstly, we need to extract pertinent information from scholarly articles. Sec- ondly, given that different papers may refer to the same dataset using diverse names, we are required to perform entity resolution. This process involves mapping varying dataset descriptions to their appropriate entities, enhancing the network’s quality. 2.1 Dataset Discovery Dataset discovery is the process of locating, examining, and access- ing relevant and valuable datasets for analysis, research, or other purposes. The retrieval systems for datasets usually rely on the con- text provided by dataset publishers [ 8]. Kern et al. [ 24] point out the pivotal role of metadata in the discovery of datasets. Following this idea, various studies have contributed to the development of dataset summaries and metadata to enhance dataset retrieval [20, 25, 57]. Various platforms have been developed to facilitate dataset dis- covery. Google Dataset Search [ 7] employs an automated approach, crawling dataset metadata from the web and aggregating meta- data from various sources, to provide a comprehensive dataset search engine. However, this search engine primarily reflects the perspectives of data publishers, potentially omitting the real-worldapplication of datasets. DataCite [ 42] assigns Digital Object Iden- tifiers (DOIs) to datasets, improving their citability and accessi- bility. PapersWithCode (PwC) [ 39] bridges academic publications with their associated code and datasets, fostering reproducibility. These platforms enhance transparency in the research ecosystem by systematically linking papers to underlying data. However, their reliance on manual annotations often results in incomplete dataset usage labels, limiting their comprehensiveness. Our work addresses the limitations of manual annotation by developing a self-evolving system that automatically extracts paper-dataset relationships from newly published papers. 2.2 Information Extraction Information Extraction (IE) is a fundamental task of identifying and converting specific details, like named entities and their relation- ships, from unstructured or semi-structured text into a structured format [ 13,29]. Traditionally, IE depends on supervised learning methods, which require a large amount of labeled data. With more weak supervision methods proposed [ 30,34], the need for anno- tation is alleviated. Recently, LLMs like GPTs [ 2] have subverted the previous modeling methods of natural language processing tasks. For the IE problem, researchers have begun to explore zero- shot or few-shot learning techniques based on LLMs as a uniform tool [ 21,31,38,58]. Our work advances this paradigm by integrat- ing LLMs to automate dataset information extraction, enhancing the scalability of detecting dataset usage in scholarly literature. 
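As a minimal sketch of how an LLM can be prompted to extract structured dataset information from a paper, in the spirit of the template-based extraction described above, the snippet below queries a chat model and parses its JSON answer; the prompt wording, field names, and model choice are illustrative assumptions rather than ChatPD's actual template.

import json
from openai import OpenAI  # assumes the official `openai` Python client is installed

PROMPT = """Extract every dataset used in the paper below.
Return a JSON list where each item has the fields:
"dataset_name", "description", "task", "data_type".
Paper text:
{paper_text}
"""

def extract_dataset_info(paper_text: str, model: str = "gpt-4o-mini") -> list[dict]:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(paper_text=paper_text)}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

# records = extract_dataset_info(open("paper.txt").read())
# Each record is a free-text dataset description; it still needs entity resolution
# (Section 2.3) before it can be linked to a canonical dataset node in the network.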
2.3 Entity Resolution Entity Resolution (ER) is the task of identifying multiple data representations of the same real-world entity and mapping them to a unified entity. Early ER methods are mainly based on distance-based | https://arxiv.org/abs/2505.22349v1 |