---
license: apache-2.0
---

# Official Repo of Reagent
Paper: https://arxiv.org/abs/2601.22154
Code: https://github.com/kxfan2002/Reagent
## Abstract
Agentic Reinforcement Learning (Agentic RL) has achieved notable success in enabling agents to perform complex reasoning and tool use.
However, most methods still rely on sparse, outcome-based rewards for training.
Such feedback fails to differentiate intermediate reasoning quality, leading to suboptimal training results.
In this paper, we introduce the **Agent Reasoning Reward Model (Agent-RRM)**, a multi-faceted reward model that produces structured feedback for agentic trajectories, including (1) an explicit reasoning trace, (2) a focused critique that provides refinement guidance by highlighting reasoning flaws, and (3) an overall score that evaluates process performance.
Leveraging these signals, we systematically investigate three integration strategies: **Reagent-C** (text-augmented refinement), **Reagent-R** (reward-augmented guidance), and **Reagent-U** (unified feedback integration).
Extensive evaluations across 12 diverse benchmarks demonstrate that Reagent-U yields substantial performance gains, achieving 43.7% on GAIA and 46.2% on WebWalkerQA, validating the effectiveness of our reasoning reward model and training schemes.
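For intuition, the sketch below illustrates one way the three Agent-RRM feedback signals and the three integration strategies could fit together. All names, the `alpha` blending weight, and the combination logic are illustrative assumptions for this README, not the repo's actual API; see the paper and code for the real implementation.
```python
from dataclasses import dataclass

# Hypothetical sketch of the structured feedback described in the abstract;
# field and function names are illustrative, not the repo's API.

@dataclass
class RRMFeedback:
    reasoning_trace: str  # (1) explicit reasoning trace over the trajectory
    critique: str         # (2) focused critique highlighting reasoning flaws
    score: float          # (3) overall score of process performance

def reagent_c(trajectory: str, fb: RRMFeedback) -> str:
    """Reagent-C (text-augmented refinement): feed the critique back as
    textual context so the agent can refine its trajectory."""
    return trajectory + "\n[Critique]\n" + fb.critique

def reagent_r(outcome_reward: float, fb: RRMFeedback, alpha: float = 0.5) -> float:
    """Reagent-R (reward-augmented guidance): blend the sparse outcome
    reward with the RRM's process score (alpha is an assumed weight)."""
    return outcome_reward + alpha * fb.score

def reagent_u(trajectory: str, outcome_reward: float, fb: RRMFeedback):
    """Reagent-U (unified feedback integration): apply both the textual
    critique and the shaped reward."""
    return reagent_c(trajectory, fb), reagent_r(outcome_reward, fb)
```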
## GitHub Repository
The official codebase, including training and evaluation scripts for Reagent, can be found on the project's GitHub repository: https://github.com/kxfan2002/Reagent
## Citation
```bibtex
@article{fan2026exploring,
  title={Exploring Reasoning Reward Model for Agents},
  author={Fan, Kaixuan and Feng, Kaituo and Zhang, Manyuan and Peng, Tianshuo and Li, Zhixun and Jiang, Yilei and Chen, Shuang and Pei, Peng and Cai, Xunliang and Yue, Xiangyu},
  journal={arXiv preprint arXiv:2601.22154},
  year={2026}
}
```