Dataset viewer preview (schema recovered from the original table):

| audio (duration, s) | label (class label, 2 classes) |
|---|---|
| 3.02 | 0 (HeySQuAD) |
Official repository of the Reagent agentic RL training dataset (Reagent-RL-709K).
Paper: https://arxiv.org/abs/2601.22154
Abstract:
Agentic Reinforcement Learning (Agentic RL) has achieved notable success in enabling agents to perform complex reasoning and tool use. However, most methods still rely on sparse, outcome-based rewards for training. Such feedback fails to differentiate intermediate reasoning quality, leading to suboptimal training results. In this paper, we introduce the **Agent Reasoning Reward Model (Agent-RRM)**, a multi-faceted reward model that produces structured feedback for agentic trajectories, including (1) an explicit reasoning trace, (2) a focused critique that provides refinement guidance by highlighting reasoning flaws, and (3) an overall score that evaluates process performance. Leveraging these signals, we systematically investigate three integration strategies: **Reagent-C** (text-augmented refinement), **Reagent-R** (reward-augmented guidance), and **Reagent-U** (unified feedback integration). Extensive evaluations across 12 diverse benchmarks demonstrate that Reagent-U yields substantial performance gains, achieving 43.7% on GAIA and 46.2% on WebWalkerQA, validating the effectiveness of our reasoning reward model and training schemes.
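To make the three feedback components and the reward-augmented integration concrete, here is a minimal sketch only. The field names, the weight `lam`, and the additive combination below are assumptions for illustration, not the paper's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class AgentRRMFeedback:
    """Structured feedback for one agentic trajectory.

    Field names are assumed for illustration; see the paper for the real schema.
    """
    reasoning_trace: str  # (1) explicit reasoning trace over the trajectory
    critique: str         # (2) focused critique highlighting reasoning flaws
    score: float          # (3) overall process score, assumed normalized to [0, 1]


def unified_reward(outcome_reward: float, feedback: AgentRRMFeedback, lam: float = 0.5) -> float:
    """One plausible Reagent-U-style training signal: the sparse outcome reward
    augmented by a process-quality term. The additive form and the weight `lam`
    are assumptions, not the authors' published formula."""
    return outcome_reward + lam * feedback.score
```

Under this reading, Reagent-C would feed `critique` back as text for refinement, Reagent-R would use `score` to shape the reward as above, and Reagent-U would combine both.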
GitHub Repository
The official codebase, including training and evaluation scripts for Reagent, can be found on the project's GitHub repository: https://github.com/kxfan2002/Reagent
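A typical way to load a Hugging Face-hosted dataset like this one is sketched below. The repository id `kxfan2002/Reagent-RL-709K` is a hypothetical placeholder inferred from the GitHub account and dataset name; substitute the actual id shown on this card:

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# NOTE: the repo id below is an assumed placeholder, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("kxfan2002/Reagent-RL-709K", split="train")
print(ds)     # dataset summary: features and number of rows
print(ds[0])  # inspect one training record
```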
Citation
If you use this dataset, please cite the paper: https://arxiv.org/abs/2601.22154

License: Apache-2.0