In-the-Flow Agentic System Optimization for Effective Planning and Tool Use
Paper
• 2510.05592 • Published • 108
Reproduces the core idea of AgentFlow: extending single-step LLM inference into a multi-turn Planner → Executor → Verifier agent loop and applying RL signals (GRPO) to the Planner's generation trajectory. This lets the model improve its tool-use and reasoning capabilities without requiring manually annotated intermediate steps.
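GRPO's training signal is group-relative: each question is sampled several times, and every rollout's reward is normalized against its group's mean and standard deviation. A minimal sketch of that advantage computation (the function name and epsilon are illustrative, not AgentFlow code):

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages as in GRPO: normalize each rollout's
    reward by the mean and std of its sampled group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Rollouts that beat their group's average get a positive advantage, the rest a negative one, so no learned value model is needed.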
```
Input question
      │
      ▼
Planner.plan() → Analyze the problem and devise a solution strategy (loss_mask=1)
      │
      └──► for step in range(max_steps):
      │
      ├─ Planner.generate_next_step() → Select next tool and sub-goal (loss_mask=1)
      ├─ Executor.generate_tool_command()
      │      + execute_command() → Invoke tool (excluded from sequence)
      ├─ Verifier.verificate_context() → Decide whether to continue (excluded)
      ├─ Memory.add_action() → Record execution result
      │
      ▼
Planner.generate_final_output() → Summarize results and produce final answer (loss_mask=0)
      │
      ▼
Rewarder.compute_reward() → LLM-as-Judge: compare model answer with ground truth
```
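The loop above can be sketched as plain Python. This is a simplified stand-in, not the AgentFlow API: the component objects are assumed to expose the methods named in the diagram, and the loss mask is tracked per generated segment rather than per token.

```python
# Hypothetical sketch of the Planner → Executor → Verifier loop.
# Planner outputs are trainable (loss_mask=1); tool and verifier text is
# excluded from the optimized sequence, and the final summary gets loss_mask=0.

def run_agent_loop(question, planner, executor, verifier, memory, max_steps=5):
    segments, loss_mask = [], []

    def record(text, trainable):
        segments.append(text)
        loss_mask.append(1 if trainable else 0)

    record(planner.plan(question), trainable=True)
    for _ in range(max_steps):
        step = planner.generate_next_step(memory)       # next tool + sub-goal
        record(step, trainable=True)
        cmd = executor.generate_tool_command(step)
        result = executor.execute_command(cmd)          # excluded from sequence
        memory.add_action(step, result)                 # record execution result
        if not verifier.verificate_context(memory):     # decide whether to continue
            break
    record(planner.generate_final_output(memory), trainable=False)
    return segments, loss_mask
```

Only the masked Planner segments receive the GRPO gradient; everything the Executor and Verifier emit is context, not a training target.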
Tools (`tools/`)

| Tool | Description |
|---|---|
| `base_generator` | General-purpose text generation tool; answers sub-tasks directly via LLM |
| `python_coder` | Python code generation and execution tool for math computation and algorithmic problem solving |
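A minimal sketch of how `python_coder` might execute Planner-requested code, assuming a plain `exec` with captured stdout (the real tool would run generated code in a sandbox, and `base_generator` would wrap an LLM call; the registry shown here is illustrative):

```python
import contextlib
import io

def python_coder(code: str) -> str:
    """Execute generated Python and return whatever it printed.
    No sandboxing here; a real tool must isolate untrusted code."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh globals so runs don't leak state
    return buf.getvalue().strip()

# The Executor would dispatch the Planner's chosen tool through a table
# like this; base_generator is omitted because it needs a live LLM.
TOOLS = {"python_coder": python_coder}
```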
| Model | Dataset | Baseline | AgentFlow (Ours) | Improvement |
|---|---|---|---|---|
| Qwen2.5-7B-Instruct | AIME 2024 | 10.0% | 26.7% | +16.7% |
Note: Due to limited training resources, the AgentFlow model was trained for only 100 steps.
Base model: Qwen/Qwen2.5-7B