SPARK: Strategic Policy-Aware Exploration via Dynamic Branching

This model was trained using the SPARK framework proposed in the paper:

SPARK: Strategic Policy-Aware Exploration via Dynamic Branching for Long-Horizon Agentic Learning

📄 Paper: arXiv:2601.20209

Overview

SPARK is a novel reinforcement learning framework that enables autonomous strategic exploration for long-horizon agentic tasks. Instead of uniformly exploring all steps, SPARK selectively branches at critical decision points using intrinsic `<explore>` signals, achieving superior performance with significantly fewer training samples.
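
To illustrate the idea (this is a toy sketch, not the paper's actual algorithm), consider branching only at steps where the policy's action distribution is highly uncertain, so rollouts share a common prefix up to each branch point. The entropy criterion, threshold, and branching budget below are illustrative assumptions:

```python
import math

def entropy(probs):
    """Shannon entropy of an action distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_branch_points(step_action_probs, threshold=1.0, max_branches=2):
    """Return indices of 'critical' steps: high-entropy decisions.

    Instead of branching uniformly at every step, only the most uncertain
    steps are expanded, so all branches share the trajectory prefix up to
    the branch point (illustrative stand-in for SPARK's intrinsic signal).
    """
    critical = [i for i, p in enumerate(step_action_probs)
                if entropy(p) > threshold]
    # Keep only the most uncertain steps, up to the branching budget.
    critical.sort(key=lambda i: entropy(step_action_probs[i]), reverse=True)
    return sorted(critical[:max_branches])

# A 4-step trajectory: steps 1 and 3 are near-uniform (uncertain) decisions.
trajectory = [
    [0.97, 0.01, 0.01, 0.01],  # confident
    [0.25, 0.25, 0.25, 0.25],  # uncertain -> branch here
    [0.90, 0.05, 0.03, 0.02],  # confident
    [0.40, 0.30, 0.30],        # uncertain -> branch here
]
print(select_branch_points(trajectory))  # -> [1, 3]
```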

Key Features

  • 🎯 Autonomous Strategic Exploration: Dynamically identifies critical states for branching, without human priors
  • ⚡ Sample Efficient: Reaches 84.4% success using only 20% of the training data (vs. 76.6% for GRPO with 100%)
  • 💰 Token Efficient: Reduces token consumption by up to 47% through prefix sharing
  • 🌍 Strong Generalization: Maintains 80.5% success on unseen tasks, significantly outperforming GRPO

Performance Highlights

| Benchmark       | SPARK-1.5B | GPT-5 | Gemini-2.5-Pro |
|-----------------|------------|-------|----------------|
| ALFWorld L2     | 80.5%      | 63.3% | 55.5%          |
| ScienceWorld L2 | 49.2%      | 33.6% | 30.5%          |
| WebShop         | 75.8%      | 29.7% | 32.0%          |

Citation

If you use this model or the SPARK framework in your research, please cite:

@article{wu2026spark,
  title={SPARK: Strategic Policy-Aware Exploration via Dynamic Branching for Long-Horizon Agentic Learning},
  author={Wu, Jinyang and Yang, Shuo and Yang, Changpeng and Shen, Yuhao and Zhang, Shuai and Wen, Zhengqi and Tao, Jianhua},
  journal={arXiv preprint arXiv:2601.20209},
  year={2026}
}

Model Details

  • Base Model: Qwen/Qwen2.5-1.5B-Instruct
  • Training Method: SPARK (Dynamic Branching RL)
  • Training Dataset: WebShop

  • Model Size: ~2B parameters (Safetensors)
  • Tensor Type: BF16
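
The checkpoint can be loaded with the Hugging Face Transformers library. A minimal sketch; the chat-style prompt format and generation settings below are assumptions, not specified by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jinyang23/Spark-1.5B-WebShop"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# Format a WebShop-style observation as a chat turn
# (the exact prompt format is an assumption for illustration).
messages = [{"role": "user",
             "content": "Task: buy a red cotton t-shirt under $20. Observation: search page."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```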
