nielsr (HF Staff) committed
Commit 1d28eef · verified · 1 Parent(s): 08a19d3

Add model card metadata, paper link and GitHub repository


Hi! I'm Niels from the Hugging Face team.

This PR improves the model card for **Agent-STAR-RL-3B** by adding relevant metadata and documentation. It ensures the model is correctly categorized and links it to the original research paper, code repository, and training dataset.

Key changes:
- Added `pipeline_tag: text-generation`.
- Added `library_name: transformers` (verified by the model's configuration files).
- Linked the `base_model` (Qwen2.5-3B-Instruct).
- Included links to the paper and GitHub repository in the description.
- Added the BibTeX citation for proper attribution.
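The metadata above lives in the README's YAML front matter, the block between the first two `---` fences. As a rough illustration of how those fields sit in the card, here is a deliberately naive, hand-rolled parse of that block (a sketch only; real tooling such as PyYAML or `huggingface_hub`'s `ModelCard` should be used for actual model cards):

```python
# Naive front-matter reader for illustration only; use PyYAML or
# huggingface_hub's ModelCard for real model cards.
CARD = """\
---
license: mit
library_name: transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- agent
- tool-use
- reinforcement-learning
- travel-planner
---

# Agent-STAR-RL-3B
"""

def parse_front_matter(text: str) -> dict:
    # The YAML block sits between the first two '---' fences.
    _, block, _ = text.split("---", 2)
    meta, key = {}, None
    for line in block.strip().splitlines():
        if line.startswith("- ") and key is not None:
            # Continuation of a list-valued field such as `tags:`.
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            # An empty value means a list follows on subsequent lines.
            meta[key] = value if value else []
    return meta

meta = parse_front_matter(CARD)
print(meta["pipeline_tag"])  # -> text-generation
```

The Hub reads these same fields to categorize the model, pick the inference widget, and link the base model.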

Files changed (1)
  1. README.md +51 -3
README.md CHANGED
@@ -1,3 +1,51 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-3B-Instruct
+ tags:
+ - agent
+ - tool-use
+ - reinforcement-learning
+ - travel-planner
+ ---
+
+ # Agent-STAR-RL-3B
+
+ This repository contains the **Agent-STAR-RL-3B** model, a 3B-parameter large language model fine-tuned for long-horizon tool-orchestration tasks. It was introduced in the paper [Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe](https://huggingface.co/papers/2603.21972).
+
+ ## Model Description
+
+ Agent-STAR is a unified post-training pipeline consisting of **[Data Synthesis → SFT → RL]**. This specific checkpoint is the RL-tuned version based on the **Qwen2.5-3B-Instruct** backbone, optimized for the [TravelPlanner](https://github.com/OSU-NLP-Group/TravelPlanner/) benchmark.
+
+ The model was developed to handle complex, multi-turn agentic environments in which it must call various tools to satisfy multifaceted constraints. According to the paper's findings, smaller models such as this 3B variant benefit from staged rewards and enhanced exploration during the RL phase to achieve high performance.
+
+ ## Resources
+
+ - **Paper:** [Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe](https://huggingface.co/papers/2603.21972)
+ - **GitHub Repository:** [WxxShirley/Agent-STAR](https://github.com/WxxShirley/Agent-STAR)
+ - **Dataset:** [Agent-STAR-TravelDataset](https://huggingface.co/datasets/xxwu/Agent-STAR-TravelDataset)
+
+ ## Inference
+
+ To run inference with this model, please refer to the instructions and the ReAct-based inference pipeline provided in the [official GitHub repository](https://github.com/WxxShirley/Agent-STAR).
+
+ ## Citation
+
+ If you find Agent-STAR helpful in your work, please consider citing:
+
+ ```bibtex
+ @misc{wu2026agentstar,
+   title={Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe},
+   author={Xixi Wu and Qianguo Sun and Ruiyang Zhang and Chao Song and Junlong Wu and Yiyan Qi and Hong Cheng},
+   year={2026},
+   eprint={2603.21972},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2603.21972},
+ }
+ ```
+
+ ## Acknowledgements
+
+ We appreciate the open-sourced [rLLM](https://github.com/rllm-org/rllm/) framework and the authors of [TravelPlanner](https://github.com/OSU-NLP-Group/TravelPlanner) for providing the benchmark and resources that supported this research.
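The card's inference section defers to the repository's ReAct-based pipeline. As a hedged sketch of the control flow such a pipeline implements — every name below (the tools, the stubbed policy) is a hypothetical stand-in, not the repository's actual API — a minimal ReAct loop alternates model steps with tool observations:

```python
import re

# Hypothetical stand-in tools; the real TravelPlanner environment
# exposes its own tool set (flights, accommodations, attractions, ...).
TOOLS = {
    "search_flights": lambda q: f"flights for {q}: [F101, F205]",
    "search_hotels": lambda q: f"hotels for {q}: [H1, H2]",
}

def stub_policy(history: str) -> str:
    """Stand-in for the LLM: emits deterministic ReAct-style steps."""
    if "Observation:" not in history:
        return "Thought: need flights first.\nAction: search_flights[Paris]"
    return "Thought: done.\nFinal Answer: book F101"

def react_loop(task: str, policy, max_turns: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_turns):
        step = policy(history)
        history += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse an `Action: tool[argument]` line and run the tool.
        m = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if m:
            name, arg = m.group(1), m.group(2)
            obs = TOOLS.get(name, lambda q: "unknown tool")(arg)
            history += f"\nObservation: {obs}"
    return "no answer within budget"

print(react_loop("3-day trip to Paris", stub_policy))  # -> book F101
```

In the actual pipeline, `stub_policy` would be replaced by a generation call to Agent-STAR-RL-3B (e.g. via `transformers`) and the tool dictionary by TravelPlanner's environment; consult the GitHub repository for the real prompts and tool interfaces.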