---
library_name: transformers
license: apache-2.0
language:
- en
base_model: Qwen/Qwen3-1.7B
pipeline_tag: text-generation
tags:
- code-search
- code-localization
- reinforcement-learning
- agent
- software-engineering
- GSPO
- OpenHands
- SWE-Bench
datasets:
- OpenHands/SWE-smith-py-code-search
- OpenHands/SWE-Gym-code-search
- OpenHands/CodeScout_Training_Rollouts
---

# CodeScout-1.7B-RFT

[📄 Paper](https://arxiv.org/abs/2603.17829) • [💻 Code](https://github.com/OpenHands/codescout) • [🤗 Collection](https://huggingface.co/collections/OpenHands/codescout-69b9a6adcf21f348f4db937f)

**Pre-RL checkpoint — rejection fine-tuned on expert trajectories from CodeScout-14B.**
*Code localization performance on SWE-Bench Verified. CodeScout (⭐) achieves superior or competitive results over larger open-source LLMs and narrows the gap with closed-source frontier models.*
## Training

CodeScout-1.7B-RFT is the intermediate checkpoint produced by rejection fine-tuning (RFT) of `Qwen3-1.7B` on expert trajectories from CodeScout-14B, before the final RL stage.

- **Teacher model:** [CodeScout-14B](https://huggingface.co/OpenHands/CodeScout-14B)
- **Source trajectories:** Rollouts from CodeScout-14B on 7,700 training instances
- **Filtered data:** 4K trajectories with perfect scores (F1 = 1.0 at the file, module, and function levels)
- **SFT epochs:** 1
- **Learning rate:** 5e-5 with cosine scheduler (warmup ratio 0.1)
- **Batch size:** 8
- **Optimizer:** AdamW
- **Framework:** [veRL](https://github.com/volcengine/verl)

This checkpoint serves as the starting point for RL training of [CodeScout-1.7B](https://huggingface.co/OpenHands/CodeScout-1.7B).

## How It Works

CodeScout uses the **OpenHands-Bash** scaffold — an agent equipped with only a `Terminal` tool (supporting standard Unix commands like `rg`, `find`, `grep`, `ls`) and a `LocalizationFinish` tool for structured output submission. The agent iteratively navigates the repository to identify relevant files, classes, and functions related to a given issue.

The model is trained with **GSPO** (Group Sequence Policy Optimization) using multi-level F1 rewards at the file, module, and function levels.

## Intended Use

CodeScout-1.7B-RFT is designed for **repository-level code localization**: given a GitHub issue description and a code repository, it identifies the relevant files, classes, and functions that need to be modified. It is intended to be used as a localization subagent within larger coding agent pipelines.
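To make the scaffold concrete, here is a minimal sketch of the kind of Terminal-tool session the agent runs. The demo repository, file names, and issue keyword below are invented for illustration; real trajectories operate on the issue's actual repository.

```shell
# Set up a toy repository (hypothetical layout for this sketch only).
mkdir -p demo/src
printf 'def parse_config():\n    pass\n' > demo/src/config.py

# Typical localization steps: list the tree, then narrow down candidate
# definitions mentioned in the issue with a recursive search.
ls demo/src
grep -rn "def parse_config" demo/src
```

In a real run, the agent would iterate on searches like these until it is confident, then submit the located files, classes, and functions via `LocalizationFinish`.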
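The multi-level F1 reward can be sketched as set-based F1 scores computed at each level. The averaging across levels and the helper names below are assumptions for illustration; the paper defines the exact aggregation used during GSPO training.

```python
def f1(pred: set, gold: set) -> float:
    """Set-based F1 between predicted and gold localization targets at one level."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)          # true positives: correctly located targets
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def multilevel_reward(pred: dict, gold: dict) -> float:
    """Mean F1 across the file, module, and function levels (assumed aggregation)."""
    levels = ("file", "module", "function")
    return sum(f1(set(pred[lv]), set(gold[lv])) for lv in levels) / len(levels)
```

Under this sketch, a trajectory kept by the RFT filter is one where `multilevel_reward` equals 1.0, i.e. F1 = 1.0 at every level.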
## Limitations

- Trained and evaluated exclusively on **Python** repositories
- Designed for code *localization*, not code *editing* or issue resolution
- Performance may vary on repositories significantly different from the training distribution
- Requires the OpenHands-Bash scaffold for optimal performance

## Citation

```bibtex
@misc{sutawika2026codescouteffectiverecipereinforcement,
      title={CodeScout: An Effective Recipe for Reinforcement Learning of Code Search Agents},
      author={Lintang Sutawika and Aditya Bharat Soni and Bharath Sriraam R R and Apurva Gandhi and Taha Yassine and Sanidhya Vijayvargiya and Yuchen Li and Xuhui Zhou and Yilin Zhang and Leander Melroy Maben and Graham Neubig},
      year={2026},
      eprint={2603.17829},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2603.17829},
}
```