---
library_name: transformers
license: apache-2.0
language:
- en
base_model: Qwen/Qwen3-1.7B
pipeline_tag: text-generation
tags:
- code-search
- code-localization
- reinforcement-learning
- agent
- software-engineering
- GSPO
- OpenHands
- SWE-Bench
datasets:
- OpenHands/SWE-smith-py-code-search
- OpenHands/SWE-Gym-code-search
- OpenHands/CodeScout_Training_Rollouts
---

# CodeScout-1.7B

[📄 Paper](https://arxiv.org/abs/2603.17829) • [💻 Code](https://github.com/OpenHands/codescout) • [🤗 Collection](https://huggingface.co/collections/OpenHands/codescout-69b9a6adcf21f348f4db937f)

**Compact yet powerful — outperforms the 8× larger Qwen3-14B using only a Unix terminal.**
*Figure: Code localization performance on SWE-Bench Verified. CodeScout (⭐) achieves superior or competitive results over larger open-source LLMs and narrows the gap with closed-source frontier models.*
## Training

CodeScout-1.7B is trained in two stages:

**Stage 1 — Rejection Fine-Tuning (RFT):** `Qwen3-1.7B` is warm-started via supervised fine-tuning on 4K perfect-score trajectories (F1 = 1.0 at all granularities) sampled from CodeScout-14B, yielding the [CodeScout-1.7B-RFT](https://huggingface.co/OpenHands/CodeScout-1.7B-RFT) checkpoint.

**Stage 2 — RL Training:** CodeScout-1.7B-RFT is further trained with GSPO reinforcement learning.

- **Training data (RL):** 800 instances (disjoint from RFT data)
- **RL steps:** 100
- **Batch size:** 8, with 8 rollouts per instance
- **Max context length:** 32K tokens
- **Max turns per episode:** 4
- **Reward:** Multi-level F1 (file + module + function)
- **Hardware:** 8×H100 GPUs
- **Learning rate:** 1e-6 (constant)

## How It Works

CodeScout uses the **OpenHands-Bash** scaffold — an agent equipped with only a `Terminal` tool (supporting standard Unix commands like `rg`, `find`, `grep`, `ls`) and a `LocalizationFinish` tool for structured output submission. The agent iteratively navigates the repository to identify relevant files, classes, and functions related to a given issue.

The model is trained with **GSPO** (Group Sequence Policy Optimization) using multi-level F1 rewards at the file, module, and function level.

## Intended Use

CodeScout-1.7B is designed for **repository-level code localization**: given a GitHub issue description and a code repository, it identifies the relevant files, classes, and functions that need to be modified. It is intended to be used as a localization subagent within larger coding agent pipelines.
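To make the reward concrete, here is a minimal sketch of a multi-level F1 reward: set-based F1 computed at each granularity (file, module, function), then averaged. The set-based matching and the unweighted average are assumptions for illustration, not the paper's exact implementation.

```python
def f1_score(predicted, gold):
    """Set-based F1 between predicted and gold localization targets."""
    pred, gold = set(predicted), set(gold)
    if not pred and not gold:
        return 1.0  # nothing to find, nothing predicted
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def multilevel_f1_reward(predicted, gold):
    """Average F1 over the three localization granularities.

    `predicted` and `gold` are dicts mapping each level name to a
    collection of identifiers (paths, module names, function names).
    """
    levels = ("files", "modules", "functions")
    return sum(f1_score(predicted[lv], gold[lv]) for lv in levels) / len(levels)
```

Under this scheme, the "perfect-score trajectories" used for RFT are exactly those where every level reaches F1 = 1.0, i.e. the reward equals 1.0.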
## Limitations

- Trained and evaluated exclusively on **Python** repositories
- Designed for code *localization*, not code *editing* or issue resolution
- Performance may vary on repositories significantly different from the training distribution
- Requires the OpenHands-Bash scaffold for optimal performance

## Citation

```bibtex
@misc{sutawika2026codescouteffectiverecipereinforcement,
      title={CodeScout: An Effective Recipe for Reinforcement Learning of Code Search Agents},
      author={Lintang Sutawika and Aditya Bharat Soni and Bharath Sriraam R R and Apurva Gandhi and Taha Yassine and Sanidhya Vijayvargiya and Yuchen Li and Xuhui Zhou and Yilin Zhang and Leander Melroy Maben and Graham Neubig},
      year={2026},
      eprint={2603.17829},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2603.17829},
}
```