---
library_name: transformers
pipeline_tag: text-generation
license: mit
tags:
- generated_from_trainer
- deep-search
- retrieval-augmented-generation
- web-agent
- qwen
model-index:
- name: Qwen-32B-CyberSearcher
  results: []
---
# Qwen-32B-CyberSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis

This model is a 32B-parameter variant from the SimpleDeepSearcher framework, presented in the paper [SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis](https://arxiv.org/abs/2505.16834).

Code: https://github.com/RUCAIBox/SimpleDeepSearcher
## Model description
SimpleDeepSearcher is a lightweight yet effective framework for enhancing Large Language Models (LLMs) in complex deep search scenarios that require multi-step reasoning and iterative information retrieval. It addresses critical limitations of existing Retrieval-Augmented Generation (RAG) systems by strategically synthesizing high-quality training data from realistic user interactions in live web search environments, coupled with a multi-criteria curation strategy. This approach enables efficient supervised fine-tuning (SFT) with only a small amount of curated data, establishing SFT as a viable pathway for building efficient deep search systems with reduced computational cost and development complexity.
### Key Contributions
- A real web-based data synthesis framework that simulates realistic user search behaviors, generating multi-turn reasoning and search trajectories.
- A multi-criteria data curation strategy that jointly optimizes both input question selection and output response filtering through orthogonal filtering dimensions.
- Experimental results demonstrate that SFT on only 871 curated samples enables SimpleDeepSearcher to outperform strong baselines (especially RL-based baselines) on both in-domain and out-of-domain benchmarks.
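The multi-turn reasoning-and-search trajectories described above alternate between model reasoning and live retrieval. A minimal sketch of that loop is shown below; the `<search>...</search>` tag format, the stubbed search backend, and the turn budget are illustrative assumptions, not the exact protocol used by SimpleDeepSearcher.

```python
import re

# Assumed tag format the model uses to request a web search mid-reasoning.
SEARCH_TAG = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def deep_search_loop(llm, search_fn, question, max_turns=5):
    """Alternate between model reasoning and retrieval until the model answers."""
    context = f"Question: {question}\n"
    output = ""
    for _ in range(max_turns):
        output = llm(context)
        match = SEARCH_TAG.search(output)
        if match is None:
            # No further search requested: treat the output as the final answer.
            return output.strip()
        query = match.group(1).strip()
        docs = search_fn(query)  # retrieve evidence for the emitted query
        # Append the reasoning turn and the retrieved documents to the context.
        context += output + f"\n<documents>{docs}</documents>\n"
    return output.strip()
```

Trajectories recorded from this kind of loop (question, interleaved reasoning, search queries, retrieved documents, final answer) are exactly the multi-turn data the synthesis framework produces.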
## Framework Overview
SimpleDeepSearcher achieves intelligent search through efficient supervised fine-tuning (SFT) using minimal, high-quality training data constructed via a systematic data synthesis and curation pipeline.
## Overall Performance
SimpleDeepSearcher consistently outperforms all baselines across five benchmark datasets, including both in-domain (2Wiki, MuSiQue) and out-of-domain (Bamboogle, FRAMES, GAIA) settings, demonstrating strong generalization and high data efficiency.
## Intended uses & limitations
This model is intended for advanced deep search scenarios where large language models need to perform multi-step reasoning and iterative information retrieval through web searches. It can be utilized as a core component for building efficient and effective deep search systems.
Limitations may include dependency on the quality of synthesized training data and the performance of the underlying web search API.
## Training and evaluation data
The model was fine-tuned via Supervised Fine-tuning (SFT) on only 871 curated samples. This high-quality training data was synthesized by simulating realistic user interactions in live web search environments, followed by a multi-criteria curation strategy that optimized the diversity and quality of input and output.
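The curation step above keeps only trajectories that pass several orthogonal quality checks. The sketch below illustrates the idea with hypothetical criteria (a length cap, a minimum number of real search turns, and a verified final answer); the paper's actual filtering dimensions are not reproduced here.

```python
def passes_curation(trajectory, max_tokens=4096, min_searches=1):
    """Keep a synthesized trajectory only if it passes every criterion (illustrative)."""
    checks = [
        trajectory["num_tokens"] <= max_tokens,      # response efficiency
        trajectory["num_searches"] >= min_searches,  # genuine multi-step search
        trajectory["answer_correct"],                # verified final answer
    ]
    return all(checks)

# Toy candidate pool: only the first trajectory survives all three filters.
samples = [
    {"num_tokens": 1200, "num_searches": 2, "answer_correct": True},
    {"num_tokens": 9000, "num_searches": 3, "answer_correct": True},  # too long
    {"num_tokens": 800,  "num_searches": 0, "answer_correct": True},  # no search
]
curated = [s for s in samples if passes_curation(s)]
```

Applying filters jointly like this is what drives the dataset down to a small, high-quality set (871 samples in the released model).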
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3.0
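Collected as keyword arguments, the hyperparameters above map onto `transformers.TrainingArguments` fields roughly as follows (the mapping is a sketch; gradient accumulation and other unlisted settings are assumed to be defaults):

```python
# The listed hyperparameters as TrainingArguments-style keyword arguments.
sft_config = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "optim": "adamw_torch",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3.0,
}
```

With 871 training samples and a batch size of 8, one epoch is about 109 optimizer steps, so the full 3-epoch run is roughly 327 steps (assuming a single device and no gradient accumulation).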
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 2.19.0
- Tokenizers 0.20.3
## Citation

Please kindly cite our paper if it is helpful for your research:

```bibtex
@article{sun2025simpledeepsearcher,
  title={SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis},
  author={Sun, Shuang and Song, Huatong and Wang, Yuhao and Ren, Ruiyang and Jiang, Jinhao and Zhang, Junjie and Bai, Fei and Deng, Jia and Zhao, Wayne Xin and Liu, Zheng and others},
  journal={arXiv preprint arXiv:2505.16834},
  year={2025}
}
```
## License
This project is released under the MIT License.
## Contact
For any questions or feedback, please reach out to us at sunshuanguns@gmail.com or songhuatong123@ruc.edu.cn.