🤗 HuggingFace | Blog | Slack | WeChat
## Overview
OpenResearcher is a fully open agentic large language model (30B-A3B) designed for long-horizon deep research scenarios. It achieves 54.8% accuracy on BrowseComp-Plus, surpassing the performance of GPT-4.1, Claude-Opus-4, Gemini-2.5-Pro, DeepSeek-R1, and Tongyi-DeepResearch. It also demonstrates leading performance across a range of deep research benchmarks, including BrowseComp, GAIA, WebWalkerQA, and xbench-DeepSearch. We fully open-source the training and evaluation recipe, including the data, model, training methodology, and evaluation framework, so that everyone can advance deep research.
## OpenResearcher-30B-A3B-GGUF
**Note:** For the best performance, we recommend using OpenResearcher-30B-A3B.
To support efficient deployment, we release several quantized versions of OpenResearcher-30B-A3B, including Q4_K_M, Q5_0, Q5_K_M, Q6_K, and Q8_0.
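As a sketch, one way to try a quantized file locally is with llama.cpp's `llama-cli`. The repository id and filename below follow an assumed naming pattern; adjust them to the file you actually download.

```shell
# Download one quantized file (repo id and filename pattern are assumptions)
huggingface-cli download OpenResearcher/OpenResearcher-30B-A3B-GGUF \
  OpenResearcher-30B-A3B-Q4_K_M.gguf --local-dir ./models

# Run an interactive session with llama.cpp, offloading layers to the GPU
llama-cli -m ./models/OpenResearcher-30B-A3B-Q4_K_M.gguf \
  -c 32768 -ngl 99
```

Lower quantizations trade some perplexity for smaller files and higher throughput, as the table below shows.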
| Quantization | File Size | BPW | PPL | PPL ± | Tokens/sec |
|---|---|---|---|---|---|
| BF16 | 58.84 GiB | 16.00 | 8.4522 | 0.06489 | 4,117.90 |
| Q8_0 | 31.27 GiB | 8.51 | 8.4654 | 0.06499 | 7,490.81 |
| Q6_K | 31.20 GiB | 8.49 | 8.4784 | 0.06510 | 7,389.76 |
| Q5_0 | 20.37 GiB | 5.54 | 8.5462 | 0.06558 | 7,534.66 |
| Q4_K_M | 22.82 GiB | 6.21 | 8.5970 | 0.06610 | 7,046.96 |
| Q5_K_M | 24.24 GiB | 6.60 | 8.6074 | 0.06625 | 6,661.48 |
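The BPW column is internally consistent with the file sizes: dividing a file's size in bits by the parameter count implied by the BF16 row reproduces each listed BPW. A quick sketch (the ~31.6B weight count here is derived from the table, not an official figure):

```python
GIB = 2 ** 30  # bytes per GiB

# Parameter count implied by the BF16 row: (58.84 GiB in bits) / 16.00 BPW
N_PARAMS = 58.84 * GIB * 8 / 16.00  # ~31.6B stored weights (derived, not official)

def bits_per_weight(file_size_gib: float) -> float:
    """Average bits stored per model weight for a GGUF file of this size."""
    return file_size_gib * GIB * 8 / N_PARAMS

# Check the Q8_0 row: 31.27 GiB should land near the listed 8.51 BPW
print(f"{bits_per_weight(31.27):.2f}")  # prints 8.50
```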
## Citation
```bibtex
@misc{li2025openresearcher,
  title={OpenResearcher: A Fully Open Pipeline for Long-Horizon Deep Research Trajectory Synthesis},
  author={Zhuofeng Li and Dongfu Jiang and Xueguang Ma and Haoxiang Zhang and Ping Nie and Yuyu Zhang and Kai Zou and Jianwen Xie and Yu Zhang and Wenhu Chen},
  year={2025},
  howpublished={\url{https://www.notion.so/OpenResearcher-A-Fully-Open-Pipeline-for-Long-Horizon-Deep-Research-Trajectory-Synthesis-2f7e290627b5800cb3a0cd7e8d6ec0ea}},
  note={Notion Blog}
}
```