</p>
## OpenResearcher-30B-A3B Overview
OpenResearcher-30B-A3B is an agentic large language model designed for long-horizon deep research. It is fine-tuned from [NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16) on the 96K-example [OpenResearcher dataset](https://huggingface.co/datasets/OpenResearcher/OpenResearcher-Dataset), whose trajectories span **100+** turns. The dataset is distilled from GPT-OSS-120B using [native browser tools](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#usage:~:text=Limitation%20section%20below.-,Tool%20Use,-%C2%B6). More details can be found on the [OpenResearcher dataset](https://huggingface.co/datasets/OpenResearcher/OpenResearcher-Dataset) card.
The model achieves an impressive **54.8%** accuracy on [BrowseComp-Plus](https://huggingface.co/spaces/Tevatron/BrowseComp-Plus), surpassing the performance of `GPT-4.1`, `Claude-Opus-4`, `Gemini-2.5-Pro`, `DeepSeek-R1`, and `Tongyi-DeepResearch`.
<div align="center">
We evaluate OpenResearcher-30B-A3B across a range of deep research benchmarks.
## Quick Start
We provide a [quick-start guide](https://github.com/TIGER-AI-Lab/OpenResearcher?tab=readme-ov-file#-quick-start) on GitHub that demonstrates how to use `OpenResearcher-30B-A3B` for deep research.
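As a minimal sketch only (the GitHub quick-start is authoritative, and the repo id below is an assumption based on the dataset's organization name), the model can be served with vLLM's OpenAI-compatible API:

```shell
# Hypothetical serving setup -- the model repo id is assumed, and any
# browser-tool wiring required for agentic research is covered in the
# official quick-start, not here.
pip install vllm
vllm serve OpenResearcher/OpenResearcher-30B-A3B --tensor-parallel-size 2
```

Once the server is up, any OpenAI-compatible client can send chat completions to `http://localhost:8000/v1`.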
## Citation
```bibtex