---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker3-7B
  results: []
datasets:
- open-thoughts/OpenThoughts3-1.2M
---

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2506.04178" style="margin-right: 24px;">paper</a> |
  <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M" style="margin-right: 24px; margin-left: 24px;">dataset</a> |
  <a href="https://huggingface.co/open-thoughts/OpenThinker3-7B" style="margin-left: 24px;">model</a>
</p>

# OpenThinker3-7B

State-of-the-art open-data 7B reasoning model. 🚀

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the
[OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
It is a notable improvement over our previous models, [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) and [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B), and it outperforms several other strong reasoning models, such as [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) and [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1), despite being trained only with SFT and without any RL.

This time, we also release a paper! See our [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3) for more details. OpenThinker3-32B to follow! 👀
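
For quick experimentation, here is a minimal generation sketch using `transformers` (a sketch only: it assumes the stock Qwen2.5 chat template, and the prompt and `max_new_tokens` budget are illustrative, not official recommendations):

```python
# Minimal generation sketch for OpenThinker3-7B.
# Assumes a GPU with enough memory for a 7B model; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker3-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning traces can be long, so leave a generous token budget.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```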

# Evaluation Results

The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
In each column, we bold values that are within two standard errors of the best; a short sketch of this criterion follows the table.

| Model | Open data | AIME24 | AIME25 | AMC23 | MATH500 | HMMT 02/25 | LCB 06/24-01/25 | CodeElo | CodeForces | GPQA-D | JEEBench |
| ----- | --------- | ------ | ------ | ----- | ------- | ---------- | --------------- | ------- | ---------- | ------ | -------- |
| [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) | ✅ | | | | | | | | | | |
| [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B) | ✅ | | | | | | | | | | |
| **[OpenThinker3-7B](https://huggingface.co/open-thoughts/OpenThinker3-7B)** | ✅ | **69.0** | **53.3** | **93.5** | **90.0** | **42.7** | **51.7** | 31.0 | **32.2** | 53.7 | **72.4** |
| [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | ❌ | 51.3 | 38.0 | 92.0 | 88.0 | 25.0 | 34.5 | 19.9 | 21.1 | 33.2 | 50.4 |
| [OpenR1-Distill-7B](https://huggingface.co/open-r1/OpenR1-Distill-7B) | ✅ | 57.7 | 39.7 | 87.0 | 88.0 | 25.7 | 30.7 | 30.1 | 29.3 | **58.9** | 68.7 |
| [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) | ✅ | 62.0 | 48.0 | **94.0** | 89.4 | 26.7 | **50.9** | 30.9 | **32.9** | 52.9 | 70.7 |
| [AceReason-Nemotron-7B](https://huggingface.co/nvidia/AceReason-Nemotron-7B) | ✅ | **71.0** | 50.7 | **93.8** | 89.8 | 33.3 | 44.3 | **32.9** | **30.9** | 52.9 | 64.3 |

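As an illustration of the bolding criterion, here is a minimal sketch, assuming each score is a mean accuracy over `n` problems so its standard error is `sqrt(p * (1 - p) / n)` (Evalchemy's exact standard-error computation may differ, and `n=30` below is a hypothetical column size):

```python
# Illustrative sketch of the "within 2 standard errors of the best" rule.
import math

def stderr(p_percent: float, n: int) -> float:
    """Binomial standard error of a mean accuracy, in percentage points."""
    p = p_percent / 100.0
    return 100.0 * math.sqrt(p * (1.0 - p) / n)

def bold_set(scores: dict[str, float], n: int) -> set[str]:
    """Models whose score is within 2 SE of the column's best score."""
    best = max(scores.values())
    return {m for m, s in scores.items() if s >= best - 2.0 * stderr(best, n)}

# Hypothetical example with three AIME24 scores from the table (n=30 assumed):
print(bold_set({"OpenThinker3-7B": 69.0,
                "AceReason-Nemotron-7B": 71.0,
                "DeepSeek-R1-Distill-Qwen-32B": 51.3}, n=30))
# -> both 71.0 and 69.0 qualify, matching the bolding in the table
```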

# Data

This model was trained on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.

The key to the model's strong performance is our comprehensive data pipeline and more than 1,000 ablation experiments.
These led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math examples, 250,000 code examples, and 100,000 science examples.
Reasoning traces are generated with QwQ-32B.

See the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset page or our [paper](https://arxiv.org/abs/2506.04178) for additional information.

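To take a quick look at the data, a minimal sketch with the `datasets` library (streaming so the full 1.2M rows are not downloaded; the field names are not assumed here, so the snippet just prints the keys):

```python
# Minimal sketch: stream a couple of examples from the training split.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train", streaming=True)
for example in ds.take(2):  # take() works on streaming (iterable) datasets
    print(sorted(example.keys()))
```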

# Intended uses & limitations

This model is released under the Apache 2.0 License.

## Training procedure

We used 512 A100 GPUs to train the model for 48 hours.

## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- gradient_accumulation_steps: 1
- total_train_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0

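For reference, a rough sketch of how these settings map onto `transformers.TrainingArguments` (illustrative only: the run used LLaMA-Factory, and `output_dir` and the per-device batch size are assumptions consistent with the totals above):

```python
from transformers import TrainingArguments

# 512 devices x per-device batch 1 x gradient accumulation 1 = total batch 512
training_args = TrainingArguments(
    output_dir="openthinker3-7b-sft",  # hypothetical output path
    learning_rate=8e-5,
    seed=42,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    optim="adamw_torch",               # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```
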
## Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More information can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Links

- 📝 [OpenThoughts Paper](https://arxiv.org/abs/2506.04178)
- 📊 [OpenThoughts3-1.2M and OpenThinker3-7B Blog Post](https://www.open-thoughts.ai/blog/ot3)
- 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts3-1.2M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M)
- 🤖 [OpenThinker3-7B model](https://huggingface.co/open-thoughts/OpenThinker3-7B) (this model)

# Citation

```bibtex
@misc{guha2025openthoughtsdatarecipesreasoning,
      title={OpenThoughts: Data Recipes for Reasoning Models},
      author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
      year={2025},
      eprint={2506.04178},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2506.04178},
}
```