---
base_model: Qwen/Qwen2.5-32B-Instruct
datasets:
- open-thoughts/OpenThoughts2-1M
library_name: transformers
license: apache-2.0
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker2-32B
  results: []
pipeline_tag: text-generation
---

<p align="center">
    <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

> [!NOTE]
> We have released a paper for OpenThoughts! See our paper [here](https://arxiv.org/abs/2506.04178).

# OpenThinker2-32B: A Powerful Open-Data Reasoning Model

OpenThinker2-32B is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) trained on the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset. Trained entirely on open data, it achieves performance comparable to models trained on closed data, such as [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), across math, code, and general-knowledge reasoning benchmarks (see the table below).

The [OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B) model is the highest-performing open-data model.
It improves upon our previous [OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) model, which was trained on 114k examples from [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
The numbers in the table below were obtained with our open-source evaluation tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

| Model                                                                                           | Data | AIME24 | AIME25 | AMC23 | MATH500 | GPQA-D | LCBv2 |
| ----------------------------------------------------------------------------------------------- | ---- | ------ | ------ | ----- | ------- | ------ | ----- |
| [OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B)                       | βœ…    | 76.7   | 58.7   | 94.0  | 90.8    | 64.1   | 72.5  |
| [OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B)                         | βœ…    | 68.0   | 49.3   | 95.5  | 90.6    | 63.5   | 68.6  |
| [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | ❌    | 74.7   | 50.0   | 96.5  | 90.0    | 65.8   | 72.3  |
| [Light-R1-32B](https://huggingface.co/qihoo360/Light-R1-32B)                                    | βœ…    | 74.7   | 58.0   | 96.0  | 90.4    | 62.0   | 56.0  |
| [S1.1-32B](https://huggingface.co/simplescaling/s1.1-32B)                                       | βœ…    | 59.3   | 42.7   | 91.5  | 87.4    | 62.0   | 58.7  |


## Usage Examples

The model can be used with the Hugging Face `pipeline` API for text generation. Because it is an instruction-tuned reasoning model, it works best with chat-formatted prompts; a chat-template example follows the `pipeline` examples below.

**Example 1: Simple Text Generation**

```python
import torch
from transformers import pipeline

# A 32B model needs substantial GPU memory; device_map="auto" shards it across available GPUs.
generator = pipeline('text-generation', model='open-thoughts/OpenThinker2-32B',
                     torch_dtype=torch.bfloat16, device_map='auto')
result = generator("Once upon a time,", max_new_tokens=50)  # max_new_tokens counts only generated tokens
print(result[0]['generated_text'])
```

**Example 2: Controlling the Length of the Generated Text**

```python
from transformers import pipeline

generator = pipeline('text-generation', model='open-thoughts/OpenThinker2-32B',
                     torch_dtype='auto', device_map='auto')
# max_new_tokens bounds only the generated continuation; max_length would also count the prompt tokens.
result = generator("The quick brown fox jumps over the lazy dog.", max_new_tokens=100)
print(result[0]['generated_text'])
```

**Example 3: Setting the Temperature**

```python
from transformers import pipeline

generator = pipeline('text-generation', model='open-thoughts/OpenThinker2-32B',
                     torch_dtype='auto', device_map='auto')
# Sampling must be enabled for temperature to take effect; higher values yield more varied output.
result = generator("Write a short poem about nature:", max_new_tokens=50, do_sample=True, temperature=0.7)
print(result[0]['generated_text'])
```
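
**Example 4: Chat-Formatted Prompting**

Reasoning models like this one are trained on chat-formatted data, so applying the tokenizer's chat template generally gives better results than raw-text prompts. Below is a minimal sketch; the question, token budget, and memory settings are illustrative assumptions rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "open-thoughts/OpenThinker2-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Wrap the request in the model's chat template so it matches the training format.
messages = [{"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Reasoning traces can be long, so leave a generous budget for new tokens (2048 is an arbitrary choice).
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```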

## Data

This model was trained on the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset.

The [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset was constructed by augmenting [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k) with existing datasets such as [OpenR1](https://huggingface.co/open-r1), as well as additional math and code reasoning data.
We generated the additional math and code data by ablating 26 different question-generation methodologies and sampling from the highest-performing ones.

See the [OpenThoughts2-1M](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M) dataset page or our [blog post](https://www.open-thoughts.ai/blog/thinkagain) for additional information.
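
To browse the training data directly, the dataset can be loaded with the `datasets` library. The snippet below is a minimal sketch; the `train` split name and streaming mode are assumptions made to avoid downloading the full dataset.

```python
from datasets import load_dataset

# Stream the dataset so individual examples can be inspected without downloading ~1M rows.
ds = load_dataset("open-thoughts/OpenThoughts2-1M", split="train", streaming=True)
first_example = next(iter(ds))
print(first_example.keys())  # field names depend on the dataset schema
```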


## Intended uses & limitations

The model is released under the Apache 2.0 license.


## Training procedure

We trained the model for 50 hours on 128 nodes with 4 A100 GPUs each (512 GPUs in total).

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `transformers.TrainingArguments` follows the list):
- learning_rate: 8e-05
- seed: 42
- distributed_type: multi-GPU
- num_devices: 512
- gradient_accumulation_steps: 1
- total_train_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
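
The model was trained with LLaMA-Factory, but as a rough illustration of how the values above translate to Hugging Face `transformers`, here is a hedged `TrainingArguments` sketch. The per-device batch size of 1 is inferred from 512 devices, 1 accumulation step, and a total batch size of 512; the output directory is hypothetical, and dataset handling and the training loop are omitted.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above, not the exact LLaMA-Factory configuration.
training_args = TrainingArguments(
    output_dir="openthinker2-32b-sft",  # hypothetical output directory
    learning_rate=8e-5,
    seed=42,
    per_device_train_batch_size=1,      # 512 devices x batch 1 x 1 accumulation step = total batch size 512
    gradient_accumulation_steps=1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```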

### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).


# Links
- πŸ“ [OpenThoughts Paper](https://arxiv.org/abs/2506.04178)
- πŸ“Š [OpenThoughts2 and OpenThinker2 Blog Post](https://www.open-thoughts.ai/blog/thinkagain)
- πŸ’» [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🌐 [Project Page](https://openthoughts.ai)
- 🧠 [OpenThoughts2-1M dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts2-1M)
- πŸ€– [OpenThinker2-7B model](https://huggingface.co/open-thoughts/OpenThinker2-7B)
- πŸ€– [OpenThinker2-32B model](https://huggingface.co/open-thoughts/OpenThinker2-32B) - this model.

# Citation
```
@misc{guha2025openthoughtsdatarecipesreasoning,
  title={OpenThoughts: Data Recipes for Reasoning Models}, 
  author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
  year={2025},
  eprint={2506.04178},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.04178}, 
}
```