---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker-32B
  results: []
datasets:
- open-thoughts/open-thoughts-114k
---

<p align="center">
    <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

# OpenThinker-32B

This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the 
[OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.

The dataset is derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts). 
More information about the dataset can be found on the [OpenThoughts-114k dataset card](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
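
As a quick-start example, the model can be queried through the standard `transformers` chat-template API. This is a minimal sketch: the prompt and generation settings below are illustrative, not official recommendations.

```python
# Minimal inference sketch using the standard transformers generation API.
# Sampling/generation settings here are illustrative, not official recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning traces can be long, so leave generous room for new tokens.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```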

The numbers reported in the table below were obtained with our open-source evaluation tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).


|Model Name|Dataset Size|AIME24 I/II|AIME25 I|MATH500|GPQA Diamond|LCBv2|
|---|---|---|---|---|---|---|
|LIMO-32B|0.8k|56.7|49.3|86.6|58.1|60.0|
|s1-32B|1k|36.0|25.3|84.8|50.5|40.9|
|s1.1-32B|1k|64.7|49.3|89.0|60.1|65.5|
|DeepSeek-R1-Distill-Qwen-32B|800k (closed)|**76.7**|**55.9**|89.4|57.6|**71.2**|
|**OpenThinker-32B**|114k|66.0|53.3|**90.6**|**61.6**|68.9|


We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available. 

|  | Open Weights | Open Data | Open Code | 
|--|--------------|-----------| --------- |
|OpenThinker-32B|βœ…|[βœ…](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)|[βœ…](https://github.com/open-thoughts/open-thoughts) |
|DeepSeek-R1-Distill-Qwen-32B|βœ…|❌|❌|
|OpenAI/Gemini|❌|❌|❌|



## Intended uses & limitations

This model is released under the Apache 2.0 License.


## Training procedure

We fine-tune [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) 
on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) for 
3 epochs with a 16k context length using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). 
Our [full training configuration](https://github.com/open-thoughts/open-thoughts/blob/main/train/OpenThinker-32B.yaml) 
is provided in [our repository](https://github.com/open-thoughts/open-thoughts/tree/main). 
Training the 32B model on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) 
was done on AWS SageMaker with four 8xH100 P5 nodes and took around 90 hours. 
For training on [OpenThoughts-Unverified-173k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k), 
we used 96 4xA100 nodes (64 GB per GPU) on the Leonardo Supercomputer; training took 30 hours, for a total of 11,520 A100-hours.
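
As a sanity check, the quoted GPU-hour totals follow directly from the node counts and wall-clock times above (simple arithmetic, no external assumptions):

```python
# Back-of-the-envelope GPU-hour totals from the figures quoted above.
h100_hours = 4 * 8 * 90    # 4 nodes x 8 H100s x ~90 h ~= 2,880 H100-hours (SageMaker run)
a100_hours = 96 * 4 * 30   # 96 nodes x 4 A100s x 30 h = 11,520 A100-hours (Leonardo run)
print(h100_hours, a100_hours)  # 2880 11520
```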

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
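
The effective (total) train batch size is the product of the per-device batch size, device count, and gradient accumulation steps, as this quick consistency check shows:

```python
# total_train_batch_size = per-device batch x num devices x grad accumulation steps
per_device_batch, num_devices, grad_accum_steps = 1, 32, 3
assert per_device_batch * num_devices * grad_accum_steps == 96  # matches total_train_batch_size
```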

### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Citation
```
@misc{openthoughts,
  author = {Team, OpenThoughts},
  month = jan,
  title = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year = {2025}
}
```

# Links
- πŸ“Š [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- πŸ“Š [Open Thoughts Measuring Reasoning with Evalchemy Blog Post](https://www.open-thoughts.ai/blog/measure)
- πŸ“Š [Open Thoughts OpenThinker-32B Post](https://www.open-thoughts.ai/blog/scale)
- πŸ’» [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- 🧠 [OpenThoughts-Unverified-173k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k)
- πŸ€– [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B)
- πŸ€– [OpenThinker-7B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-7B-Unverified)
- πŸ€– [OpenThinker-32B model](https://huggingface.co/open-thoughts/OpenThinker-32B) - this model
- πŸ€– [OpenThinker-32B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-32B-Unverified)