---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen2.5-32B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
datasets:
- open-thoughts/open-thoughts-114k
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: OpenThinker-32B
  results: []
---

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

# OpenThinker-32B

This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the
[OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.

The dataset was derived by distilling DeepSeek-R1 with the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts).
More information about the dataset can be found on the [OpenThoughts-114k dataset card](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
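
If you want to inspect the training data directly, it can be pulled from the Hugging Face Hub with the `datasets` library. The snippet below is a minimal sketch, not part of the official pipeline; it assumes `datasets` is installed and that the default config with a `train` split is what you want (the usual Hub layout, not something this card states explicitly).

```python
# Minimal sketch: load OpenThoughts-114k from the Hub and peek at one example.
# Assumes `pip install datasets` and network access; the "train" split name is an
# assumption based on the usual Hub default, not a detail given in this card.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
print(ds)      # row count and column names
print(ds[0])   # one distilled reasoning example
```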

The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

|Model Name|Dataset Size|AIME24 I/II|AIME25 I|MATH500|GPQA Diamond|LCBv2|
|---|---|---|---|---|---|---|
|LIMO-32B|0.8k|56.7|49.3|86.6|58.1|60.0|
|s1-32B|1k|36.0|25.3|84.8|50.5|40.9|
|s1.1-32B|1k|64.7|49.3|89.0|60.1|65.5|
|DeepSeek-R1-Distill-Qwen-32B|800k (closed)|**76.7**|**55.9**|89.4|57.6|**71.2**|
|**OpenThinker-32B**|114k|66.0|53.3|**90.6**|**61.6**|68.9|

We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.

| | Open Weights | Open Data | Open Code |
|--|--------------|-----------|-----------|
|OpenThinker-32B|✅|[✅](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)|[✅](https://github.com/open-thoughts/open-thoughts)|
|DeepSeek-R1-Distill-Qwen-32B|✅|❌|❌|
|OpenAI/Gemini|❌|❌|❌|

## Intended uses & limitations

This model is released under the Apache 2.0 license.
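
For quick experimentation, the checkpoint can be loaded with the standard `transformers` chat interface. The snippet below is a minimal sketch rather than an official recipe: the generation settings (`max_new_tokens`, `temperature`) and the example prompt are illustrative assumptions, and the chat template shipped with the tokenizer is used as-is.

```python
# Minimal inference sketch (assumes a GPU machine and `pip install transformers accelerate`).
# Sampling settings below are illustrative assumptions, not recommendations from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```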

## Training procedure

We finetune [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) for
3 epochs with a 16k context length using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
Our [full training configuration](https://github.com/open-thoughts/open-thoughts/blob/main/train/OpenThinker-32B.yaml)
is provided in [our repository](https://github.com/open-thoughts/open-thoughts/tree/main).
Training the 32B model on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
was done on AWS SageMaker with 8xH100 P5 nodes; on 4 nodes, this took around 90 hours.
Meanwhile, for training on [OpenThoughts-Unverified-173k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k),
we used 96 nodes of 4xA100 (64 GB per GPU); training took 30 hours, for a total of 11,520 A100 hours on the Leonardo Supercomputer.
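
As a quick sanity check on the compute figures above, the arithmetic below simply multiplies the stated node counts, GPUs per node, and wall-clock times; the H100-hour total is derived from the numbers in this paragraph rather than reported separately.

```python
# Sanity check of the GPU-hour figures quoted above (derived from the stated numbers).
h100_hours = 4 * 8 * 90    # 4 nodes x 8 H100s x ~90 h  -> 2880 H100 hours (OpenThoughts-114k run)
a100_hours = 96 * 4 * 30   # 96 nodes x 4 A100s x 30 h -> 11520 A100 hours (Unverified-173k run)
print(h100_hours, a100_hours)
```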

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
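
Training itself was run through LLaMA-Factory with the YAML configuration linked above. Purely to illustrate what the values in the list mean, the snippet below sketches an equivalent Hugging Face `TrainingArguments` object; it is not the configuration actually used, and the output path and the commented precision flag are assumptions.

```python
# Illustrative only: TrainingArguments mirroring the hyperparameters listed above.
# The actual run used LLaMA-Factory with the YAML config linked in "Training procedure".
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="openthinker-32b-sft",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=1,      # x 3 grad-accum steps x 32 GPUs = 96 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    # bf16=True,                        # precision is not stated in this card
)

# Effective train batch size across the 32 GPUs used for this run.
print(args.per_device_train_batch_size * args.gradient_accumulation_steps * 32)  # 96
```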

### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More information can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Citation

```
@misc{openthoughts,
  author = {Team, OpenThoughts},
  month = jan,
  title = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year = {2025}
}
```

# Links
- [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- [Open Thoughts Measuring Reasoning with Evalchemy Blog Post](https://www.open-thoughts.ai/blog/measure)
- [Open Thoughts OpenThinker-32B Post](https://www.open-thoughts.ai/blog/scale)
- [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- [OpenThoughts-Unverified-173k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k)
- [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B)
- [OpenThinker-7B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-7B-Unverified)
- [OpenThinker-32B model](https://huggingface.co/open-thoughts/OpenThinker-32B) - this model
- [OpenThinker-32B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-32B-Unverified)