---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets:
- Skywork/Skywork-OR1-RL-Data
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
<div align="center">
# 🤔 Skywork-OR1 (Open Reasoner 1)
<div>
✊ Unleashing the Power of Reinforcement Learning for Math and Code Reasoners 🤖
</div>
</div>
<br>
<div align="center">
[🤗 Skywork-OR1 Models](https://huggingface.co/collections/Skywork/skywork-or1-67fa1bcb41b436ef2def76b9)
[🤗 Skywork-OR1-RL-Data](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
[🧑‍💻 GitHub](https://github.com/SkyworkAI/Skywork-OR1)
[📝 Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680)
[⭐ Stars](https://github.com/SkyworkAI/Skywork-OR1/stargazers)
[🍴 Forks](https://github.com/SkyworkAI/Skywork-OR1/fork)
</div>
## 🔥 News
- **May 29, 2025**: Our [Skywork Open Reasoner 1 Technical Report](https://arxiv.org/abs/2505.22312) has been released on arXiv. It provides further details on the training pipeline, our investigation into and mitigation of the entropy collapse phenomenon, and extensive analyses and ablation studies.
- **May 13, 2025**: We release the final versions of the **`Skywork-OR1`** model series: **`Skywork-OR1-32B`** and **`Skywork-OR1-7B`**.
- **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms Deepseek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.
- **April 15, 2025**: We release our RL training dataset [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data).
- **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source:
- 🤗 Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
- 🤗 Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
- 🧑💻 Code: [`Skywork-OR1`](https://github.com/SkyworkAI/Skywork-OR1)
- We also release a [Notion Blog](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680) that shares detailed training recipes and extensive experimental results, analyses, and insights, to help the community better research, understand, and push the frontier of open reasoning models.
## 📖 Overview
<div align="center">
<img src="./assets/32b_perf.jpg" width="100%"/>
<sub>AIME24 and AIME25 scores of Skywork-OR1-32B versus training steps in our multi-stage training pipeline.</sub>
</div>
The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained with large-scale rule-based reinforcement learning on carefully designed datasets and training recipes. The series includes two general-purpose reasoning models, **`Skywork-OR1-7B`** and **`Skywork-OR1-32B`**.
- **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms Deepseek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
- **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.
## 📊 Evaluation
<div align="center">
<img src="./assets/32b_eval.jpg" width="75%"/>
<img src="./assets/7b_eval.jpg" width="75%"/>
</div>
<br>
We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of Pass@1, which is common in prior work, we adopt Avg@K as the primary metric. Avg@K measures a model's average performance across K independent attempts, reducing the impact of sampling randomness and improving the reliability of the results. We believe Avg@K better reflects a model's stability and reasoning consistency.
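Formally, for a benchmark with $N$ problems and $K$ sampled responses per problem, Avg@K is the mean per-attempt accuracy (the standard formulation we assume here):

```latex
\mathrm{Avg@}K \;=\; \frac{1}{N}\sum_{i=1}^{N} \frac{1}{K}\sum_{j=1}^{K} \mathbb{1}\!\left[\text{response } j \text{ to problem } i \text{ is correct}\right]
```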
We include the detailed results in the following table.
| Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
| ---------------------------- | --------------- | --------------- | ------------------------------------- |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
| Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
| **Skywork-OR1-7B** | 70.2 | 54.6 | 47.6 |
| DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
| TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
| QwQ-32B | 79.5 | 65.3 | 61.6 |
| Qwen3-32B | 81.4 | 72.9 | 65.7 |
| DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
| **Skywork-OR1-32B** | 82.2 | 73.3 | 63.0 |
## 🎯 Getting Started
### Installation
Docker environment:
```bash
# Pull the verl Docker image
docker pull whatcanyousee/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te2.0-megatron0.11.0-v0.0.6
# Launch a container from the image pulled above (replace <image:tag> with that tag)
docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag>
# Inside the container, install Skywork-OR1
git clone https://github.com/SkyworkAI/Skywork-OR1.git && cd Skywork-OR1 && pip3 install -e .
```
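Before installing anything inside the container, it may be worth confirming that GPU passthrough works (our suggestion, not part of the official setup):

```bash
# Inside the container: the GPUs listed here should match those on the host
nvidia-smi
```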
Conda environment:
```bash
# Create a Python 3.10 environment
conda create -n verl python=3.10
conda activate verl
# Install dependencies: PyTorch 2.4.0 (CUDA 12.4) and FlashAttention
pip3 install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip3 install flash-attn --no-build-isolation
git clone https://github.com/SkyworkAI/Skywork-OR1.git
cd Skywork-OR1
pip3 install -e .
```
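As a quick sanity check of the Conda environment (a suggestion on our part; the import names below are the standard ones for these packages):

```bash
# Verify PyTorch sees CUDA and that flash-attn imports cleanly
python3 -c "import torch, flash_attn; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```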
### Training ⚙️
We provide training scripts and data to reproduce the results of the **`Skywork-OR1`** series.
#### Training Data Preparation
To prepare the training data, we provide a script to download the data from Hugging Face and filter the problems based on the difficulty level with respect to a particular model (i.e., DeepSeek-R1-Distill-Qwen-{1.5,7,32}B).
```bash
model_size=32b # or 1p5b, 7b
python ./or1_scripts/data_preprocess/download_and_filter_data_${model_size}.py --local_dir ./or1_data/train
```
This will generate the training data in the following format:
```bash
./or1_data/train/train_${model_size}_math.pkl
./or1_data/train/train_${model_size}_code.pkl
```
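To spot-check a generated file, a minimal sketch (assuming each pickle holds a list-like collection of problem records; adjust the path for your model size):

```bash
python3 -c "
import pickle
with open('./or1_data/train/train_32b_math.pkl', 'rb') as f:
    data = pickle.load(f)
print(type(data), len(data))  # container type and number of retained problems
"
```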
#### Train Script
By default, we only provide evaluation on AIME datasets. If you would like to evaluate on LiveCodeBench, please refer to the section [**Evaluation Data Preparation**](#evaluation-data-preparation) and set `LIVECODEBENCH_DATA_PATH` to `./or1_data/eval/livecodebench/livecodebench_2408_2502`.
```bash
# Note: You must provide CODE_PATH and MODEL_PATH
model_size=7b # or 32b
train_seq_len=8 # or 16, 32
export CODE_PATH=./
export MODEL_PATH=  # set to your base model checkpoint
bash ./or1_scripts/train/${model_size}_${train_seq_len}k.sh
```
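For example, to launch the 7B run with an 8k training sequence length starting from its base model (assuming `MODEL_PATH` accepts a Hugging Face model ID as well as a local checkpoint path):

```bash
export CODE_PATH=./
export MODEL_PATH=deepseek-ai/DeepSeek-R1-Distill-Qwen-7B  # assumption: HF ID or local path
bash ./or1_scripts/train/7b_8k.sh
```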
### Using Ray for Multi-Node Training
If you plan to perform **multi-node training**, you need to **start and connect all nodes using Ray** before launching the training script. Here's a quick guide to set up Ray across machines:
#### Step 1: Start Ray on the Head Node (node0)
On the first node (typically called `node0`), run:
```bash
ray start --head --dashboard-host=0.0.0.0
```
After running the command, you will see a message like:
```
Ray runtime started.
Next steps
To add another node to this Ray cluster, run
ray start --address='10.94.16.4:6379'
```
Note down the head node address (in this example, `10.94.16.4:6379`).
#### Step 2: Connect Other Nodes (e.g., node1)
On each additional worker node (e.g., `node1`), run the following, replacing the IP with that of your head node:
```bash
ray start --address='10.94.16.4:6379'
```
#### Step 3: Check Cluster Status
On `node0`, run:
```bash
ray status
```
You should see output showing all connected nodes and available resources (e.g., CPUs, GPUs, memory). For example:
```
Resources
---------------------------------------------------------------
Usage:
0.0/360.0 CPU
0.0/16.0 GPU
...
```
Once the Ray cluster is up and running, you can launch the training script as usual. The script will automatically utilize the connected nodes.
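For instance, on a two-node cluster, the launch from the head node looks the same as in the single-node case (a sketch; per the note above, the script is expected to pick up the running Ray cluster automatically):

```bash
# On node0, after `ray status` shows all nodes connected
export CODE_PATH=./
export MODEL_PATH=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B  # assumption: HF ID or local path
bash ./or1_scripts/train/32b_8k.sh
```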
### Evaluation ⚖️
We provide evaluation scripts to reproduce the results of the **`Skywork-OR1`** series.
#### Evaluation Data Preparation
Evaluation data for AIME24 and AIME25 is already available in our GitHub repository.
For LiveCodeBench, please download the data from [Hugging Face](https://huggingface.co/datasets/Skywork/LiveCodeBench).
```bash
# Download LiveCodeBench
huggingface-cli download Skywork/LiveCodeBench --repo-type=dataset --local-dir ./or1_data/eval/livecodebench
unzip ./or1_data/eval/livecodebench/livecodebench.zip -d ./or1_data/eval/livecodebench/
mv ./or1_data/eval/livecodebench/livecodebench/* ./or1_data/eval/livecodebench/
```
#### Evaluation Start
```bash
bash ./or1_scripts/eval/eval_7b.sh
bash ./or1_scripts/eval/eval_32b.sh
```
The evaluation results will be automatically saved to [`outputs/evalation/pass.csv`](outputs/evalation/pass.csv).
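To skim the results from a terminal, one option (assuming a plain comma-separated file):

```bash
# Render the CSV as an aligned table
column -s, -t < outputs/evalation/pass.csv
```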
## 📄 Technical Report
Our [Skywork Open Reasoner 1 Technical Report](https://arxiv.org/abs/2505.22312) is available on arXiv.
## 🙏 Acknowledgements
- Both of our models are trained on top of [`DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [`DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
- Both models are trained using [a custom fork](https://github.com/SkyworkAI/Skywork-OR1) of the wonderful [`verl`](https://github.com/volcengine/verl) project.
## 📚 Citation
Please cite the following:
```bibtex
@article{he2025skywork,
  title={Skywork Open Reasoner 1 Technical Report},
  author={He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
  journal={arXiv preprint arXiv:2505.22312},
  year={2025}
}
@misc{skywork-or1-2025,
  title={Skywork Open Reasoner Series},
  author={He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
  howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
  note={Notion Blog},
  year={2025}
}
``` |