Safetensors
qwen2

Add Apache 2.0 License and other relevant metadata

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +169 -31
README.md CHANGED
@@ -3,11 +3,19 @@ base_model:
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  datasets:
  - Skywork/Skywork-OR1-RL-Data
  ---
  <div align="center">

  # 🤔 Skywork-OR1 (Open Reasoner 1)

  </div>
  <div>
  <br>
@@ -26,6 +34,11 @@ datasets:

  ## 🔥 News

  - **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source
  - 🤗 Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
  - 🤗 Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
@@ -35,58 +48,183 @@ datasets:
  ## 📖 Overview

  <div align="center">
- <img src="./assets/skywork-or1-math-7b-multi-stage.png" width="60%"/>

- <sub>The AIME24 scores versus training steps of Skywork-OR1-Math-7B in our multi-stage training pipeline.</sub>
  </div>

- The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B-Preview`** and **`Skywork-OR1-32B-Preview`**, along with a math-specialized model, **`Skywork-OR1-Math-7B`**.
-
- - **[`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B)** is specifically optimized for mathematical reasoning, scoring **69.8** on AIME24 and **52.3** on AIME25, well ahead of all models of similar size.
- - **[`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview)** matches the performance of the 671B-parameter Deepseek-R1 on math tasks (AIME24 and AIME25) and coding tasks (LiveCodeBench).
- - **[`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)** outperforms all similarly sized models in both math and coding scenarios.

- The final release version will be available in two weeks.

  ## 📊 Evaluation

  <div align="center">
- <img src="./assets/32b_perf.png" width="75%"/>
- <img src="./assets/7b_perf.png" width="75%"/>
  </div>
  </div>

  We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of using Pass@1, which is common in prior work, we introduce Avg@K as the primary metric. This metric robustly measures a model's average performance across K independent attempts, reducing the impact of randomness and enhancing the reliability of the results. We believe that Avg@K provides a better reflection of a model's stability and reasoning consistency.

  We include the detailed results in the following table.

- | Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
- |-------|---------|---------|--------------|
- | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
- | Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
- | DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
- | TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
- | QwQ-32B | 79.5 | 65.3 | 61.6 |
- | DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
- | **Skywork-OR1-Math-7B** | 69.8 | 52.3 | 43.6 |
- | **Skywork-OR1-7B-Preview** | 63.6 | 45.8 | 43.9 |
- | **Skywork-OR1-32B-Preview** | 79.7 | 69.0 | 63.9 |

- ## ⚙️ Training Recipe

- We offer a brief overview of our data and training pipeline below. For more details, please refer to our Notion Blog [here](https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680).

- ### Data

- - We select, clean, and curate **a dataset of 110K verifiable, challenging, and diverse math problems and 14K coding questions** from open-source datasets.
- - We perform **model-aware difficulty estimation** for each problem and model and conduct **rigorous quality assessment prior to training** to ensure training efficiency and effectiveness.

- ### Training

- We develop a customized version of GRPO that leverages both data-wise and training-wise improvements:

- - We perform both **offline and online difficulty-based filtering** and **rejection sampling** to improve training efficiency.
- - We incorporate a **multi-stage training pipeline** coupled with **adaptive entropy control** and other techniques to enhance exploration and stability.

  ## 📄 Technical Report

@@ -111,7 +249,7 @@ Please cite the following:

  @misc{skywork-or1-2025,
  title={Skywork Open Reasoner Series},
- author = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and Liu, Yang and Zhou, Yahui},
  howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
  note={Notion Blog},
  year={2025}

  - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  datasets:
  - Skywork/Skywork-OR1-RL-Data
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-generation
  ---
+
  <div align="center">

  # 🤔 Skywork-OR1 (Open Reasoner 1)

+ <div>
+ ✊ Unleashing the Power of Reinforcement Learning for Math and Code Reasoners 🤖
+ </div>
+
  </div>
  <div>
  <br>
 

  ## 🔥 News

+ - **May 29, 2025**: Our [Skywork Open Reasoner 1 Technical Report](https://arxiv.org/abs/2505.22312) has been released on arXiv. It provides further details on the training pipeline, our investigation and mitigation of the entropy-collapse phenomenon, and extensive analyses and ablation studies.
+ - **May 13, 2025**: We release the final version of the **`Skywork-OR1`** model series: **`Skywork-OR1-32B`** and **`Skywork-OR1-7B`**.
+ - **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms Deepseek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
+ - **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.
+ - **April 15, 2025**: We release our RL training dataset [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data).
  - **April 13, 2025**: We release the **`Skywork-OR1`** (Open Reasoner 1) series of models, including **`Skywork-OR1-Math-7B`**, **`Skywork-OR1-32B-Preview`**, and **`Skywork-OR1-7B-Preview`**. We open-source
  - 🤗 Model weights: [`Skywork-OR1-Math-7B`](https://huggingface.co/Skywork/Skywork-OR1-Math-7B), [`Skywork-OR1-32B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-32B-Preview), [`Skywork-OR1-7B-Preview`](https://huggingface.co/Skywork/Skywork-OR1-7B-Preview)
  - 🤗 Training data: [`Skywork-OR1-RL-Data`](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data)
  ## 📖 Overview

  <div align="center">
+ <img src="./assets/32b_perf.jpg" width="100%"/>

+ <sub>The AIME24 and AIME25 scores versus training steps of Skywork-OR1-32B in our multi-stage training pipeline.</sub>
  </div>

+ The **`Skywork-OR1`** (Open Reasoner 1) model series consists of powerful math and code reasoning models trained using large-scale rule-based reinforcement learning with carefully designed datasets and training recipes. This series includes two general-purpose reasoning models, **`Skywork-OR1-7B`** and **`Skywork-OR1-32B`**.

+ - **[`Skywork-OR1-32B`](https://huggingface.co/Skywork/Skywork-OR1-32B)** outperforms Deepseek-R1 and Qwen3-32B on math tasks (AIME24 and AIME25) and delivers comparable performance on coding tasks (LiveCodeBench).
+ - **[`Skywork-OR1-7B`](https://huggingface.co/Skywork/Skywork-OR1-7B)** exhibits competitive performance compared to similarly sized models in both math and coding scenarios.

  ## 📊 Evaluation

  <div align="center">
+ <img src="./assets/32b_eval.jpg" width="75%"/>
+ <img src="./assets/7b_eval.jpg" width="75%"/>
  </div>
  </div>
+ <br>

  We evaluate our models on AIME24, AIME25, and LiveCodeBench. Instead of using Pass@1, which is common in prior work, we introduce Avg@K as the primary metric. This metric robustly measures a model's average performance across K independent attempts, reducing the impact of randomness and enhancing the reliability of the results. We believe that Avg@K provides a better reflection of a model's stability and reasoning consistency.

  We include the detailed results in the following table.

+ | Model | AIME24 (Avg@32) | AIME25 (Avg@32) | LiveCodeBench (8/1/24-2/1/25) (Avg@4) |
+ | ---------------------------- | --------------- | --------------- | ------------------------------------- |
+ | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 37.6 |
+ | Light-R1-7B-DS | 59.1 | 44.3 | 39.5 |
+ | **Skywork-OR1-7B** | 70.2 | 54.6 | 47.6 |
+ | DeepSeek-R1-Distill-Qwen-32B | 72.9 | 59.0 | 57.2 |
+ | TinyR1-32B-Preview | 78.1 | 65.3 | 61.6 |
+ | QwQ-32B | 79.5 | 65.3 | 61.6 |
+ | Qwen3-32B | 81.4 | 72.9 | 65.7 |
+ | DeepSeek-R1 | 79.8 | 70.0 | 65.9 |
+ | **Skywork-OR1-32B** | 82.2 | 73.3 | 63.0 |
+
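The Avg@K metric described above can be sketched in a few lines. This is an illustrative reimplementation, not the project's evaluation code; the function name and input layout are assumptions.

```python
# Avg@K sketch: score each of K independent attempts per problem
# (1.0 if correct, 0.0 otherwise), average over the K attempts,
# then average over all problems.
from statistics import mean

def avg_at_k(attempts_per_problem: list[list[bool]]) -> float:
    """attempts_per_problem[i] holds the K pass/fail outcomes for problem i."""
    return mean(
        mean(1.0 if ok else 0.0 for ok in attempts)
        for attempts in attempts_per_problem
    )

# Two problems, K=4 attempts each: 3/4 and 1/4 correct -> (0.75 + 0.25) / 2
print(avg_at_k([[True, True, True, False], [True, False, False, False]]))  # 0.5
```

Because each problem contributes the mean of K attempts rather than a single sample, the score is less sensitive to decoding randomness than Pass@1.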
+ ## 🎯 Getting Started
+
+ ### Installation
+
+ Docker environment:
+
+ ```bash
+ docker pull whatcanyousee/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te2.0-megatron0.11.0-v0.0.6
+
+ # Launch the desired Docker image:
+ docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag>
+
+ # Inside the container, install Skywork-OR1
+ git clone https://github.com/SkyworkAI/Skywork-OR1.git && cd Skywork-OR1 && pip3 install -e .
+ ```
+
+ Conda environment:
+
+ ```bash
+ # Create a Python 3.10 environment.
+ conda create -n verl python=3.10
+ conda activate verl
+
+ # Install RLLM dependencies.
+ pip3 install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
+ pip3 install flash-attn --no-build-isolation
+ git clone https://github.com/SkyworkAI/Skywork-OR1.git
+ cd Skywork-OR1
+ pip3 install -e .
+ ```
+
+ ### Training ⚙️
+
+ We provide training scripts and data to reproduce the results of the `Skywork-OR1` series.
+
+ ### Training Data Preparation
+
+ To prepare the training data, we provide a script that downloads the data from Hugging Face and filters the problems by difficulty level with respect to a particular model (i.e., DeepSeek-R1-Distill-Qwen-{1.5,7,32}B).
+
+ ```bash
+ model_size=32b # or 1p5b, 7b
+ python ./or1_scripts/data_preprocess/download_and_filter_data_${model_size}.py --local_dir ./or1_data/train
+ ```
+
+ This will generate the training data at the following paths:
+
+ ```bash
+ ./or1_data/train/train_${model_size}_math.pkl
+ ./or1_data/train/train_${model_size}_code.pkl
+ ```
+
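The generated `.pkl` files can be inspected from Python before training. This is a hypothetical sketch: the README does not document the pickle layout, so the assumption that each file holds a pickled sequence of problem records is ours.

```python
# Hypothetical sketch for inspecting a generated training file.
# Assumes (not documented here) that each .pkl holds a pickled
# sequence of problem records.
import pickle
from pathlib import Path

def load_problems(path):
    """Unpickle one training file produced by the data-prep script."""
    with open(path, "rb") as f:
        return pickle.load(f)

path = Path("./or1_data/train/train_32b_math.pkl")
if path.exists():  # only present after running the download/filter step above
    problems = load_problems(path)
    print(f"loaded {len(problems)} records from {path}")
```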
+ ### Train Script
+
+ By default, we only provide evaluation on the AIME datasets. If you would like to evaluate on LiveCodeBench, please refer to the section [**Evaluation Data Preparation**](#evaluation-data-preparation) and set `LIVECODEBENCH_DATA_PATH` to `./or1_data/eval/livecodebench/livecodebench_2408_2502`.
+
+ ```bash
+ # Note: You must provide CODE_PATH and MODEL_PATH
+ model_size=7b # or 32b
+ train_seq_len=8 # or 16, 32
+ export CODE_PATH=./
+ export MODEL_PATH=
+ bash ./or1_scripts/train/${model_size}_${train_seq_len}k.sh
+ ```
+
+ ### Using Ray for Multi-Node Training
+
+ If you plan to perform **multi-node training**, you need to **start and connect all nodes using Ray** before launching the training script. Here's a quick guide to setting up Ray across machines:
+
+ #### Step 1: Start Ray on the Head Node (node0)
+
+ On the first node (typically called `node0`), run:
+
+ ```bash
+ ray start --head --dashboard-host=0.0.0.0
+ ```
+
+ After running the command, you will see a message like:
+
+ ```
+ Ray runtime started.
+ Next steps
+ To add another node to this Ray cluster, run
+ ray start --address='10.94.16.4:6379'
+ ```
+
+ Note down the IP address (in this example, `10.94.16.4`).
+
+ #### Step 2: Connect Other Nodes (e.g., node1)
+
+ On each additional worker node (e.g., `node1`), run the following, replacing the IP with that of your head node:
+
+ ```bash
+ ray start --address='10.94.16.4:6379'
+ ```
+
+ #### Step 3: Check Cluster Status
+
+ On `node0`, run:
+
+ ```bash
+ ray status
+ ```
+
+ You should see output showing all connected nodes and available resources (e.g., CPUs, GPUs, memory). For example:
+
+ ```
+ Resources
+ ---------------------------------------------------------------
+ Usage:
+ 0.0/360.0 CPU
+ 0.0/16.0 GPU
+ ...
+ ```
+
+ Once the Ray cluster is up and running, you can launch the training script as usual. The script will automatically utilize the connected nodes.
+
+ ### Evaluation ⚖️
+
+ We provide evaluation scripts to reproduce the results of the `Skywork-OR1` series.
+
+ #### Evaluation Data Preparation

+ Evaluation data for AIME24 and AIME25 is already available in our GitHub repository.

+ For LiveCodeBench, please download the data from [Hugging Face](https://huggingface.co/datasets/Skywork/LiveCodeBench).

+ ```bash
+ # Download LiveCodeBench
+ huggingface-cli download Skywork/LiveCodeBench --repo-type=dataset --local-dir ./or1_data/eval/livecodebench
+ unzip ./or1_data/eval/livecodebench/livecodebench.zip -d ./or1_data/eval/livecodebench/
+ mv ./or1_data/eval/livecodebench/livecodebench/* ./or1_data/eval/livecodebench/
+ ```

+ #### Evaluation Start

+ ```bash
+ bash ./or1_scripts/eval/eval_7b.sh

+ bash ./or1_scripts/eval/eval_32b.sh
+ ```

+ The evaluation results will be automatically saved to [outputs/evalation/pass.csv](outputs/evalation/pass.csv).
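For downstream analysis, the saved CSV can be read back with the standard library. This is a hedged sketch: the column layout of `pass.csv` is not documented in this README, so rows are handled generically.

```python
# Sketch for reading the saved results back; the column names in
# pass.csv are not documented in this README, so rows are kept
# as generic dicts keyed by the CSV header.
import csv
from pathlib import Path

def read_results(path):
    """Return the rows of a results CSV as a list of dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

results_path = Path("outputs/evalation/pass.csv")  # path as written by the eval scripts
if results_path.exists():
    for row in read_results(results_path):
        print(row)
```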
 

  ## 📄 Technical Report

  @misc{skywork-or1-2025,
  title={Skywork Open Reasoner Series},
+ author = {He, Jujie and Liu, Jiacai and Liu, Chris Yuhao and Yan, Rui and Wang, Chaojie and Cheng, Peng and Zhang, Xiaoyu and Zhang, Fuxiang and Xu, Jiacheng and Shen, Wei and Li, Siyuan and Zeng, Liang and Wei, Tianwen and Cheng, Cheng and An, Bo and Liu, Yang and Zhou, Yahui},
  howpublished={\url{https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680}},
  note={Notion Blog},
  year={2025}