Nickyang committed on
Commit a37e947 · verified · 1 Parent(s): e1dc61e

Update README.md

Files changed (1)
  1. README.md +58 -3
README.md CHANGED

---
license: mit
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
---

<div align="center">
<span style="font-family: default; font-size: 1.5em;">FastCuRL-1.5B-Preview</span>
</div>

## FastCuRL Overview

### 2025-05-23

We release **FastCuRL-1.5B-V3** and **FastCuRL-1.5B-V2**.

### 2025-03-21

Paper: https://arxiv.org/abs/2503.17287

### 2025-03-17

We release **FastCuRL-1.5B-Preview**, a slow-thinking reasoning model that **outperforms** the previous SoTA *DeepScaleR-1.5B-Preview* using only **50% of the training steps**! We apply a novel curriculum-guided iterative lengthening reinforcement learning approach to *DeepSeek-R1-Distill-Qwen-1.5B* and observe continuous performance improvements as training steps increase. To support reproduction of our work and advance research progress, we open-source our code, model, and data.

Code: https://github.com/nick7nlp/FastCuRL

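Since the card declares `pipeline_tag: text-generation` and `library_name: transformers`, the model can presumably be loaded with the standard `transformers` generation API. Below is a minimal sketch; the repository ID `Nickyang/FastCuRL-1.5B-Preview`, the prompt, and the generation settings are illustrative assumptions, not documented recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID; adjust if the actual Hugging Face repo differs.
model_id = "Nickyang/FastCuRL-1.5B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "If x + y = 10 and xy = 21, what is x^2 + y^2? Please reason step by step."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Slow-thinking reasoning models typically need a long generation budget;
# 8192 new tokens and temperature 0.6 are illustrative choices, not documented settings.
outputs = model.generate(inputs, max_new_tokens=8192, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
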
## Key Results

We report Pass@1 accuracy averaged over 16 samples for each problem.

| Model | AIME 2024 | MATH 500 | AMC 2023 | Minerva Math | OlympiadBench | Avg. |
|-------|-----------|----------|----------|--------------|---------------|------|
| Qwen2.5-Math-7B-Instruct | 13.3 | 79.8 | 50.6 | 34.6 | 40.7 | 43.8 |
| rStar-Math-7B | 26.7 | 78.4 | 47.5 | - | 47.1 | - |
| Eurus-2-7B-PRIME | 26.7 | 79.2 | 57.8 | 38.6 | 42.1 | 48.9 |
| Qwen2.5-7B-SimpleRL | 26.7 | 82.4 | 62.5 | **39.7** | 43.3 | 50.9 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| Still-1.5B | 32.5 | 84.4 | 66.7 | 29.0 | 45.4 | 51.6 |
| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 57.0 |
| **FastCuRL-1.5B-Preview** | 43.1 | 88.0 | 74.2 | 31.6 | 50.4 | 57.5 |
| **FastCuRL-1.5B-V2** | 47.5 | 89.3 | 77.0 | 32.8 | 53.3 | 60.0 |
| **FastCuRL-1.5B-V3** | **49.6** | **90.5** | **78.5** | **34.7** | **54.5** | **61.6** |

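As a reading aid for the metric, the sketch below computes Pass@1 averaged over 16 samples per problem in the generic way; it is not the authors' evaluation harness, and `is_correct` stands in for a hypothetical answer-equivalence checker.

```python
from typing import Callable, List

def avg_pass_at_1(
    samples_per_problem: List[List[str]],    # e.g. 16 sampled completions per problem
    references: List[str],                   # gold answers, one per problem
    is_correct: Callable[[str, str], bool],  # hypothetical answer-equivalence checker
) -> float:
    """Mean fraction of correct samples per problem, averaged over all problems."""
    per_problem = []
    for samples, ref in zip(samples_per_problem, references):
        correct = sum(is_correct(sample, ref) for sample in samples)
        per_problem.append(correct / len(samples))
    return sum(per_problem) / len(per_problem)
```
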
## Training Data

Following DeepScaleR, our training dataset consists of 40,315 unique problem-answer pairs compiled from the following sources (a rough compilation sketch follows the list):
- AIME problems (1984-2023)
- AMC problems (before 2023)
- Omni-MATH dataset
- Still dataset

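The compilation above is essentially a merge-and-deduplicate over those sources. The sketch below shows one plausible way to build the unique problem-answer pairs; the file names and the `{"problem", "answer"}` record schema are assumptions for illustration, not the released data format.

```python
import json

# Hypothetical source files, one JSON record per line: {"problem": ..., "answer": ...}
SOURCES = ["aime_1984_2023.jsonl", "amc_pre2023.jsonl", "omni_math.jsonl", "still.jsonl"]

def load_pairs(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Deduplicate by normalized problem text so each problem appears exactly once.
seen, unique_pairs = set(), []
for path in SOURCES:
    for record in load_pairs(path):
        key = " ".join(record["problem"].split()).lower()
        if key not in seen:
            seen.add(key)
            unique_pairs.append({"problem": record["problem"], "answer": record["answer"]})

print(f"{len(unique_pairs)} unique problem-answer pairs")
```
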
## Acknowledgements

- Our training experiments are powered by our heavily modified fork of [verl](https://github.com/volcengine/verl) and [deepscaler](https://github.com/agentica-project/deepscaler).
- Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).