<span style="font-family: default; font-size: 1.5em;">FastCuRL-1.5B-Preview</span>
</div>
## FastCuRL Overview
We release **FastCuRL-1.5B-Preview**, a slow-thinking reasoning model that achieves 43.1% accuracy on the AIME 2024 benchmark! We apply a novel curriculum-guided, iterative-lengthening reinforcement learning approach to the distilled 1.5B model and observe continuous performance improvement as training steps increase. To make our work easier to reproduce and to advance research progress, we open-source our code, model, and data.
Code: https://github.com/nick7nlp/FastCuRL
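
The card metadata lists `transformers` as the library, so the model can be loaded with the standard `AutoTokenizer` / `AutoModelForCausalLM` APIs. The sketch below is illustrative only: the Hugging Face repository id, chat-template usage, and sampling settings are assumptions rather than values documented on this card.

```python
# Minimal inference sketch (assumed repo id and sampling settings; adjust as needed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nick7nlp/FastCuRL-1.5B-Preview"  # assumption: substitute the actual Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

problem = "Find the remainder when 7^100 is divided by 13."
messages = [{"role": "user", "content": problem}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A generous max_new_tokens leaves room for the model's long chain-of-thought reasoning.
output_ids = model.generate(input_ids, max_new_tokens=8192, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```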

We report Pass@1 accuracy averaged over 16 samples for each problem.

| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 57.0 |
| <strong>FastCuRL-1.5B-Preview</strong> | <strong>43.1</strong> | <strong>88.0</strong> | <strong>74.2</strong> | 31.6 | <strong>50.4</strong> | <strong>57.5</strong> |
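
The scores above are Pass@1 accuracies averaged over 16 samples per problem. A small illustrative sketch of that metric follows; `grade(answer, reference)` is a placeholder for whatever answer-equivalence checker is actually used during evaluation.

```python
# Illustrative sketch of Pass@1 averaged over k samples per problem.
# `grade(answer, reference)` is a placeholder for a real answer-equivalence checker.
from statistics import mean

def avg_pass_at_1(samples_per_problem, references, grade, k=16):
    """samples_per_problem[i] holds k sampled answers for problem i."""
    per_problem = []
    for answers, reference in zip(samples_per_problem, references):
        assert len(answers) == k, "expected exactly k samples per problem"
        per_problem.append(mean(grade(a, reference) for a in answers))  # fraction of correct samples
    return mean(per_problem)  # average over problems

# Toy usage with an exact-match grader and k=4 for brevity.
score = avg_pass_at_1([["42", "41", "42", "42"]], ["42"], lambda a, r: float(a == r), k=4)
print(score)  # 0.75
```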
## Training Data
Following DeepScaleR, our training dataset consists of 40,315 unique problem-answer pairs compiled from the following sources (a brief de-duplication sketch follows the list):
- AIME problems (1984-2023)
- AMC problems (before 2023)
- Omni-MATH dataset
- Still dataset
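
Below is a purely illustrative sketch of how such a pool of unique problem-answer pairs could be compiled and de-duplicated. The file names and the `{"problem", "answer"}` record format are placeholders, not the actual FastCuRL data layout.

```python
# Illustrative compilation of unique problem-answer pairs from several sources.
# File names and the record format are placeholders.
import json

source_files = [
    "aime_1984_2023.json",  # placeholder paths, one per source listed above
    "amc_pre_2023.json",
    "omni_math.json",
    "still.json",
]

seen_problems = set()
dataset = []
for path in source_files:
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a JSON list of {"problem": ..., "answer": ...} dicts
    for record in records:
        problem = record["problem"].strip()
        if problem in seen_problems:  # keep each problem only once across sources
            continue
        seen_problems.add(problem)
        dataset.append({"problem": problem, "answer": record["answer"]})

print(f"compiled {len(dataset)} unique problem-answer pairs")
```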
## Acknowledgements