### Model description
Most existing methods focus on distilling DeepSeek-R1 to improve reasoning ability. However, to the best of our knowledge, no distilled model has surpassed DeepSeek-R1 or QwQ-32B. We introduce NTele-R1-32B-DS, a state-of-the-art mathematical reasoning model that outperforms QwQ-32B across common reasoning benchmarks, including AIME2024/2025, MATH500, and GPQA-Diamond.
Notably, NTele-R1-32B-DS is the first such model to score **above 80/70 on the challenging AIME2024/2025** benchmarks.
| Model | Trained From | Release Date | AIME2024 (ours/reported) | AIME2025 (o/r) | MATH500 (o/r) | GPQA-Diamond (o/r) |
|-------|--------------|--------------|--------------------------|----------------|---------------|--------------------|
| QwQ-32B | - | 2025-03-06 | 76.25 / 79.5 | 67.30 / - | 94.6 / - | 63.6 / - |
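As a quick aid when reading the table, the measured ("ours") scores can be summarized in a few lines. The numbers below are copied from the QwQ-32B row above; the unweighted mean is our own convention for summarizing the four benchmarks, not an official aggregate metric:

```python
# Measured ("ours") scores for QwQ-32B, copied from the table above.
scores = {
    "AIME2024": 76.25,
    "AIME2025": 67.30,
    "MATH500": 94.6,
    "GPQA-Diamond": 63.6,
}

# Unweighted mean across the four benchmarks
# (our own summary convention, not an official aggregate metric).
average = sum(scores.values()) / len(scores)
print(f"QwQ-32B mean over {len(scores)} benchmarks: {average:.2f}")
```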