Enhance model card with metadata, paper link, and project page
#1
by nielsr (HF Staff) · opened

README.md CHANGED
```diff
@@ -1,17 +1,26 @@
 ---
-license: cc-by-4.0
 base_model:
 - nvidia/OpenMath-Nemotron-14B
+license: cc-by-4.0
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- mathematical-reasoning
+- qwen2
+paper: https://huggingface.co/papers/2507.08267
 ---
 
 # Fast-OpenMath-Nemotron-14B
+This model is based on the paper [A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning](https://huggingface.co/papers/2507.08267).
+
 By applying SFT and GRPO on difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed [`Fast-Math-R1-14B`](https://huggingface.co/RabotniKuma/Fast-Math-R1-14B),
 which achieves approx. 30% faster inference on average, while maintaining accuracy.
 
 In addition, we trained and open-sourced `Fast-OpenMath-Nemotron-14B`, an efficiency-optimized version of NVIDIA’s [`OpenMath-Nemotron-14B`](https://huggingface.co/nvidia/OpenMath-Nemotron-14B), following the same approach.
 Compared to OpenMath-Nemotron-14B, this model enables approx. 30% faster inference on average, with minimal loss in performance.
 
 Technical details can be found in [our github repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).
+Project page: https://analokmaus.github.io/Fast-Math-R1/
 
 **Note:**
 This model likely inherits the ability to perform inference in TIR mode from the original model. However, all of our experiments were conducted in CoT mode, and its performance in TIR mode has not been evaluated.
@@ -19,19 +28,19 @@ This model likely inherits the ability to perform inference in TIR mode from the
 # Evaluation
 <img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true' max-height='400px'>
 
 |                            |              | AIME 2024        |                    | AIME 2025        |                    |
 | -------------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
 | Model                      | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
 | OpenMath-Nemotron-14B      | 32000        | 76.2             | 11493              | 64.5             | 13414              |
 |                            | 24000        | 75.4             | 11417              | 63.4             | 13046              |
 |                            | 16000        | 66               | 10399              | 54.2             | 11422              |
 |                            | 12000        | 55               | 9053               | 40               | 9609               |
 |                            | 8000         | 36               | 6978               | 27.2             | 7083               |
 | Fast-OpenMath-Nemotron-14B | 32000        | 70.7             | 9603               | 61.4             | 11424              |
 |                            | 24000        | 70.6             | 9567               | 60.9             | 11271              |
 |                            | 16000        | 66.6             | 8954               | 55.3             | 10190              |
 |                            | 12000        | 59.4             | 7927               | 45.6             | 8752               |
 |                            | 8000         | 47.6             | 6282               | 33.8             | 6589               |
 
 # Inference
 ## vLLM
@@ -59,7 +68,7 @@ sampling_params = SamplingParams(
 )
 messages = [
     {
         'role': 'user',
         'content': (
             'Solve the problem, and put the answer in \boxed{{}}. '
             'Sarah is twice as old as her youngest brother. If the difference between their ages is 15 years, how old is her youngest brother?'
```
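For reference, `Pass@1 (avg. 64)` in the evaluation table is the per-problem success rate averaged over 64 sampled generations. A minimal sketch of that computation (function and variable names are illustrative, not taken from the evaluation code):

```python
def pass_at_1(correct):
    """Pass@1 estimated as the fraction of correct samples, in percent."""
    return 100.0 * sum(correct) / len(correct)

# One problem, 64 generations, 48 of them reaching the right boxed answer:
print(pass_at_1([True] * 48 + [False] * 16))  # -> 75.0
```

The table reports this value further averaged over all problems in each benchmark.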
The resulting README.md:

---
base_model:
- nvidia/OpenMath-Nemotron-14B
license: cc-by-4.0
pipeline_tag: text-generation
library_name: transformers
tags:
- mathematical-reasoning
- qwen2
paper: https://huggingface.co/papers/2507.08267
---

# Fast-OpenMath-Nemotron-14B
This model is based on the paper [A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning](https://huggingface.co/papers/2507.08267).

By applying SFT and GRPO on difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed [`Fast-Math-R1-14B`](https://huggingface.co/RabotniKuma/Fast-Math-R1-14B),
which achieves approx. 30% faster inference on average, while maintaining accuracy.

In addition, we trained and open-sourced `Fast-OpenMath-Nemotron-14B`, an efficiency-optimized version of NVIDIA’s [`OpenMath-Nemotron-14B`](https://huggingface.co/nvidia/OpenMath-Nemotron-14B), following the same approach.
Compared to OpenMath-Nemotron-14B, this model enables approx. 30% faster inference on average, with minimal loss in performance.

Technical details can be found in [our github repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).
Project page: https://analokmaus.github.io/Fast-Math-R1/

**Note:**
This model likely inherits the ability to perform inference in TIR mode from the original model. However, all of our experiments were conducted in CoT mode, and its performance in TIR mode has not been evaluated.

# Evaluation
<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true' max-height='400px'>

|                            |              | AIME 2024        |                    | AIME 2025        |                    |
| -------------------------- | ------------ | ---------------- | ------------------ | ---------------- | ------------------ |
| Model                      | Token budget | Pass@1 (avg. 64) | Mean output tokens | Pass@1 (avg. 64) | Mean output tokens |
| OpenMath-Nemotron-14B      | 32000        | 76.2             | 11493              | 64.5             | 13414              |
|                            | 24000        | 75.4             | 11417              | 63.4             | 13046              |
|                            | 16000        | 66               | 10399              | 54.2             | 11422              |
|                            | 12000        | 55               | 9053               | 40               | 9609               |
|                            | 8000         | 36               | 6978               | 27.2             | 7083               |
| Fast-OpenMath-Nemotron-14B | 32000        | 70.7             | 9603               | 61.4             | 11424              |
|                            | 24000        | 70.6             | 9567               | 60.9             | 11271              |
|                            | 16000        | 66.6             | 8954               | 55.3             | 10190              |
|                            | 12000        | 59.4             | 7927               | 45.6             | 8752               |
|                            | 8000         | 47.6             | 6282               | 33.8             | 6589               |

# Inference
## vLLM
…