---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---
|
|
|
|
|
# Kaggle AI Mathematical Olympiad - Progress Prize 2 - 9th Place Solution (Fast-Math-R1-14B) |
|
|
## Team |
|
|
- Hiroshi Yoshihara @ [Aillis Inc.](https://aillis.jp/en), [The Univ. of Tokyo](https://publichealth.f.u-tokyo.ac.jp/#page_home) |
|
|
- Yuichi Inoue @ [Sakana AI](https://sakana.ai) |
|
|
- Taiki Yamaguchi @ [Rist Inc.](https://www.rist.co.jp/en/) |
|
|
|
|
|
# Summary |
|
|
By applying SFT and GRPO to difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed `Fast-Math-R1-14B`, which achieves up to 60% faster inference while maintaining accuracy.
|
|
|
|
|
Technical details can be found in the [Kaggle discussion](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/discussion/571252) and on [GitHub](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1).
|
|
|
|
|
<img src="https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/8eb55edfdb8e922b2d504000fb1cefe22acf67ef/assets/pass1_aime2024.png?raw=true" width="50%"><img src="https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/8eb55edfdb8e922b2d504000fb1cefe22acf67ef/assets/pass1_aime2025.png?raw=true" width="50%"> |
|
|
|                              |              | AIME 2024        |               | AIME 2025        |               |
| ---------------------------- | ------------ | ---------------- | ------------- | ---------------- | ------------- |
| Model                        | Token budget | Pass@1 (avg. 64) | Output tokens | Pass@1 (avg. 64) | Output tokens |
| DeepSeek-R1-Distill-Qwen-14B | 16384        | 63.3             | 9590          | 46.7             | 10602         |
|                              | 12800        | 58.0             | 6444          | 41.9             | 6684          |
|                              | 8192         | 45.6             | 4920          | 30.6             | 4611          |
| Light-R1-14B-DS              | 16384        | **66.8**         | 10146         | **51.3**         | 11308         |
|                              | 12800        | 59.2             | 6974          | 43.8             | 6869          |
|                              | 8192         | 42.4             | 5500          | 30.4             | 4908          |
| Fast-Math-R1-14B             | 16384        | 66.0             | **7932**      | 49.2             | **9066**      |
|                              | 12800        | **63.0**         | **5996**      | **46.1**         | **6127**      |
|                              | 8192         | **51.4**         | **4269**      | **37.2**         | **3905**      |
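
Pass@1 (avg. 64) is estimated from 64 independent samples per problem: the fraction of correct completions per problem, averaged over the benchmark. Output tokens is the mean number of generated tokens under the given token budget (see the GitHub repository for the exact evaluation script):

$$
\text{Pass@1} = \frac{1}{N}\sum_{p=1}^{N}\frac{1}{64}\sum_{i=1}^{64}\mathbf{1}\left[\text{sample } i \text{ of problem } p \text{ is correct}\right]
$$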
|
|
|
|
|
|
|
|
# Datasets
|
|
- [Our first-stage SFT dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-SFT)
- [Our second-stage GRPO dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-GRPO)
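
Both datasets can be loaded with the `datasets` library. A minimal sketch (the `train` split name is an assumption; check each dataset card for the actual splits and columns):

```python
from datasets import load_dataset

# First stage: SFT data ('train' split assumed)
sft_ds = load_dataset('RabotniKuma/Fast-Math-R1-SFT', split='train')
# Second stage: GRPO data ('train' split assumed)
grpo_ds = load_dataset('RabotniKuma/Fast-Math-R1-GRPO', split='train')

print(sft_ds)
print(grpo_ds)
```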
|
|
|
|
|
# Inference |
|
|
## vLLM |
|
|
```python
from vllm import LLM, SamplingParams

# Load the model with vLLM
vllm_engine = LLM(
    model='RabotniKuma/Fast-Math-R1-14B',
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    stop='</think>',  # Important: early stop at </think> to save output tokens
)
outputs = vllm_engine.generate('1+1=', sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
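
Stopping at `</think>` truncates generation at the end of the reasoning block, so the final answer has to be recovered from the reasoning text itself. A minimal sketch, assuming the answer appears in the last `\boxed{...}` of the output (the `extract_boxed_answer` helper is hypothetical, not part of this repository):

```python
import re

def extract_boxed_answer(text: str):
    # Hypothetical helper: return the contents of the last \boxed{...}
    # in the generated text. Assumes simple, non-nested braces.
    matches = re.findall(r'\\boxed\{([^{}]*)\}', text)
    return matches[-1] if matches else None

answer = extract_boxed_answer(outputs[0].outputs[0].text)
print(answer)
```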