---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---

# Kaggle AI Mathematical Olympiad - Progress Prize 2 - 9th Place Solution (Fast-Math-R1-14B)
## Team
- Hiroshi Yoshihara @ [Aillis Inc.](https://aillis.jp/en), [The Univ. of Tokyo](https://publichealth.f.u-tokyo.ac.jp/#page_home)
- Yuichi Inoue @ [Sakana AI](https://sakana.ai)
- Taiki Yamaguchi @ [Rist Inc.](https://www.rist.co.jp/en/)

# Summary
By applying SFT and GRPO to difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed `Fast-Math-R1-14B`, which achieves up to 60% faster inference (approximately 30% on average) while maintaining accuracy.

Technical details can be found in the [Kaggle Discussion](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/discussion/571252) and on [GitHub](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1).

<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1973217%2F4f221ab914f3e950fa35bdab5723d462%2Fpass1_aime_all.png?generation=1744851665782759&alt=media" style="max-height: 300px;">

| Model                        | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 output tokens |
| ---------------------------- | ------------ | -------------------------- | ----------------------- | -------------------------- | ----------------------- |
| DeepSeek-R1-Distill-Qwen-14B | 16384        | 63.3                       | 9590                    | 46.7                       | 10602                   |
|                              | 12800        | 58.0                       | 8632                    | 41.9                       | 9363                    |
|                              | 8192         | 45.6                       | 6638                    | 30.6                       | 6897                    |
| Light-R1-14B-DS              | 16384        | **66.8**                   | 10146                   | **51.3**                   | 11308                   |
|                              | 12800        | 59.2                       | 9110                    | 43.8                       | 9834                    |
|                              | 8192         | 42.4                       | 7020                    | 30.4                       | 7124                    |
| Fast-Math-R1-14B             | 16384        | 66.0                       | **7932**                | 49.2                       | **9066**                |
|                              | 12800        | **63.0**                   | **7449**                | **46.1**                   | **8282**                |
|                              | 8192         | **51.4**                   | **5963**                | **37.2**                   | **6256**                |
| Fast-Math-R1-14B-SFT Only    | 16384        | 65.2                       | 10268                   | 49.7                       | 11264                   |
|                              | 12800        | 57.2                       | 9180                    | 42.8                       | 9805                    |
|                              | 8192         | 41.3                       | 7015                    | 30.1                       | 7074                    |
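
The table reports Pass@1 averaged over 64 samples per problem under a fixed output-token budget. The sketch below shows one way such a measurement could be reproduced with vLLM; the problem list and the `extract_answer` helper are illustrative placeholders, not the evaluation code used to produce the numbers above.

```python
# Illustrative sketch: Pass@1 averaged over 64 samples per problem under a
# fixed output-token budget. Problem data and answer extraction are placeholders.
import re
from vllm import LLM, SamplingParams

problems = [
    {'question': '...', 'answer': '...'},  # AIME-style problems (placeholder)
]

TOKEN_BUDGET = 16384  # corresponds to the "Token budget" column above
engine = LLM(
    model='RabotniKuma/Fast-Math-R1-14B',
    max_model_len=TOKEN_BUDGET + 1024,  # leave headroom for the prompt
    gpu_memory_utilization=0.9,
)
params = SamplingParams(
    temperature=1.0, top_p=0.90, min_p=0.05,
    max_tokens=TOKEN_BUDGET,
    n=64,  # 64 samples per problem
)

def extract_answer(text: str) -> str:
    # Assume the final answer appears as the last \boxed{...} in the generation.
    matches = re.findall(r'\\boxed\{([^}]*)\}', text)
    return matches[-1].strip() if matches else ''

outputs = engine.generate([p['question'] for p in problems], params)
for problem, out in zip(problems, outputs):
    n_correct = sum(extract_answer(o.text) == problem['answer'] for o in out.outputs)
    mean_tokens = sum(len(o.token_ids) for o in out.outputs) / len(out.outputs)
    print(f"Pass@1 (avg. 64) = {n_correct / len(out.outputs):.3f}, output tokens = {mean_tokens:.0f}")
```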


# Dataset
- [Our first stage SFT dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-SFT)
- [Our second stage GRPO dataset](https://huggingface.co/datasets/RabotniKuma/Fast-Math-R1-GRPO)
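
A minimal sketch for pulling both datasets with the `datasets` library; the split names and column layouts are assumptions here, so inspect the returned features before use.

```python
# Load the two training datasets from the Hugging Face Hub.
# The 'train' split name and column schema are assumptions; check .features after loading.
from datasets import load_dataset

sft_dataset = load_dataset('RabotniKuma/Fast-Math-R1-SFT', split='train')
grpo_dataset = load_dataset('RabotniKuma/Fast-Math-R1-GRPO', split='train')

print(sft_dataset)   # row count and column names of the first-stage SFT data
print(grpo_dataset)  # row count and column names of the second-stage GRPO data
```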

# Inference
## vLLM
```python
from vllm import LLM, SamplingParams


vllm_engine = LLM(
    model='RabotniKuma/Fast-Math-R1-14B',
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    stop='</think>',  # Important: early stop at </think> to save output tokens
)
outputs = vllm_engine.generate('1+1=', sampling_params=sampling_params)
```
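
Because generation stops early at `</think>`, the final answer has to be recovered from the reasoning text itself. A minimal post-processing sketch, assuming the model writes its result as `\boxed{...}` inside the reasoning trace (typical for R1-style models, but not guaranteed by this card):

```python
import re

def extract_boxed_answer(text: str) -> str | None:
    # Return the content of the last \boxed{...} in the generation, if present.
    matches = re.findall(r'\\boxed\{([^}]*)\}', text)
    return matches[-1].strip() if matches else None

for output in outputs:
    print(extract_boxed_answer(output.outputs[0].text))
```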