---
license: apache-2.0
base_model:
- Qwen/Qwen3-14B
---
# Fast-Math-Qwen3-14B
By applying SFT and GRPO to difficult math problems, we enhanced the performance of `DeepSeek-R1-Distill-Qwen-14B` and developed [`Fast-Math-R1-14B`](https://huggingface.co/RabotniKuma/Fast-Math-R1-14B), which achieves approx. 30% faster inference on average while maintaining accuracy.

In addition, we trained and open-sourced `Fast-Math-Qwen3-14B`, an efficiency-optimized version of `Qwen3-14B`, following the same approach.

**Compared to Qwen3-14B, this model enables approx. 65% faster inference on average, with minimal loss in performance.**

Technical details can be found in [our GitHub repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).

**Note:**
This model likely inherits the ability to perform inference in TIR mode from the original model. However, all of our experiments were conducted in CoT mode, and its performance in TIR mode has not been evaluated.

## Evaluation
<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_qwen3.png?raw=true' style='max-height: 300px;'>

| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 output tokens |
| ------------------- | ------------ | -------------------------- | ----------------------- | -------------------------- | ----------------------- |
| Qwen3-14B | 32000 | **79.3** | 13324 | **69.5** | 15165 |
| | 16000 | 65.5 | 9179 | 51.5 | 9724 |
| | 8000 | 29.7 | 5926 | 20.1 | 5484 |
| Fast-Math-Qwen3-14B | 32000 | 77.6 | 9668 | 66.6 | 11950 |
| | 16000 | **72.8** | 7161 | **60.7** | 7874 |
| | 8000 | **51.6** | 4778 | **36.9** | 4531 |

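As a rough sanity check of the efficiency claim, the output-token reduction can be computed directly from the table above (token reduction is only a proxy for wall-clock speedup; the averaged speedup figures quoted earlier were measured by the authors, not derived from this table):

```python
# AIME 2024 mean output tokens per token budget, copied from the table above.
qwen3 = {32000: 13324, 16000: 9179, 8000: 5926}
fast_math = {32000: 9668, 16000: 7161, 8000: 4778}

for budget in qwen3:
    reduction = 1 - fast_math[budget] / qwen3[budget]
    print(f'budget {budget}: {reduction:.1%} fewer output tokens')
```

At the full 32000-token budget this works out to roughly 27% fewer output tokens on AIME 2024, with larger relative savings at tighter budgets.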
# Inference
## vLLM
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-Qwen3-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=16000,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    stop='</think>',  # For even faster inference, apply early stopping at the </think> tag and extract the final boxed content.
)
messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
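When generation is stopped early at the `</think>` tag as suggested above, the final answer still has to be pulled out of the generated text. A minimal sketch of such a helper (`extract_boxed` is our own illustration, not part of vLLM or this model's tooling) that returns the content of the last `\boxed{...}`, handling nested braces:

```python
import re

def extract_boxed(text: str):
    # Collect the contents of every \boxed{...} span, tracking brace depth
    # so nested braces (e.g. \boxed{\frac{1}{2}}) are matched correctly.
    contents = []
    for m in re.finditer(r'\\boxed\{', text):
        depth, i = 1, m.end()
        while i < len(text) and depth > 0:
            if text[i] == '{':
                depth += 1
            elif text[i] == '}':
                depth -= 1
            i += 1
        if depth == 0:
            contents.append(text[m.end():i - 1])
    # Return the last boxed answer, or None if no \boxed{...} was found.
    return contents[-1] if contents else None

print(extract_boxed(r'Her brother is \boxed{15} years old.'))  # → 15
```

This can be applied to `response[0].outputs[0].text` from the snippet above to recover the final answer.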