---
base_model:
- Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mathematics
- reasoning
- qwen
---
# Fast-Math-Qwen3-14B
**Fast-Math-Qwen3-14B** is an efficiency-optimized version of `Qwen3-14B`, developed following the two-stage recipe of Supervised Fine-Tuning (SFT) and Reinforcement Learning with GRPO (Group Relative Policy Optimization) presented in the paper:
**[A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning](https://huggingface.co/papers/2507.08267)**
Project page: [https://analokmaus.github.io/kaggle-aimo2-fast-math-r1/](https://analokmaus.github.io/kaggle-aimo2-fast-math-r1/)
Compared to the base `Qwen3-14B`, this model delivers **approximately 65% faster inference on average, with minimal loss in performance**.
Technical details can be found in [our GitHub repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).
**Note:**
This model likely inherits the ability to perform inference in TIR (tool-integrated reasoning) mode from the original model. However, all of our experiments were conducted in CoT (chain-of-thought) mode, and its performance in TIR mode has not been evaluated.
## Evaluation
<img src='https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/blob/master/assets/pass1_aime_all.png?raw=true' style='max-height: 400px;'>
| Model | Token budget | AIME 2024 Pass@1 (avg. of 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. of 64) | AIME 2025 mean output tokens |
| ------------------- | ------------ | ----------------------------- | ---------------------------- | ----------------------------- | ---------------------------- |
| Qwen3-14B           | 32000        | 79.3                          | 13669                        | 69.5                          | 16481                        |
|                     | 24000        | 75.9                          | 13168                        | 65.6                          | 15235                        |
|                     | 16000        | 64.5                          | 11351                        | 50.4                          | 12522                        |
|                     | 12000        | 49.7                          | 9746                         | 36.3                          | 10353                        |
|                     | 8000         | 28.4                          | 7374                         | 19.5                          | 7485                         |
| Fast-Math-Qwen3-14B | 32000        | 77.6                          | 9740                         | 66.6                          | 12281                        |
|                     | 24000        | 76.5                          | 9634                         | 65.3                          | 11847                        |
|                     | 16000        | 72.6                          | 8793                         | 60.1                          | 10195                        |
|                     | 12000        | 65.1                          | 7775                         | 49.4                          | 8733                         |
|                     | 8000         | 50.7                          | 6260                         | 36.0                          | 6618                         |
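For reference, the mean output-token reduction at the 32000-token budget can be read directly off the table above:

```python
# Mean output tokens at the 32000-token budget, taken from the table above.
base = {'AIME 2024': 13669, 'AIME 2025': 16481}  # Qwen3-14B
fast = {'AIME 2024': 9740, 'AIME 2025': 12281}   # Fast-Math-Qwen3-14B

for bench in base:
    # Relative reduction in generated tokens, a rough proxy for wall-clock savings.
    reduction = 1 - fast[bench] / base[bench]
    print(f'{bench}: {reduction:.1%} fewer output tokens')
```

Note that the overall speedup also depends on batch size and hardware; the token counts here only bound the sequential decoding work saved.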
## Inference
### vLLM
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-Qwen3-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=16000,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    # For even faster inference, apply early stopping at the </think> tag
    # and extract the final boxed content from the reasoning trace.
    stop='</think>',
)
messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference '
            'between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
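The sampling configuration above stops generation at the `</think>` tag and suggests extracting the final boxed content. A minimal, hypothetical extraction helper (not part of the model's API), assuming the answer contains no nested braces:

```python
import re

def extract_boxed(text: str):
    # Find all \boxed{...} occurrences and return the content of the last one.
    # The [^{}]* pattern is a simplification: it does not handle nested braces.
    matches = re.findall(r'\\boxed\{([^{}]*)\}', text)
    return matches[-1] if matches else None

# Example on a truncated reasoning trace:
print(extract_boxed(r'Her brother is 15, so the answer is \boxed{15}'))  # prints 15
```

Because decoding stops at `</think>`, the boxed answer must already appear inside the reasoning trace for this to work; otherwise, let the model generate its final answer normally.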