Improve model card: Add project page and relevant tags

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +7 -1
README.md CHANGED
```diff
@@ -1,9 +1,13 @@
 ---
 base_model:
 - Qwen/Qwen3-14B
+library_name: transformers
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
+tags:
+- mathematics
+- reasoning
+- qwen
 ---
 
 # Fast-Math-Qwen3-14B
@@ -12,6 +16,8 @@ library_name: transformers
 
 **[A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning](https://huggingface.co/papers/2507.08267)**
 
+Project page: [https://analokmaus.github.io/kaggle-aimo2-fast-math-r1/](https://analokmaus.github.io/kaggle-aimo2-fast-math-r1/)
+
 This model enables **approx. 65% faster inference on average, with minimal loss in performance**, compared to the base `Qwen3-14B`.
 
 Technical details can be found in [our github repository](https://github.com/analokmaus/kaggle-aimo2-fast-math-r1/tree/master).
```
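For reference, applying this change would leave the card's YAML front matter reading as follows (this is simply the merged result of the diff above, not additional metadata):

```yaml
---
base_model:
- Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mathematics
- reasoning
- qwen
---
```

Moving `library_name: transformers` above `license` is purely cosmetic; the Hub parses the front matter as YAML, so key order does not affect how the card is indexed.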