Commit f6cc63f (verified) by aashish1904 · parent: 5f7e3ab

Upload README.md with huggingface_hub

Files changed (1): README.md (+164, −0)
---
pipeline_tag: text-generation
base_model: AceReason-Nemotron-7B
library_name: transformers
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/AceReason-Nemotron-7B-GGUF
This is a quantized version of [nvidia/AceReason-Nemotron-7B](https://huggingface.co/nvidia/AceReason-Nemotron-7B) created using llama.cpp.

# Original Model Card

---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
---

# AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning

<p align="center">

[![Technical Report](https://img.shields.io/badge/2505.16400-Technical_Report-blue)](https://arxiv.org/abs/2505.16400)
[![Dataset](https://img.shields.io/badge/🤗-Math_RL_Dataset-blue)](https://huggingface.co/datasets/nvidia/AceReason-Math)
[![Models](https://img.shields.io/badge/🤗-Models-blue)](https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485)
[![Eval Toolkit](https://img.shields.io/badge/🤗-Eval_Code-blue)](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md)
</p>

<img src="fig/main_fig.png" alt="main_fig" style="width: 600px; max-width: 100%;" />

## 🔥News
- **6/11/2025**: We share our evaluation toolkit at [AceReason Evaluation](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md), including:
  - scripts to run inference and scoring
  - LiveCodeBench (avg@8): model prediction files and scores for each month (2023/5-2025/5)
  - AIME24/25 (avg@64): model prediction files and scores
- **6/2/2025**: We are excited to share our Math RL training dataset at [AceReason-Math](https://huggingface.co/datasets/nvidia/AceReason-Math).

We're thrilled to introduce AceReason-Nemotron-7B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-7B. It delivers impressive results, achieving 69.0% on AIME 2024 (+14.5%), 53.6% on AIME 2025 (+17.4%), 51.8% on LiveCodeBench v5 (+8%), and 44.1% on LiveCodeBench v6 (+7%). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL significantly enhances the performance of strong distilled models not only on math benchmarks but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.

We share our training recipe and training logs in our technical report.

## Results

We evaluate our model against competitive reasoning models of comparable size within the Qwen2.5 and Llama3.1 model families on AIME 2024, AIME 2025, LiveCodeBench v5 (2024/08/01 - 2025/02/01), and LiveCodeBench v6 (2025/02/01 - 2025/05/01). More evaluation results can be found in our technical report.

| **Model** | **AIME 2024<br>(avg@64)** | **AIME 2025<br>(avg@64)** | **LCB v5<br>(avg@8)** | **LCB v6<br>(avg@8)** |
| :---: | :---: | :---: | :---: | :---: |
| <small>QwQ-32B</small> | 79.5 | 65.8 | 63.4 | - |
| <small>DeepSeek-R1-671B</small> | 79.8 | 70.0 | 65.9 | - |
| <small>Llama-Nemotron-Ultra-253B</small> | 80.8 | 72.5 | 66.3 | - |
| <small>o3-mini (medium)</small> | 79.6 | 76.7 | 67.4 | - |
| <small>Light-R1-7B</small> | 59.1 | 44.3 | 40.6 | 36.4 |
| <small>Light-R1-14B</small> | 74.0 | 60.2 | 57.9 | 51.5 |
| <small>DeepCoder-14B (32K Inference)</small> | 71.0 | 56.1 | 57.9 | 50.4 |
| <small>OpenMath-Nemotron-7B</small> | 74.8 | 61.2 | - | - |
| <small>OpenCodeReasoning-Nemotron-7B</small> | - | - | 51.3 | 46.1 |
| <small>Llama-Nemotron-Nano-8B-v1</small> | 61.3 | 47.1 | 46.6 | 46.2 |
| <small>DeepSeek-R1-Distilled-Qwen-7B</small> | 55.5 | 39.0 | 37.6 | 34.1 |
| <small>DeepSeek-R1-Distilled-Qwen-14B</small> | 69.7 | 50.2 | 53.1 | 47.9 |
| <small>DeepSeek-R1-Distilled-Qwen-32B</small> | 72.6 | 54.9 | 57.2 | - |
| [AceReason-Nemotron-7B 🤗](https://huggingface.co/nvidia/AceReason-Nemotron-7B) | 69.0 | 53.6 | 51.8 | 44.1 |
| [AceReason-Nemotron-14B 🤗](https://huggingface.co/nvidia/AceReason-Nemotron-14B) | 78.6 | 67.4 | 61.1 | 54.9 |
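
The avg@k columns above report the mean accuracy over k sampled generations per problem. A minimal sketch of the metric (the function name `avg_at_k` is illustrative, not taken from the evaluation toolkit):

```python
def avg_at_k(samples_correct):
    """avg@k: mean pass rate over k sampled generations per problem.

    `samples_correct` maps each problem id to a list of k booleans,
    one per sampled generation (True = correct answer).
    Returns the benchmark score as a percentage.
    """
    per_problem = [sum(flags) / len(flags) for flags in samples_correct.values()]
    return 100.0 * sum(per_problem) / len(per_problem)

# Two problems, 4 samples each: 3/4 and 1/4 correct -> avg@4 = 50.0
scores = {"p1": [True, True, True, False], "p2": [True, False, False, False]}
print(round(avg_at_k(scores), 1))  # 50.0
```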

## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'nvidia/AceReason-Nemotron-7B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

prompt = "Jen enters a lottery by picking $4$ distinct numbers from $S=\\{1,2,3,\\cdots,9,10\\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$."
messages = [{"role": "user", "content": prompt}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,  # enable sampling so temperature/top_p take effect
    temperature=0.6,
    top_p=0.95
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
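
The `temperature=0.6` and `top_p=0.95` settings above control sampling: logits are divided by the temperature, then sampling is restricted to the smallest set of tokens whose cumulative probability reaches `top_p`. A self-contained sketch of that filtering step, for illustration only (`model.generate` implements this internally):

```python
import math

def top_p_candidates(logits, temperature=0.6, top_p=0.95):
    """Return the indices a nucleus sampler may draw from:
    temperature-scale the logits, take a softmax, then keep the
    highest-probability tokens until their mass reaches top_p."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# One dominant logit: the nucleus collapses to a single token.
print(top_p_candidates([10.0, 1.0, 0.0], temperature=1.0, top_p=0.9))  # {0}
```

Lower temperature sharpens the distribution (fewer candidate tokens survive); a high `top_p` like 0.95 keeps the tail diverse enough for long reasoning traces.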

## Usage Recommendations

1. Don't include a system prompt; instead, place all instructions directly in the user prompt.
2. We recommend using the following instruction for math questions: Please reason step by step, and put your final answer within \\boxed{}.
3. We recommend using the following instruction for code questions:
```python
question = ""  # code question
starter_code = ""  # starter code function header

code_instruction_nostartercode = """Write Python code to solve the problem. Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
code_instruction_hasstartercode = """Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
if starter_code != "":
    question += "\n\n" + "Solve the problem starting with the provided function header.\n\nFunction header:\n" + "```\n" + starter_code + "\n```"
    question += "\n\n" + code_instruction_hasstartercode
else:
    question += "\n\n" + code_instruction_nostartercode

final_prompt = "<|User|>" + question + "<|Assistant|><think>\n"
```
4. Our inference engine for evaluation is **vLLM==0.7.3** using top-p=0.95, temperature=0.6, max_tokens=32768.
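
Recommendations 1 and 3 above can be folded into a single helper that builds the complete prompt string for a code question. A sketch (the function name `build_code_prompt` is illustrative; the templates come from the snippet above):

```python
def build_code_prompt(question, starter_code=""):
    """Compose the recommended user prompt for a code question:
    no system prompt; the formatting instruction is appended to the
    question, and the starter code (if any) is embedded first."""
    no_starter = ("Write Python code to solve the problem. Please place the solution code "
                  "in the following format:\n```python\n# Your solution code here\n```")
    has_starter = ("Please place the solution code in the following format:"
                   "\n```python\n# Your solution code here\n```")
    if starter_code != "":
        question += ("\n\nSolve the problem starting with the provided function header."
                     "\n\nFunction header:\n```\n" + starter_code + "\n```")
        question += "\n\n" + has_starter
    else:
        question += "\n\n" + no_starter
    return "<|User|>" + question + "<|Assistant|><think>\n"

prompt = build_code_prompt("Reverse a string.", starter_code="def reverse(s):")
print(prompt.startswith("<|User|>Reverse a string."))  # True
```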

## Evaluation Toolkit

Please check the evaluation code, scripts, and cached prediction files at https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md

## Correspondence to
Yang Chen (yachen@nvidia.com), Zhuolin Yang (zhuoliny@nvidia.com), Zihan Liu (zihanl@nvidia.com), Chankyu Lee (chankyul@nvidia.com), Wei Ping (wping@nvidia.com)

## License
Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).

## Citation
```
@article{chen2025acereason,
  title={AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning},
  author={Chen, Yang and Yang, Zhuolin and Liu, Zihan and Lee, Chankyu and Xu, Peng and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2505.16400},
  year={2025}
}
```