Commit be7b4cc (verified), committed by suhara

Duplicate from nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16

Co-authored-by: Yoshi Suhara <suhara@users.noreply.huggingface.co>

.gitattributes ADDED
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
accuracy_chart.png filter=lfs diff=lfs merge=lfs -text
```
README.md ADDED
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
- es
- fr
- de
- ja
- it
tags:
- nvidia
- pytorch
datasets:
- nvidia/Nemotron-Pretraining-Code-v1
- nvidia/Nemotron-CC-v2
- nvidia/Nemotron-Pretraining-SFT-v1
- nvidia/Nemotron-CC-Math-v1
- nvidia/Nemotron-Pretraining-Code-v2
- nvidia/Nemotron-Pretraining-Specialized-v1
- nvidia/Nemotron-CC-v2.1
- nvidia/Nemotron-CC-Code-v1
- nvidia/Nemotron-Pretraining-Dataset-sample
- nvidia/Nemotron-Competitive-Programming-v1
- nvidia/Nemotron-Math-v2
- nvidia/Nemotron-Agentic-v1
- nvidia/Nemotron-Math-Proofs-v1
- nvidia/Nemotron-Instruction-Following-Chat-v1
- nvidia/Nemotron-Science-v1
- nvidia/Nemotron-3-Nano-RL-Training-Blend
track_downloads: true
---

# NVIDIA-Nemotron-3-Nano-30B-A3B-BF16

<div align="center" style="line-height: 1;">
  <a href="https://build.nvidia.com/nvidia/nemotron-3-nano-30b-a3b" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/🤖Chat-Nemotron_3_Nano-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://arxiv.org/abs/2512.20848" target="_blank" style="margin: 2px;">
    <img alt="Paper" src="https://img.shields.io/badge/📝Paper-Read Now!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets" target="_blank" style="margin: 2px;">
    <img alt="Pre-Training Datasets" src="https://img.shields.io/badge/🗄️_Pre--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/collections/nvidia/nemotron-post-training-v3" target="_blank" style="margin: 2px;">
    <img alt="Post-Training Datasets" src="https://img.shields.io/badge/🗄️_Post--Training_Datasets-Available_Here-76B900?logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://developer.nvidia.com/nemotron" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://img.shields.io/badge/🏠Nemotron Developer Page-Learn More Here!-536af5?color=76B900&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://discord.gg/9xpKQtVvrk" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-NVIDIA%20AI%20Developer-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-NVIDIA Open Model License-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

![Accuracy comparison chart](./accuracy_chart.png)
## Model Overview

**Model Developer:** NVIDIA Corporation

**Model Dates:**

September 2025 - December 2025

**Data Freshness:**

* The post-training data has a cutoff date of November 28, 2025.
* The pre-training data has a cutoff date of June 25, 2025.

## Description

Nemotron-3-Nano-30B-A3B-BF16 is a large language model (LLM) trained from scratch by NVIDIA and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning behavior can be configured through a flag in the chat template: if the user prefers a final answer without intermediate reasoning traces, the model can be configured to skip them, albeit with a slight decrease in accuracy on harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions.

The model employs a hybrid Mixture-of-Experts (MoE) architecture consisting of 23 Mamba-2 layers, 23 MoE layers, and 6 Attention layers. Each MoE layer includes 128 routed experts plus 1 shared expert, with 6 experts activated per token. The model has 3.5B active parameters and 30B parameters in total.

The supported languages include: English, German, Spanish, French, Italian, and Japanese. Improved using Qwen.

This model is ready for commercial use.

### What is Nemotron?

NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.

To get started, you can use [our quickstart guide](#quick-start-guide) below.

## Feature Voting

We want to hear from you! Share your ideas, vote on what matters, and help [shape the future of Nemotron](https://nemotron.ideas.nvidia.com/).

## License/Terms of Use

Governing Terms: Use of this model is governed by the [NVIDIA Nemotron Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-nemotron-open-model-license/).

### Reasoning Benchmark Evaluations

We evaluated our model on the following benchmarks:

| Task | NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
| ----- | :---- | :---- | :---- |
| **General Knowledge** | | | |
| MMLU-Pro | 78.3 | **80.9** | 75.0 |
| **Reasoning** | | | |
| AIME25 (no tools) | 89.1 | 85.0 | **91.7** |
| AIME25 (with tools) | **99.2** | - | 98.7 |
| GPQA (no tools) | 73.0 | **73.4** | 71.5 |
| GPQA (with tools) | **75.0** | - | 74.2 |
| LiveCodeBench (v6 2024-08–2025-05) | **68.3** | 66.0 | 61.0 |
| SciCode (subtask) | 33.3 | 33.0 | **34.0** |
| HLE (no tools) | 10.6 | 9.8 | **10.9** |
| HLE (with tools) | 15.5 | - | **17.3** |
| MiniF2F pass@1 | **50.0** | 5.7 | 12.1 |
| MiniF2F pass@32 | **79.9** | 16.8 | 43.0 |
| **Agentic** | | | |
| Terminal Bench (hard subset) | 8.5 | 5.0 | 6.0 |
| SWE-Bench (OpenHands) | **38.8** | 22.0 | 34.0 |
| TauBench V2 (Airline) | 48.0 | **58.0** | 38.0 |
| TauBench V2 (Retail) | 56.9 | **58.8** | 38.0 |
| TauBench V2 (Telecom) | 42.2 | 26.3 | **49.7** |
| TauBench V2 (Average) | **49.0** | 47.7 | 48.7 |
| BFCL v4 | **53.8** | 46.4\* | - |
| **Chat & Instruction Following** | | | |
| IFBench (prompt) | **71.5** | 51.0 | 65.0 |
| Scale AI Multi Challenge | 38.5 | **44.8** | 33.8 |
| Arena-Hard-V2 (Hard Prompt) | **72.1** | 49.6\* | 71.2\* |
| Arena-Hard-V2 (Creative Writing) | 63.2 | **66.0\*** | 25.9\* |
| Arena-Hard-V2 (Average) | **67.7** | 57.8 | 48.6 |
| **Long Context** | | | |
| AA-LCR | 35.9 | **59.0** | 34.0 |
| RULER-100@256k | **92.9** | 89.4 | - |
| RULER-100@512k | **91.3** | 84.0 | - |
| RULER-100@1M | **86.3** | 77.5 | - |
| **Multilingual** | | | |
| MMLU-ProX (avg over langs) | 59.5 | **77.6\*** | 69.1\* |
| WMT24++ (en->xx) | **86.2** | 85.6 | 83.2 |

All evaluation results were collected via the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator) and [NeMo Skills](https://github.com/NVIDIA-NeMo/Skills). The open-source NeMo Skills evaluation container, packaged via NVIDIA's NeMo Evaluator SDK, can be found [here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/eval-factory/containers/nemo_skills?version=25.11). In addition to NeMo Skills, the evaluations also used dedicated packaged containers for Tau-2 Bench, Arena-Hard v2, and AA-LCR. A reproducibility tutorial along with all configs can be found in the [NeMo Evaluator SDK examples](https://github.com/NVIDIA-NeMo/Evaluator/tree/main/packages/nemo-evaluator-launcher/examples/nemotron/nano-v3-reproducibility.md). The configs are also available in this HF repo [here](./nemo-evaluator-launcher-configs/local_nvidia_nemotron_3_nano_30b_a3b.yaml). \* denotes accuracy numbers measured by us.

### Deployment Geography: Global

### Use Case

NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 is a general-purpose reasoning and chat model intended to be used in English and coding languages. Other supported languages are Spanish, French, German, Japanese, and Italian. This model is intended for developers designing AI agent systems, chatbots, RAG systems, and other AI-powered applications. It is also suitable for typical instruction-following tasks.

### Release Date

December 15, 2025 via [Hugging Face](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16)

## Reference(s)

* [NVIDIA Nemotron 3 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v3)
* [NVIDIA Nemotron 2 model family on Hugging Face](https://huggingface.co/collections/nvidia/nvidia-nemotron-v2)
* [Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning](https://arxiv.org/abs/2512.20848)
* [NVIDIA Nemotron 3 White Paper](https://arxiv.org/abs/2512.20856)

## Model Architecture

- **Architecture Type:** Mamba2-Transformer Hybrid Mixture of Experts (MoE)
- **Network Architecture:** Nemotron Hybrid MoE
- **Number of model parameters:** 30B

## Model Design

The model was trained on 25T tokens with a batch size of 3072, using the Warmup-Stable-Decay (WSD) learning rate schedule with 8B tokens of learning rate warmup, a peak learning rate of 1e-3, and a minimum learning rate of 1e-5. There are 52 layers in total: 23 MoE layers, 23 Mamba-2 layers, and 6 layers of grouped query attention (GQA) with 2 groups. Each MoE layer includes 128 routed experts plus 1 shared expert, with 6 experts activated per token.
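The WSD schedule can be sketched as follows. Only the warmup length (8B tokens), peak (1e-3), and minimum (1e-5) learning rates are stated here; the decay fraction and the linear ramp shapes are illustrative assumptions, not the published recipe.

```python
# Illustrative Warmup-Stable-Decay (WSD) schedule using the stated numbers:
# 8B-token warmup, peak LR 1e-3, minimum LR 1e-5, 25T total training tokens.
# The 10% decay fraction and linear ramp shapes are assumptions.
def wsd_lr(tokens_seen: float,
           total_tokens: float = 25e12,
           warmup_tokens: float = 8e9,
           decay_fraction: float = 0.10,
           peak_lr: float = 1e-3,
           min_lr: float = 1e-5) -> float:
    decay_start = total_tokens * (1.0 - decay_fraction)
    if tokens_seen < warmup_tokens:
        # Linear warmup from 0 to the peak learning rate
        return peak_lr * tokens_seen / warmup_tokens
    if tokens_seen < decay_start:
        # Stable phase: hold the peak learning rate
        return peak_lr
    # Decay phase: ramp down linearly to the minimum learning rate
    frac = (tokens_seen - decay_start) / (total_tokens - decay_start)
    return peak_lr + (min_lr - peak_lr) * frac
```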
## Training Methodology

Stage 1: Pre-Training

* The [NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16) model was pre-trained using crawled and synthetic code, math, science, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the pre-training corpus are released in the [Nemotron-Pre-Training-Datasets](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) collection.
* Software used for pre-training: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)

Stage 2: Supervised Fine-Tuning

* The model was further fine-tuned on synthetic code, math, science, tool calling, instruction following, structured outputs, and general knowledge data. All datasets are disclosed in the [Training, Testing, and Evaluation Datasets](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16#training-testing-and-evaluation-datasets) section of this document. Major portions of the fine-tuning corpus are released in the [Nemotron-Post-Training-v3](https://huggingface.co/collections/nvidia/nemotron-post-training-v3) collection. [Data Designer](https://github.com/NVIDIA-NeMo/DataDesigner) is one of the libraries used to prepare these corpora.

Stage 3: Reinforcement Learning

* The model underwent multi-environment reinforcement learning using synchronous GRPO (Group Relative Policy Optimization) across math, code, science, instruction following, multi-step tool use, multi-turn conversation, and structured output environments. Conversational quality was further refined through RLHF using a [generative reward model](https://huggingface.co/nvidia/Qwen3-Nemotron-235B-A22B-GenRM). All datasets are disclosed in the *Training, Testing, and Evaluation Datasets* section of this document. The RL environments and datasets are released as part of [NeMo Gym](https://github.com/NVIDIA-NeMo/Gym).
* Software used for reinforcement learning: [NeMo RL](https://github.com/NVIDIA-NeMo/RL), [NeMo Gym](https://github.com/NVIDIA-NeMo/Gym)

The NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 model is the result of the above work.

The end-to-end training recipe is available in the [NVIDIA Nemotron Developer Repository](https://github.com/NVIDIA-NeMo/Nemotron). Evaluation results can be replicated using the [NeMo Evaluator SDK](https://github.com/NVIDIA-NeMo/Evaluator). [Data Designer](https://github.com/NVIDIA-NeMo/DataDesigner) is one of the libraries used to prepare the pre- and post-training datasets. More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).

## Input

- **Input Type(s):** Text
- **Input Format(s):** String
- **Input Parameters:** One-Dimensional (1D): Sequences
- **Maximum input size:** 128K tokens
- **Other Properties Related to Input:** Supported languages include: English, Spanish, French, German, Japanese, Italian

## Output

- **Output Type(s):** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D): Sequences
- **Maximum output size:** 128K tokens

Our AI models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

## Software Integration

- Runtime Engine(s): NeMo 25.11.01
- Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
- Operating System(s): Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

## Quick Start Guide

### Use it with Transformers

The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.57.3). We recommend using [NeMo Framework 25.11.01](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags?version=25.11.01) to ensure all required libraries are available.

Please note that the model supports a context size of up to 1M tokens, although the default context size in the Hugging Face configuration is 256k due to the higher VRAM requirements of longer contexts.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16")
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
```

```python
messages = [
    {"role": "user", "content": "Write a haiku about GPUs"},
]

tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    tokenized_chat,
    max_new_tokens=1024,
    temperature=1.0,
    top_p=1.0,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```

`temperature=1.0` and `top_p=1.0` are recommended for reasoning tasks, while `temperature=0.6` and `top_p=0.95` are recommended for tool calling.
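These recommendations can be kept as small presets and passed through to `generate()` or a serving client; the preset names and helper below are our own, not part of the model's API.

```python
# Recommended sampling settings from above, kept as reusable presets.
# Preset names and the helper are our own convention (illustrative only).
SAMPLING_PRESETS = {
    "reasoning":    {"do_sample": True, "temperature": 1.0, "top_p": 1.0},
    "tool_calling": {"do_sample": True, "temperature": 0.6, "top_p": 0.95},
}

def sampling_kwargs(task: str) -> dict:
    """Return a fresh copy of the preset for the given task."""
    return dict(SAMPLING_PRESETS[task])
```

For example: `model.generate(tokenized_chat, max_new_tokens=1024, **sampling_kwargs("reasoning"))`.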
If you'd like to turn reasoning off, add `enable_thinking=False` to `apply_chat_template()`. By default, `enable_thinking` is set to `True`.

```python
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    enable_thinking=False,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Use greedy search when reasoning is off
outputs = model.generate(
    tokenized_chat,
    max_new_tokens=32,
    do_sample=False,
    num_beams=1,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```

### Use it with vLLM

For more detailed information on how to use the model with vLLM, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/vllm_cookbook.ipynb).
If you are on Jetson Thor, please use this vLLM container: `ghcr.io/nvidia-ai-iot/vllm:latest-jetson-thor`.

```shell
pip install -U "vllm>=0.12.0"
```

Download the custom parser from the Hugging Face repository.

```shell
wget https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/resolve/main/nano_v3_reasoning_parser.py
```

Launch a vLLM server using the custom parser. In this example, we use a context length of 256k. You can increase the context size up to 1M tokens to support longer contexts.

```shell
vllm serve nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
    --max-num-seqs 8 \
    --tensor-parallel-size 1 \
    --max-model-len 262144 \
    --port 8000 \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder \
    --reasoning-parser-plugin nano_v3_reasoning_parser.py \
    --reasoning-parser nano_v3
```

Here is example client code for vLLM. By default, the endpoint has reasoning enabled. We recommend setting a high value (e.g., 10,000) for `max_tokens`.

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model",
    "messages": [{"role": "user", "content": "Write a haiku about GPUs"}],
    "max_tokens": 10000
  }'
```

If you'd like to turn reasoning off with vLLM, you can do the following.

vLLM OpenAI curl request:

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model",
    "messages": [{"role": "user", "content": "Write a haiku about GPUs"}],
    "chat_template_kwargs": {"enable_thinking": false}
  }'
```

vLLM OpenAI client:

```py
response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
```
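The reasoning-off request body can also be built programmatically; this sketch mirrors the curl example above (the helper name is our own). With the official `openai` client, the `chat_template_kwargs` entry is passed via `extra_body=` instead of in the top-level body.

```python
import json

# Build the same reasoning-off request body as the curl example above.
# (Helper name is our own; "model" matches the served model name.)
def reasoning_off_request(messages, model="model", max_tokens=10000):
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "chat_template_kwargs": {"enable_thinking": False},
    }

body = reasoning_off_request([{"role": "user", "content": "Write a haiku about GPUs"}])
payload = json.dumps(body)  # send as the POST body to /v1/chat/completions
```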
### Use it with TRT-LLM

For more detailed information on how to use the model with TRT-LLM, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/trtllm_cookbook.ipynb).

```shell
# An example nano_v3 YAML is at https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/auto_deploy/nano_v3.yaml
trtllm-serve <model_path> \
    --backend _autodeploy \
    --trust_remote_code \
    --reasoning_parser nano-v3 \
    --tool_parser qwen3_coder \
    --extra_llm_api_options nano_v3.yaml
```

### Use it with SGLang

For more detailed information on how to use the model with SGLang, please see [this cookbook](https://github.com/NVIDIA-NeMo/Nemotron/blob/main/usage-cookbook/Nemotron-3-Nano/sglang_cookbook.ipynb).

```shell
python3 -m sglang.launch_server --model-path nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 \
    --trust-remote-code \
    --tp 1 \
    --attention-backend flashinfer \
    --tool-call-parser qwen3_coder \
    --reasoning-parser nano_v3
```

#### Using Budget Control

The thinking budget allows developers to keep accuracy high while meeting response-time targets, which is especially crucial for customer support, autonomous agent steps, and edge devices where every millisecond counts.

With budget control, you can set a limit for internal reasoning:

* `reasoning_budget`: a threshold after which the client attempts to end the reasoning trace at the next newline encountered in the trace. If no newline is encountered within 500 tokens, the trace is cut off abruptly at `reasoning_budget + 500` tokens.
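That truncation rule can be sketched over already-generated token IDs as follows; this is an illustrative helper of our own, not the enforcement mechanism the serving stack actually uses.

```python
# Illustrative sketch of the budget rule above (hypothetical helper): stop at
# the first newline token after `budget` tokens, or hard-stop at
# budget + grace (500 by default) if no newline appears in that window.
def truncate_reasoning(token_ids, newline_token_id, budget, grace=500):
    hard_stop = min(len(token_ids), budget + grace)
    for i in range(budget, hard_stop):
        if token_ids[i] == newline_token_id:
            return token_ids[: i + 1]  # keep the newline that ends the trace
    return token_ids[:hard_stop]
```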
> NOTE: This client will work with any OpenAI-API-compatible endpoint.

Client for supporting budget control:

```py
from typing import Any, Dict, List

import openai
from transformers import AutoTokenizer


class ThinkingBudgetClient:
    def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str):
        self.base_url = base_url
        self.api_key = api_key
        self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path)
        self.client = openai.OpenAI(base_url=self.base_url, api_key=self.api_key)

    def chat_completion(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        reasoning_budget: int = 512,
        max_tokens: int = 1024,
        **kwargs,
    ) -> Dict[str, Any]:
        assert (
            max_tokens > reasoning_budget
        ), f"thinking budget must be smaller than maximum new tokens. Given {max_tokens=} and {reasoning_budget=}"

        # 1. First call chat completions to get the reasoning content
        response = self.client.chat.completions.create(
            model=model, messages=messages, max_tokens=reasoning_budget, **kwargs
        )
        content = response.choices[0].message.content

        reasoning_content = content
        if "</think>" not in reasoning_content:
            # Reasoning content is too long; close it with a period (.)
            reasoning_content = f"{reasoning_content}.\n</think>\n\n"
        reasoning_tokens_len = len(
            self.tokenizer.encode(reasoning_content, add_special_tokens=False)
        )
        remaining_tokens = max_tokens - reasoning_tokens_len
        assert (
            remaining_tokens > 0
        ), f"remaining tokens must be positive. Given {remaining_tokens=}. Increase the max_tokens or lower the reasoning_budget."

        # 2. Append the reasoning content to the messages and call completions
        messages.append({"role": "assistant", "content": reasoning_content})
        prompt = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            continue_final_message=True,
        )
        response = self.client.completions.create(
            model=model, prompt=prompt, max_tokens=remaining_tokens, **kwargs
        )

        response_data = {
            "reasoning_content": reasoning_content.strip().removesuffix("</think>").strip(),
            "content": response.choices[0].text,
            "finish_reason": response.choices[0].finish_reason,
        }
        return response_data
```

Calling the server with a budget (restricted to 32 tokens here as an example):

```py
tokenizer_name_or_path = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16"
client = ThinkingBudgetClient(
    base_url="http://localhost:8000/v1",  # Nemotron 3 Nano deployed in thinking mode
    api_key="EMPTY",
    tokenizer_name_or_path=tokenizer_name_or_path,
)

result = client.chat_completion(
    model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. /think"},
        {"role": "user", "content": "What is 2+2?"},
    ],
    reasoning_budget=32,
    max_tokens=512,
    temperature=1.0,
    top_p=1.0,
)
print(result)
```

You should see output similar to the following:

```
{'reasoning_content': "Okay, the user asked, What is 2+2? Let me think. Well, 2 plus 2 equals 4. That's a basic.", 'content': '2 + 2 equals **4**.\n', 'finish_reason': 'stop'}
```
501
+
502
+ ## Model Version(s)
503
+
504
+ - v1.0
505
+
506
+ # Training, Testing, and Evaluation Datasets
507
+
508
+ **Data Modality:** Text
509
+ **The total size:** 10,648,823,153,919 Tokens
510
+ **Total number of datasets:** 141
511
+ **Dataset partition:** *Training \[100%\], testing \[0%\], validation \[0%\]*
512
+ **Time period for training data collection:** 2013 to May 1, 2025
513
+ **Time period for testing data collection:** 2013 to May 1, 2025
514
+ **Time period for validation data collection:** 2013 to May 1, 2025
515
+ **Data Collection Method by dataset:** Hybrid: Automated, Human, Synthetic
516
+ **Labeling Method by dataset: Hybrid:** Automated, Human, Synthetic
517
+
518
+ NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 is pre-trained on a large corpus of high-quality curated and synthetically-generated data. It is trained in the English language, as well as 19 other languages and 43 programming languages. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering, and alignment style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.
519
+
520
+ The post-training corpus for NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 of high-quality curated and synthetically-generated data. Primary languages used for post-training include English, German, Spanish, French, Italian, and Japanese.
521
+
522
+ These datasets, such as FinePDFs, EssentialWeb, HotpotQA, SQuAD, and HelpSteer3, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in 64-99% of samples, depending on the source. In the subset where such terms are present, document-based datasets (FinePDFs and EssentialWeb) contain representational skews, such as references to "male" outnumbering those to "female", and mentions of "White" as the most frequent among ethnic identifiers (comprising 43-44% of ethnicity mentions). To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy.
523
+
524
+ During post-training, we generate synthetic data by distilling trajectories, solutions, and translations from strong teacher models and agent systems, often grounded in real tasks or documents and aggressively filtered for quality. For math, code, and science, we start from curated problem sets and use open source permissive models such as GPT-OSS-120B to produce step-by-step reasoning traces, candidate solutions, best-of-n selection traces, and verified CUDA kernels. For long-context and science, we build synthetic QA and reasoning data by retrieving passages from long documents, generating MCQ/OpenQA questions and answers, and paraphrasing them into multiple prompt/response formats to ensure diversity. Across all pipelines we stack automated verification—compilers, numerical checks, language identification—to ensure our data is high quality.
+
+ For all domains, we apply a unified data filtering pipeline to ensure that only high-quality, license-compliant, and verifiable samples are used for post-training. We first discard malformed examples using structural checks (e.g., missing tool definitions when tool calls are present). We then aggressively filter reasoning traces exhibiting pathological repetition, such as repeated n-grams within a sliding window or across the entire trajectory, which we found to be a strong indicator of malformed or low-quality reasoning. Finally, based on internal audits of synthetically generated datasets, we observed that some teacher models occasionally produce reasoning traces and final responses that implicitly align with specific political entities or promote nationalistic narratives. To mitigate this, we apply targeted keyword- and regex-based filters and remove all trajectories matching such behavior.
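The repeated n-gram filter described above can be sketched as follows; the n-gram size, window length, and repetition threshold here are illustrative assumptions rather than the values used in the production pipeline.

```python
def has_repeated_ngrams(text, n=8, window=200, max_repeats=3):
    """Flag pathological repetition: the same n-gram occurring more than
    `max_repeats` times within a sliding window of `window` tokens."""
    tokens = text.split()
    for start in range(0, max(1, len(tokens) - window + 1)):
        seen = {}
        chunk = tokens[start:start + window]
        for i in range(len(chunk) - n + 1):
            gram = tuple(chunk[i:i + n])
            seen[gram] = seen.get(gram, 0) + 1
            if seen[gram] > max_repeats:
                return True
    return False
```

Running the same check once over the entire trajectory (window equal to its length) covers the whole-trajectory variant mentioned above.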
+
+ Alongside the model, we release our final [pre-training](https://huggingface.co/collections/nvidia/nemotron-pre-training-datasets) and [post-training](https://huggingface.co/collections/nvidia/nemotron-post-training-v3) data, as outlined in this section. For ease of analysis, an ungated sample set is available. For all remaining code, math, and multilingual data, gating and approval are required; the datasets are permissively licensed for model-training purposes.
+
+ More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron 3 Nano](https://arxiv.org/abs/2512.20848).
+
+ | Dataset | Collection Period |
+ | :---- | :---- |
+ | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 |
+ | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 |
+ | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 |
+ | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 |
+ | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 |
+ | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 |
+ | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 |
+ | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 |
+ | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 |
+ | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 |
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 |
+ | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 |
+ | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download |
+ | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download |
+ | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download |
+ | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download |
+ | [PMC Open Access Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download |
+ | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download |
+ | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download |
+ | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download |
+ | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download |
+ | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download |
+ | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download |
+ | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download |
+ | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download |
+ | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download |
+ | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download |
+ | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download |
+ | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | Legacy Download |
+ | [FLAN](https://github.com/google-research/FLAN) | Legacy Download |
+ | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download |
+ | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download |
+ | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download |
+ | [FinQA](https://finqasite.github.io/) | Legacy Download |
+ | [Riddles](https://github.com/crawsome/riddles) | Legacy Download |
+ | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download |
+ | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download |
+ | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download |
+ | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download |
+ | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download |
+ | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download |
+ | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download |
+ | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download |
+ | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download |
+ | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download |
+ | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download |
+ | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download |
+ | [MultiverseMathHard](https://huggingface.co/datasets/Nexusflow/MultiverseMathHard) | 10/2/2025 |
+ | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | 10/2/2025 |
+ | [WorkBench](https://github.com/olly-styles/WorkBench/tree/main/data/raw) | 10/2/2025 |
+ | [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M) | 10/2/2025 |
+ | [OpenCodeReasoning-2](https://huggingface.co/datasets/nvidia/OpenCodeReasoning-2) | 10/2/2025 |
+ | [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3) | 10/2/2025 |
+ | [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2) | 10/2/2025 |
+ | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified) | 10/2/2025 |
+ | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | 10/2/2025 |
+ | [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) | 10/2/2025 |
+ | [simple-arithmetic-problems](https://huggingface.co/datasets/garrethlee/simple-arithmetic-problems) | 10/2/2025 |
+ | [arithmetic](https://huggingface.co/datasets/EleutherAI/arithmetic) | 10/2/2025 |
+ | [Skywork-OR1-RL-Data](https://huggingface.co/datasets/Skywork/Skywork-OR1-RL-Data) | 10/2/2025 |
+ | [News Commentary](https://opus.nlpl.eu/News-Commentary.php) | 10/2/2025 |
+ | [FastChat](https://github.com/lm-sys/FastChat/blob/main/playground/data/dummy.json) | 10/2/2025 |
+ | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | 10/2/2025 |
+ | [finepdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs) | 10/2/2025 |
+ | [HotpotQA](https://huggingface.co/datasets/hotpot_qa) | 10/2/2025 |
+ | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | 10/2/2025 |
+ | [NLTK Words Lists](https://www.nltk.org/nltk_data/) | 10/2/2025 |
+
+ ## Private Non-publicly Accessible Datasets of Third Parties
+
+ | Dataset |
+ | :---- |
+ | Global Regulation |
+ | TAUS Translation Memory |
+ | Scale HLE |
+ | HackerRank Coding |
+
+ ## Private Non-publicly Accessible Datasets by NVIDIA
+
+ | Dataset |
+ | :---- |
+ | Simple Minesweeper |
+ | Simple Sudoku |
+ | Multitool Typewriter Hard |
+ | Machine Translation of News Commentary and TAUS Translation Memory |
+ | Machine Translation of STEM data using [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) |
+
+ ## Crawled and Scraped from Online Sources by NVIDIA
+
+ The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively removing filters that did not work well for some languages. Deduplication was done in the same way as for Nemotron-CC.
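Heuristic filtering of the kind described above typically combines a few cheap document-level statistics. The sketch below is illustrative only; the thresholds and the exact filter set used for Nemotron-CC (and its per-language variants) differ.

```python
def passes_heuristics(text,
                      min_words=50,
                      max_mean_word_len=12.0,
                      max_symbol_ratio=0.1,
                      max_dup_line_ratio=0.3):
    """Illustrative web-text heuristics: minimum length, plausible word lengths,
    limited symbol noise, and limited duplicate lines within the document."""
    words = text.split()
    if len(words) < min_words:
        return False
    if sum(len(w) for w in words) / len(words) > max_mean_word_len:
        return False
    # Count characters outside letters, digits, whitespace, and common punctuation.
    symbols = sum(1 for c in text
                  if not (c.isalnum() or c.isspace() or c in ".,;:!?'\"()-"))
    if symbols / max(1, len(text)) > max_symbol_ratio:
        return False
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    if lines and 1 - len(set(lines)) / len(lines) > max_dup_line_ratio:
        return False
    return True
```

Per-language tuning amounts to adjusting or dropping individual checks, which matches the selective removal of filters described above.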
+
+ The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any file whose license is not in our permissive-license set (for additional details, refer to the [technical report](https://arxiv.org/abs/2512.20848)).
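License-based filtering of the crawl can be sketched as a simple allowlist check. The license identifiers below are a hypothetical permissive set; the actual set used is described in the technical report.

```python
# Hypothetical permissive-license allowlist (SPDX-style identifiers).
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "unlicense"}

def keep_repo(repo):
    """Keep a crawled repository only if its detected license is in the permissive set.

    `repo` is assumed to be a dict with an optional "license" identifier."""
    license_id = (repo.get("license") or "").lower()
    return license_id in PERMISSIVE_LICENSES
```

Repositories with a missing or unrecognized license are dropped, which errs on the side of exclusion.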
+
+ | Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation |
+ | :---- | :---- | :---- | :---- | :---- |
+ | English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
+ | English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
+ | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
+ | GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
+
+ ## NVIDIA-Sourced Synthetic Datasets
+
+ | Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
+ | :---- | :---- | :---- | :---- | :---- |
+ | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+ | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) |
+ | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) |
+ | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
+ | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Refreshed [Nemotron-MIND](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Nemotron-CC-Math-4plus | Text | 52.3B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Nemotron-CC-Math-3 | Text | 80.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) |
+ | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+ | Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
+ | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) |
+ | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | | \- | [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct) |
+ | Synthetic Common Crawl Code from phi-4 | Text | 427.9B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) |
+ | Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
+ | Tool Calling Data | Text | 26.2B | \- | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic Essential-Web from QwQ-32B | Text | 28.1B | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
+ | Translated Synthetic Crawl | Text | 389.9B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Translated Synthetic Wikipedia | Text | 7.9B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
+ | Synthetic Art of Problem Solving from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
+ | Synthetic Stack Exchange from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
+ | Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic HackerRank Coding from DeepSeek-R1-0528 | Text | Undisclosed | HackerRank Coding Dataset | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
+ | Synthetic Art of Problem Solving and Stack Exchange from gpt-oss-120b, Qwen2.5-32B-Instruct, and Goedel-Prover-V2-32B | Text | Undisclosed | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Goedel-Prover-V2-32B](https://huggingface.co/Goedel-LM/Goedel-Prover-V2-32B) |
+ | Synthetic Multilingual Science and Code data from DeepSeek-R1, DeepSeek-R1-0528, Qwen2.5-32B-Instruct, and Qwen3-235B-A22B, translated with Qwen2.5-32B-Instruct and Qwen2.5-14B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange); [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K); [LIMO](https://huggingface.co/datasets/GAIR/LIMO); [TACO](https://huggingface.co/datasets/BAAI/TACO); Code Contest; Codeforces | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) |
+ | Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b and Mixtral-8x7B-v0.1 | Text | Undisclosed | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [Gretel Synthetic Safety Alignment Dataset](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam-2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k); [Malicious Tasks](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) |
+ | Synthetic STEM from Qwen3-235B-A22B-Instruct-2507 and gpt-oss-120b | Text | Undisclosed | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic KernelBook from DeepSeek-R1-0528 | Text | Undisclosed | [KernelBook](https://huggingface.co/datasets/GPUMODE/KernelBook) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic Tool Calling from Qwen3-235B-A22B-Thinking-2507 and Qwen3-Next-80B-A3B-Thinking | Text | Undisclosed | [ToolBench](https://github.com/OpenBMB/ToolBench/tree/master); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507); [Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking) |
+ | Synthetic Chat from gpt-oss-120b, Mixtral-8x22B-Instruct-v0.1, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [C4](https://huggingface.co/datasets/allenai/c4); [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [ShareGPT](https://huggingface.co/datasets/RyokoAI/ShareGPT52K); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k); [FinQA](https://finqasite.github.io/); [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions); [Riddles](https://github.com/crawsome/riddles); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [SciBench](https://huggingface.co/datasets/xw27/scibench); [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k); [OpenBookQA](https://github.com/allenai/OpenBookQA); [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb); Software Heritage; [Khan Academy Math Keywords](https://www.khanacademy.org/math); [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
+ | Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
+ | Synthetic Tool Use Interactive Agent from gpt-oss-120b, DeepSeek-R1-0528, Qwen3-32B, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | NVIDIA Internal | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B); and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
+ | Synthetic STEM from Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [IChO-IPhO](https://huggingface.co/datasets/II-Vietnam/IChO-IPhO-RL-v2-formated); [Physics Big](https://huggingface.co/datasets/Vikhrmodels/physics_big); Scale HLE; [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
+ | Synthetic DocFinQA and SWE-smith from Qwen3-Coder-480B-A35B-Instruct and Kimi-K2-Thinking | Text | Undisclosed | [DocFinQA](https://huggingface.co/datasets/kensho/DocFinQA); [SWE-smith](https://huggingface.co/datasets/SWE-bench/SWE-smith) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct); [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking) |
+ | Synthetic Math from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | \- | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
+ | Synthetic Essential-Web from gpt-oss-120b | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic Scale HLE from gpt-oss-120b | Text | Undisclosed | Scale HLE | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic CDQuestions from gpt-oss-120b | Text | Undisclosed | [CDQuestions](https://cdquestions.com/) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic Stack Exchange from gpt-oss-120b | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic GPQA from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
+ | Synthetic Vedantu from gpt-oss-120b | Text | Undisclosed | [Vedantu](https://www.vedantu.com/) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
+ | Synthetic SWE-Gym and R2E-Gym-Subset from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym); [R2E-Gym-Subset](https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Subset) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
+ | Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym) | [Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
+ | Synthetic SWE-Gym and R2E-Gym-Subset from DeepSeek-R1-0528 | Text | Undisclosed | [SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym); [R2E-Gym-Subset](https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Subset) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
+ | Synthetic HelpSteer, LMSYS-Chat-1M, and Nemotron-Personas-USA from gpt-oss-120b, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2); [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3); [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
+ | Synthetic Structured Outputs from Qwen3-30B-A3B-Instruct-2507, Qwen3-30B-A3B-Thinking-2507, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | \- | [Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507); [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
692
+ | Synthetic Search STEM MCQ from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
693
+ | Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
694
+ | Synthetic OpenSTEM from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
695
+ | Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
696
+ | Synthetic MCQ10 from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
697
+ | Synthetic MCQ4 from Qwen3-235B-A22B, DeepSeek-R1-0528, and Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507) |
698
+ | Synthetic OpenMathReasoning from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) |
699
+ | Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
700
+ | Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | \- | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
701
+ | Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | \- | [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B); [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503); [Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506); [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k); [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k); [Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
702
+ | Synthetic WildChat-1M and arena-human-preference-140k from DeepSeek-R1, gemma-2-2b-it, gemma-3-27b-it, gpt-oss-20b, gpt-oss-120b, Mistral-7B-Instruct-v0.3, Mixtral-8x22B-Instruct-v0.1, Nemotron-4-340B-Instruct, NVIDIA-Nemotron-Nano-9B-v2, Phi-4-mini-instruct, Phi-3-small-8k-instruct, Phi-3-medium-4k-instruct, Qwen3-235B-A22B, QwQ-32B | Text | Undisclosed | [WildChat-1M](https://huggingface.co/datasets/allenai/WildChat-1M); [arena-human-preference-140k](https://huggingface.co/datasets/lmarena-ai/arena-human-preference-140k) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it); [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it); [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1); [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct); [NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2); [Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct); [Phi-3-small-8k-instruct](https://huggingface.co/microsoft/Phi-3-small-8k-instruct); [Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) |
703
+ | Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b, DeepSeek-R1-Distill-Qwen-7B, Qwen3-30B-A3B-Thinking-2507, Qwen3-235B-A22B-Instruct-2507, and Mixtral-8x7B-v0.1 | Text | Undisclosed | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [Gretel Synthetic Safety Alignment Dataset](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam-2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k); [Malicious Tasks](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B); [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507); [Qwen3-235B-A22B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507); [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) |
704
+ | Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
705
+ | Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
706
+ | Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | [LIMO](https://huggingface.co/datasets/GAIR/LIMO) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
707
+ | Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | [SCP-116K](https://huggingface.co/datasets/EricLu/SCP-116K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
708
+ | Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | [Stack Exchange](https://archive.org/details/stackexchange) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
709
+ | Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
710
+ | Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
711
+ | Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | [Essential-Web](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) |
712
+ | Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | [Common Crawl](https://commoncrawl.org/); [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [phi-4](https://huggingface.co/microsoft/phi-4) |
713
+ | Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K); [opc-sft-stage2](https://huggingface.co/datasets/OpenCoder-LLM/opc-sft-stage2); [TACO](https://huggingface.co/datasets/BAAI/TACO); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning); [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1); [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) |
714
+ | Synthetic Nemotron-Personas-USA from gpt-oss-120b and Qwen3-8B | Text | Undisclosed | [Nemotron-Personas-USA](https://huggingface.co/datasets/nvidia/Nemotron-Personas-USA) | [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b); [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) |
715
+
716
+ ## Training Dataset
717
+
718
+ | Dataset | \# of Tokens in Nemotron Nano 2 | \# of Tokens in Nemotron 3 Nano |
719
+ | :---- | :---- | :---- |
720
+ | English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210 |
721
+ | English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920 |
722
+ | Crawl++ | 360,389,153,262 | 360,389,153,262 |
723
+ | Math | 124,606,230,663 | 154,217,502,165 |
724
+ | Synthetic Math | 73,007,767,155 | 73,007,767,155 |
725
+ | Code | 747,409,228,724 | 1,043,856,922,136 |
726
+ | Synthetic Code | 175,067,553,293 | 453,117,917,176 |
727
+ | Common Crawl Code | 0 | 263,072,374,097 |
728
+ | English Wiki | 17,349,266,926 | 17,349,266,926 |
729
+ | Synthetic Wiki | 0 | 7,850,648,552 |
730
+ | Books | 0 | 0 |
731
+ | Papers | 191,586,493,365 | 191,586,493,365 |
732
+ | PDF-to-text | 141,096,578,533 | 141,096,578,533 |
733
+ | Code SFT | 60,025,726,817 | 102,863,752,325 |
734
+ | STEM SFT | 272,680,426,295 | 359,826,214,274 |
735
+ | General SFT | 6,057,478,645 | 6,057,478,645 |
736
+ | Tool-Calling SFT | 0 | 26,244,716,867 |
737
+ | Multilingual | 2,172,261,909,350 | 1,743,892,490,859 |
738
+ | Synthetic multilingual | 997,710,364,950 | 595,140,661,135 |
739
+ | **Total** | **10,648,823,153,919** | **13,336,833,827,602** |
740
+
741
+ We use a considerable amount of synthetic data. Of the 10.6 trillion tokens in the Nemotron Nano 2 mix, 3,534,013,958,278 are synthetically generated.
742
+
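The synthetic-token figure can be cross-checked against the table above: summing the synthetic rows of the Nemotron Nano 2 column reproduces it exactly.

```python
# Sum the synthetic rows of the "# of Tokens in Nemotron Nano 2" column
# from the training-dataset table above.
synthetic_rows = {
    "English Synthetic CC": 1_949_464_641_123,
    "Synthetic Math": 73_007_767_155,
    "Synthetic Code": 175_067_553_293,
    "Synthetic Wiki": 0,
    "Code SFT": 60_025_726_817,
    "STEM SFT": 272_680_426_295,
    "General SFT": 6_057_478_645,
    "Tool-Calling SFT": 0,
    "Synthetic multilingual": 997_710_364_950,
}
total = sum(synthetic_rows.values())
print(f"{total:,}")  # 3,534,013,958,278
```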
743
+ We extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively disabling filters that did not work well for particular languages. Deduplication was done in the same way as for Nemotron-CC. Additionally, we used data from Wikipedia and FineWeb-2 (Penedo et al., 2025\) for these fifteen languages as well as four additional languages: Czech, Finnish, Hebrew, and Hindi.
744
+
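As a rough illustration of the kind of language-agnostic heuristic filtering described above: the actual Nemotron-CC filters and thresholds are not specified here, so the rules and cutoffs below are hypothetical.

```python
# Hypothetical heuristic document filters in the spirit of the Nemotron-CC
# pipeline. The rule set and thresholds are illustrative only, not the ones
# used in training.
def keep_document(text: str, min_words: int = 50, max_symbol_ratio: float = 0.1) -> bool:
    words = text.split()
    if len(words) < min_words:  # drop very short pages
        return False
    # drop symbol-heavy boilerplate (navigation debris, ASCII art, etc.)
    symbols = sum(not (c.isalnum() or c.isspace()) for c in text)
    if symbols / max(len(text), 1) > max_symbol_ratio:
        return False
    return True

docs = ["word " * 100, "###" * 50]
print([keep_document(d) for d in docs])  # [True, False]
```

Per the paragraph above, some such filters would be disabled for languages where they misfire (for example, word-count heuristics behave differently for unsegmented scripts such as Chinese, Japanese, and Thai).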
745
+ | Language | Total Tokens |
746
+ | :---- | :---- |
747
+ | Arabic | 118,056,362,726 |
748
+ | Danish | 117,747,321,618 |
749
+ | German | 146,613,691,781 |
750
+ | Spanish | 469,156,575,409 |
751
+ | French | 139,982,002,289 |
752
+ | Italian | 298,858,370,174 |
753
+ | Japanese | 682,755,693,336 |
754
+ | Korean | 127,099,747,538 |
755
+ | Dutch | 89,041,592,681 |
756
+ | Polish | 105,356,493,147 |
757
+ | Portuguese | 243,249,275,089 |
758
+ | Russian | 185,314,014,057 |
759
+ | Swedish | 74,954,953,299 |
760
+ | Thai | 160,778,944,467 |
761
+ | Chinese | 211,007,236,689 |
762
+
763
+ We collect a total of 922,476,782,017 tokens of code in 43 different languages.
764
+
765
+ | Language | Tokens |
766
+ | :---- | :---- |
767
+ | Assembly | 750,628,764 |
768
+ | C | 42,657,300,868 |
769
+ | C\# | 56,153,329,307 |
770
+ | C++ | 67,773,701,658 |
771
+ | CommonLisp | 263,234,672 |
772
+ | CSS | 38,848,760,035 |
773
+ | Cuda | 400,222,993 |
774
+ | Dart | 3,816,960,470 |
775
+ | Dockerfile | 474,958,084 |
776
+ | Fortran | 1,105,049,387 |
777
+ | Go | 8,332,419,480 |
778
+ | Haskell | 1,294,613,669 |
779
+ | HTML | 69,082,117,487 |
780
+ | Java | 131,440,465,822 |
781
+ | JavaScript | 75,573,420,861 |
782
+ | JSON | 15,366,881,241 |
783
+ | Julia | 621,046,949 |
784
+ | JupyterNotebook | 2,241,893,197 |
785
+ | Lua | 4,146,420,802 |
786
+ | Makefile | 12,640,010,879 |
787
+ | Markdown | 64,796,743,311 |
788
+ | Mathematica | 320,504,225 |
789
+ | OmniversePython | 26,946,093 |
790
+ | Pascal | 1,625,013,876 |
791
+ | Perl | 1,575,314,434 |
792
+ | PHP | 61,575,339,005 |
793
+ | Python | 126,916,727,384 |
794
+ | R | 19,811,381,935 |
795
+ | reStructuredText | 1,779,876,391 |
796
+ | Ruby | 6,446,962,615 |
797
+ | Rust | 4,438,640,533 |
798
+ | Scala | 3,343,959,154 |
799
+ | Shell | 18,758,779,250 |
800
+ | SQL | 23,205,633,085 |
801
+ | Swift | 5,976,714,881 |
802
+ | SystemVerilog | 233,056,185 |
803
+ | TeX | 7,347,157,527 |
804
+ | TypeScript | 15,657,838,582 |
805
+ | Verilog | 811,884,369 |
806
+ | VHDL | 648,401,444 |
807
+ | VisualBasic.NET | 1,005,680,881 |
808
+ | XML | 12,616,779,741 |
809
+ | YAML | 10,574,010,491 |
810
+
811
+ ## Language Distribution in Post-Training
812
+
813
+ For our post-training recipe, we focused on five languages in addition to English: Spanish, French, Japanese, Italian, and German.
814
+ These languages were represented in the form of multilingual reasoning and translation tasks.
815
+
816
+ The following table shows the sample distribution across the six languages and five translation pairs.
817
+
818
+ | Language | Size |
819
+ | :---- | :---- |
820
+ | English | 16.2M |
821
+ | Italian | 0.252M |
822
+ | German | 0.252M |
823
+ | Spanish | 0.252M |
824
+ | French | 0.252M |
825
+ | Japanese | 0.252M |
826
+ | English \<-\> Italian | 108k |
827
+ | English \<-\> German | 108k |
828
+ | English \<-\> Spanish | 108k |
829
+ | English \<-\> French | 108k |
830
+ | English \<-\> Japanese | 108k |
831
+
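The table above can be summed directly: the mix totals roughly 18.0M samples, of which English accounts for 90%.

```python
# Sample counts from the post-training language table above (in millions).
samples = {"English": 16.2, "Italian": 0.252, "German": 0.252,
           "Spanish": 0.252, "French": 0.252, "Japanese": 0.252}
translation_pairs = {f"English <-> {lang}": 0.108 for lang in
                     ["Italian", "German", "Spanish", "French", "Japanese"]}
total = sum(samples.values()) + sum(translation_pairs.values())
print(round(total, 1), round(samples["English"] / total, 2))  # 18.0 0.9
```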
832
+ ## Evaluation Dataset
833
+
834
+ * Data Collection Method by dataset: Hybrid: Human, Synthetic
835
+ * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
836
+
837
+ ## Inference
838
+
839
+ - Engines: HF, vLLM, TRT-LLM, SGLang, Llama.cpp
840
+ - Test Hardware: NVIDIA A100 80GB, H100 80GB, B200 192GB, RTX PRO 6000 96GB, Jetson Thor
841
+
842
+
843
+ ## Ethical Considerations
844
+
845
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
846
+
847
+ We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details, see the [Safety](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/safety.md) and [Explainability](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/explainability.md) Subcards.
848
+
849
+ For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/bias.md) and [Privacy](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/blob/main/privacy.md) Subcards.
850
+
851
+ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
852
+
853
+ ## Citation
854
+
855
+ ```
856
+
857
+ @misc{nvidia_nemotron_nano_v3_2025,
858
+ title = {{Nemotron 3 Nano}: Open, Efficient Mixture-of-Experts Hybrid {Mamba}-{Transformer} Model for {Agentic} Reasoning},
859
+ author = {{NVIDIA}},
860
+ year = {2025},
861
+ url = {https://arxiv.org/abs/2512.20848},
862
+ note = {Technical report}
863
+ }
864
+ ```
accuracy_chart.png ADDED

Git LFS Details

  • SHA256: 5fc15c89729277632af6491fd62b023af7e36345c1131a3f5e9801f8dacd54f8
  • Pointer size: 131 Bytes
  • Size of remote file: 191 kB
bias.md ADDED
@@ -0,0 +1,10 @@
1
+ | Field | Response |
2
+ | :---- | :---- |
3
+ | Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
4
+ | Bias Metric (If Measured): | [BBQ Accuracy Scores in Ambiguous Contexts](https://github.com/nyu-mll/BBQ/) |
5
+ | Which characteristic (feature) show(s) the greatest difference in performance?: | The model shows high variance in the characteristics when it is used with a high temperature. |
6
+ | Which feature(s) have the worst performance overall? | Age |
7
+ | Measures taken to mitigate against unwanted bias: | None |
8
+ | If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: | The training datasets contain a large amount of synthetic data generated by LLMs. We manually curated prompts. |
9
+ | Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: | [BBQ](https://github.com/nyu-mll/BBQ/) |
10
+ | Description of demographic representation in the pretraining datasets: | These datasets, such as Common Crawl, CC-News, and Wikimedia, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in over 85% of samples. In the subset where such terms are present, Common Crawl and CC-News contain notable representational skews—for example, references to "male" significantly outnumber those to "female," and mentions of "White" are the most frequent among ethnic identifiers. To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy, and includes outputs from uncalibrated embedders; as such, certain limitations may exist in the reliability of the embedding. |
chat_template.jinja ADDED
@@ -0,0 +1,204 @@
1
+ {% macro render_extra_keys(json_dict, handled_keys) %}
2
+ {%- if json_dict is mapping %}
3
+ {%- for json_key in json_dict if json_key not in handled_keys %}
4
+ {%- if json_dict[json_key] is mapping or (json_dict[json_key] is sequence and json_dict[json_key] is not string) %}
5
+ {{- '\n<' ~ json_key ~ '>' ~ (json_dict[json_key] | tojson | safe) ~ '</' ~ json_key ~ '>' }}
6
+ {%- else %}
7
+ {{-'\n<' ~ json_key ~ '>' ~ (json_dict[json_key] | string) ~ '</' ~ json_key ~ '>' }}
8
+ {%- endif %}
9
+ {%- endfor %}
10
+ {%- endif %}
11
+ {% endmacro %}
12
+ {%- set enable_thinking = enable_thinking if enable_thinking is defined else True %}
13
+ {%- set truncate_history_thinking = truncate_history_thinking if truncate_history_thinking is defined else True %}
14
+
15
+ {%- set ns = namespace(last_user_idx = -1) %}
16
+ {%- set loop_messages = messages %}
17
+ {%- for m in loop_messages %}
18
+ {%- if m["role"] == "user" %}
19
+ {%- set ns.last_user_idx = loop.index0 %}
20
+ {%- endif %}
21
+ {%- endfor %}
22
+
23
+ {%- if messages[0]["role"] == "system" %}
24
+ {%- set system_message = messages[0]["content"] %}
25
+ {%- set loop_messages = messages[1:] %}
26
+ {%- else %}
27
+ {%- set system_message = "" %}
28
+ {%- set loop_messages = messages %}
29
+ {%- endif %}
30
+ {%- if not tools is defined %}
31
+ {%- set tools = [] %}
32
+ {%- endif %}
33
+ {# Recompute last_user_idx relative to loop_messages after handling system #}
34
+ {%- set ns = namespace(last_user_idx = -1) %}
35
+ {%- for m in loop_messages %}
36
+ {%- if m["role"] == "user" %}
37
+ {%- set ns.last_user_idx = loop.index0 %}
38
+ {%- endif %}
39
+ {%- endfor %}
40
+ {%- if system_message is defined %}
41
+ {{- "<|im_start|>system\n" + system_message }}
42
+ {%- else %}
43
+ {%- if tools is iterable and tools | length > 0 %}
44
+ {{- "<|im_start|>system\n" }}
45
+ {%- endif %}
46
+ {%- endif %}
47
+ {%- if tools is iterable and tools | length > 0 %}
48
+ {%- if system_message is defined and system_message | length > 0 %}
49
+ {{- "\n\n" }}
50
+ {%- endif %}
51
+ {{- "# Tools\n\nYou have access to the following functions:\n\n" }}
52
+ {{- "<tools>" }}
53
+ {%- for tool in tools %}
54
+ {%- if tool.function is defined %}
55
+ {%- set tool = tool.function %}
56
+ {%- endif %}
57
+ {{- "\n<function>\n<name>" ~ tool.name ~ "</name>" }}
58
+ {%- if tool.description is defined %}
59
+ {{- '\n<description>' ~ (tool.description | trim) ~ '</description>' }}
60
+ {%- endif %}
61
+ {{- '\n<parameters>' }}
62
+ {%- if tool.parameters is defined and tool.parameters is mapping and tool.parameters.properties is defined and tool.parameters.properties is mapping %}
63
+ {%- for param_name, param_fields in tool.parameters.properties|items %}
64
+ {{- '\n<parameter>' }}
65
+ {{- '\n<name>' ~ param_name ~ '</name>' }}
66
+ {%- if param_fields.type is defined %}
67
+ {{- '\n<type>' ~ (param_fields.type | string) ~ '</type>' }}
68
+ {%- endif %}
69
+ {%- if param_fields.description is defined %}
70
+ {{- '\n<description>' ~ (param_fields.description | trim) ~ '</description>' }}
71
+ {%- endif %}
72
+ {%- if param_fields.enum is defined %}
73
+ {{- '\n<enum>' ~ (param_fields.enum | tojson | safe) ~ '</enum>' }}
74
+ {%- endif %}
75
+ {%- set handled_keys = ['name', 'type', 'description', 'enum'] %}
76
+ {{- render_extra_keys(param_fields, handled_keys) }}
77
+ {{- '\n</parameter>' }}
78
+ {%- endfor %}
79
+ {%- endif %}
80
+ {% set handled_keys = ['type', 'properties', 'required'] %}
81
+ {{- render_extra_keys(tool.parameters, handled_keys) }}
82
+ {%- if tool.parameters is defined and tool.parameters.required is defined %}
83
+ {{- '\n<required>' ~ (tool.parameters.required | tojson | safe) ~ '</required>' }}
84
+ {%- endif %}
85
+ {{- '\n</parameters>' }}
86
+ {%- set handled_keys = ['type', 'name', 'description', 'parameters'] %}
87
+ {{- render_extra_keys(tool, handled_keys) }}
88
+ {{- '\n</function>' }}
89
+ {%- endfor %}
90
+ {{- "\n</tools>" }}
91
+
92
+ {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
93
+ {%- endif %}
94
+
95
+
96
+ {%- if system_message is defined %}
97
+ {{- '<|im_end|>\n' }}
98
+ {%- else %}
99
+ {%- if tools is iterable and tools | length > 0 %}
100
+ {{- '<|im_end|>\n' }}
101
+ {%- endif %}
102
+ {%- endif %}
103
+
104
+ {%- for message in loop_messages %}
105
+ {%- if message.role == "assistant" %}
106
+ {# Add reasoning content in to content field for unified processing below. #}
107
+ {%- if message.reasoning_content is defined and message.reasoning_content is string and message.reasoning_content | trim | length > 0 %}
108
+ {%- set content = "<think>\n" ~ message.reasoning_content ~ "\n</think>\n" ~ (message.content | default('', true)) %}
109
+ {%- else %}
110
+ {%- set content = message.content | default('', true) %}
111
+ {%- if content is string -%}
112
+ {# Allow downstream logic to take care of broken thought, only handle coherent reasoning here. #}
113
+ {%- if '<think>' not in content and '</think>' not in content -%}
114
+ {%- set content = "<think></think>" ~ content -%}
115
+ {%- endif -%}
116
+ {%- else -%}
117
+ {%- set content = content -%}
118
+ {%- endif -%}
119
+ {%- endif %}
120
+ {%- if message.tool_calls is defined and message.tool_calls is iterable and message.tool_calls | length > 0 %}
121
+ {# Assistant message has tool calls. #}
122
+ {{- '<|im_start|>assistant\n' }}
123
+ {%- set include_content = not (truncate_history_thinking and loop.index0 < ns.last_user_idx) %}
124
+ {%- if content is string and content | trim | length > 0 %}
125
+ {%- if include_content %}
126
+ {{- (content | trim) ~ '\n' -}}
127
+ {%- else %}
128
+ {%- set c = (content | string) %}
129
+ {%- if '</think>' in c %}
130
+ {# Keep only content after the last closing think. Also generation prompt causes this. #}
131
+ {%- set c = c.split('</think>')[-1] %}
132
+ {%- elif '<think>' in c %}
133
+ {# If <think> was opened but never closed, drop the trailing think segment #}
134
+ {%- set c = c.split('<think>')[0] %}
135
+ {%- endif %}
136
+ {%- set c = "<think></think>" ~ c | trim %}
137
+ {%- if c | length > 0 %}
138
+ {{- c ~ '\n' -}}
139
+ {%- endif %}
140
+ {%- endif %}
141
+ {%- else %}
142
+ {{- "<think></think>" -}}
143
+ {%- endif %}
144
+ {%- for tool_call in message.tool_calls %}
145
+ {%- if tool_call.function is defined %}
146
+ {%- set tool_call = tool_call.function %}
147
+ {%- endif %}
148
+ {{- '<tool_call>\n<function=' ~ tool_call.name ~ '>\n' -}}
149
+ {%- if tool_call.arguments is defined %}
150
+ {%- for args_name, args_value in tool_call.arguments|items %}
151
+ {{- '<parameter=' ~ args_name ~ '>\n' -}}
152
+ {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
153
+ {{- args_value ~ '\n</parameter>\n' -}}
154
+ {%- endfor %}
155
+ {%- endif %}
156
+ {{- '</function>\n</tool_call>\n' -}}
157
+ {%- endfor %}
158
+ {{- '<|im_end|>\n' }}
159
+ {%- else %}
160
+ {# Assistant message doesn't have tool calls. #}
161
+ {%- if not (truncate_history_thinking and loop.index0 < ns.last_user_idx) %}
162
+ {{- '<|im_start|>assistant\n' ~ (content | default('', true) | string | trim) ~ '<|im_end|>\n' }}
163
+ {%- else %}
164
+ {%- set c = (content | default('', true) | string) %}
165
+ {%- if '<think>' in c and '</think>' in c %}
166
+ {%- set c = "<think></think>" ~ c.split('</think>')[-1] %}
167
+ {%- endif %}
168
+ {%- set c = c | trim %}
169
+ {%- if c | length > 0 %}
170
+ {{- '<|im_start|>assistant\n' ~ c ~ '<|im_end|>\n' }}
171
+ {%- else %}
172
+ {{- '<|im_start|>assistant\n<|im_end|>\n' }}
173
+ {%- endif %}
174
+ {%- endif %}
175
+ {%- endif %}
176
+ {%- elif message.role == "user" or message.role == "system" %}
177
+ {{- '<|im_start|>' + message.role + '\n' }}
178
+ {%- set content = message.content | string %}
179
+ {{- content }}
180
+ {{- '<|im_end|>\n' }}
181
+ {%- elif message.role == "tool" %}
182
+ {%- if loop.previtem and loop.previtem.role != "tool" %}
183
+ {{- '<|im_start|>user\n' }}
184
+ {%- endif %}
185
+ {{- '<tool_response>\n' }}
186
+ {{- message.content }}
187
+ {{- '\n</tool_response>\n' }}
188
+ {%- if not loop.last and loop.nextitem.role != "tool" %}
189
+ {{- '<|im_end|>\n' }}
190
+ {%- elif loop.last %}
191
+ {{- '<|im_end|>\n' }}
192
+ {%- endif %}
193
+ {%- else %}
194
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>\n' }}
195
+ {%- endif %}
196
+ {%- endfor %}
197
+
198
+ {%- if add_generation_prompt %}
199
+ {%- if enable_thinking %}
200
+ {{- '<|im_start|>assistant\n<think>\n' }}
201
+ {%- else %}
202
+ {{- '<|im_start|>assistant\n<think></think>' }}
203
+ {%- endif %}
204
+ {%- endif %}
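The tool-call wire format that this template instructs the model to emit (`<tool_call>` wrapping a `<function=...>` block with `<parameter=...>` children) can be recovered with a couple of regular expressions. A minimal, illustrative parser follows; it is not part of the released code, and the official inference engines ship their own parsers.

```python
import re

# Matches the <tool_call><function=...>...</function></tool_call> format
# defined in the chat template above.
CALL_RE = re.compile(
    r"<tool_call>\s*<function=(?P<name>[^>]+)>(?P<body>.*?)</function>\s*</tool_call>",
    re.DOTALL,
)
PARAM_RE = re.compile(
    r"<parameter=(?P<name>[^>]+)>\n(?P<value>.*?)\n</parameter>", re.DOTALL
)

def parse_tool_calls(text: str) -> list[dict]:
    calls = []
    for m in CALL_RE.finditer(text):
        params = {p["name"]: p["value"] for p in PARAM_RE.finditer(m["body"])}
        calls.append({"name": m["name"], "arguments": params})
    return calls

sample = (
    "<tool_call>\n<function=get_weather>\n"
    "<parameter=city>\nParis\n</parameter>\n"
    "</function>\n</tool_call>"
)
print(parse_tool_calls(sample))
# [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```

Note the `re.DOTALL` flag: the template explicitly allows parameter values to span multiple lines, so `.` must match newlines.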
config.json ADDED
@@ -0,0 +1,70 @@
1
+ {
2
+ "architectures": [
3
+ "NemotronHForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_nemotron_h.NemotronHConfig",
9
+ "AutoModel": "modeling_nemotron_h.NemotronHForCausalLM",
10
+ "AutoModelForCausalLM": "modeling_nemotron_h.NemotronHForCausalLM"
11
+ },
12
+ "bos_token_id": 1,
13
+ "chunk_size": 128,
14
+ "conv_kernel": 4,
15
+ "eos_token_id": 2,
16
+ "expand": 2,
17
+ "head_dim": 128,
18
+ "hidden_dropout": 0.0,
19
+ "hidden_size": 2688,
20
+ "hybrid_override_pattern": "MEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEMEM*EMEMEMEME",
21
+ "initializer_range": 0.02,
22
+ "intermediate_size": 1856,
23
+ "layer_norm_epsilon": 1e-05,
24
+ "mamba_head_dim": 64,
25
+ "mamba_hidden_act": "silu",
26
+ "mamba_num_heads": 64,
27
+ "mamba_proj_bias": false,
28
+ "mamba_ssm_cache_dtype": "float32",
29
+ "max_position_embeddings": 262144,
30
+ "mlp_bias": false,
31
+ "mlp_hidden_act": "relu2",
32
+ "model_type": "nemotron_h",
33
+ "moe_intermediate_size": 1856,
34
+ "moe_shared_expert_intermediate_size": 3712,
35
+ "n_group": 1,
36
+ "n_groups": 8,
37
+ "n_routed_experts": 128,
38
+ "n_shared_experts": 1,
39
+ "norm_eps": 1e-05,
40
+ "norm_topk_prob": true,
41
+ "num_attention_heads": 32,
42
+ "num_experts_per_tok": 6,
43
+ "num_hidden_layers": 52,
44
+ "num_key_value_heads": 2,
45
+ "num_logits_to_keep": 1,
46
+ "pad_token_id": 0,
47
+ "partial_rotary_factor": 1.0,
48
+ "rescale_prenorm_residual": true,
49
+ "residual_in_fp32": false,
50
+ "rope_theta": 10000,
51
+ "routed_scaling_factor": 2.5,
52
+ "sliding_window": null,
53
+ "ssm_state_size": 128,
54
+ "tie_word_embeddings": false,
55
+ "time_step_floor": 0.0001,
56
+ "time_step_limit": [
57
+ 0.0,
58
+ Infinity
59
+ ],
60
+ "time_step_max": 0.1,
61
+ "time_step_min": 0.001,
62
+ "topk_group": 1,
63
+ "torch_dtype": "bfloat16",
64
+ "transformers_version": "4.55.4",
65
+ "use_bias": false,
66
+ "use_cache": true,
67
+ "use_conv_bias": true,
68
+ "use_mamba_kernels": true,
69
+ "vocab_size": 131072
70
+ }
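The `hybrid_override_pattern` in the config above encodes the per-layer block type. In Nemotron-H-style configs, `M` denotes a Mamba-2 layer and `*` an attention layer; `E` here plausibly denotes an MoE MLP layer, though that reading is inferred rather than stated in the file. The pattern's length matches `num_hidden_layers`:

```python
from collections import Counter

# hybrid_override_pattern from config.json above.
pattern = "MEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEM*EMEMEMEM*EMEMEMEME"
assert len(pattern) == 52  # equals num_hidden_layers
print(Counter(pattern))  # Counter({'M': 23, 'E': 23, '*': 6})
```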
configuration_nemotron_h.py ADDED
@@ -0,0 +1,262 @@
+ # coding=utf-8
+ # Copyright 2024 AI21 Labs Ltd. and the HuggingFace Inc. team. All rights reserved.
+ # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """NemotronH model configuration"""
+
+ import re
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class NemotronHConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`NemotronHModel`]. It is used to instantiate a
+     NemotronH model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a configuration similar to that of the NemotronH-v0.1 model.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 131072):
+             Vocabulary size of the NemotronH model. Defines the number of different tokens that can be represented
+             by the `inputs_ids` passed when calling [`NemotronHModel`].
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether the model's input and output word embeddings should be tied. Note that this is only relevant
+             if the model has an output word embedding layer.
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 21504):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 52):
+             Number of hidden layers in the Transformer encoder.
+         hybrid_override_pattern (`str`, *optional*, defaults to `"M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-"`):
+             The layer pattern of the hybrid model: one character per layer, where `M` is a Mamba2 layer, `*` is an
+             attention layer, and `-` is an MLP layer.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         head_dim (`int`, *optional*, defaults to 128):
+             Dimension of each attention head.
+         num_key_value_heads (`int`, *optional*, defaults to 8):
+             This is the number of key/value heads used to implement Grouped Query Attention (GQA). If
+             `num_key_value_heads=num_attention_heads`, the model uses Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model uses Multi Query Attention (MQA); otherwise GQA is used.
+         mlp_hidden_act (`str`, *optional*, defaults to `"relu2"`):
+             The non-linear activation function in the MLP layers.
+         attention_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use bias in attention layers.
+         mlp_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use bias in MLP layers.
+         use_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use bias in the model.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
+             The epsilon used by the layer normalization layers.
+         residual_in_fp32 (`bool`, *optional*, defaults to `False`):
+             Whether or not residuals should be in `float32`. If set to `False`, residuals will keep the same
+             `dtype` as the rest of the model.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         num_logits_to_keep (`int` or `None`, *optional*, defaults to 1):
+             Number of prompt logits to calculate during generation. If `None`, all logits will be calculated. If
+             an integer value, only the last `num_logits_to_keep` logits will be calculated.
+         pad_token_id (`int`, *optional*, defaults to 0):
+             The id of the padding token.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             The id of the "end-of-sequence" token.
+         sliding_window (`int`, *optional*, defaults to `None`):
+             Sliding window attention window size.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         hidden_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the hidden states.
+         use_mamba_kernels (`bool`, *optional*, defaults to `True`):
+             Flag indicating whether or not to use the fast mamba kernels. These are available only if `mamba-ssm`
+             and `causal-conv1d` are installed, and the mamba modules are running on a CUDA device.
+         ssm_state_size (`int`, *optional*, defaults to 128):
+             The dimension of the mamba state space latents.
+         mamba_num_heads (`int`, *optional*, defaults to 128):
+             Number of heads in Mamba layers.
+         mamba_n_groups (`int`, *optional*, defaults to 8):
+             Number of groups in Mamba layers.
+         mamba_head_dim (`int`, *optional*, defaults to 64):
+             Dimension of each Mamba head.
+         mamba_d_conv (`int`, *optional*, defaults to 4):
+             The size of the mamba convolution kernel.
+         mamba_expand (`int`, *optional*, defaults to 2):
+             Expanding factor used to determine the mamba intermediate size.
+         mamba_hidden_act (`str`, *optional*, defaults to `"silu"`):
+             The non-linear activation function in the Mamba layers.
+         mamba_dt_min (`float`, *optional*, defaults to 0.001):
+             Minimum value for the time step in Mamba.
+         mamba_dt_max (`float`, *optional*, defaults to 0.1):
+             Maximum value for the time step in Mamba.
+         mamba_dt_limit (`tuple`, *optional*, defaults to `(0.0, float("inf"))`):
+             Limits for the time step in Mamba.
+         mamba_dt_init_floor (`float`, *optional*, defaults to 1e-4):
+             Floor value for time step initialization in Mamba.
+         mamba_conv_bias (`bool`, *optional*, defaults to `True`):
+             Whether to use bias in the convolution layer of the mamba mixer block.
+         mamba_proj_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use bias in the input and output projections of the mamba mixer block.
+         mamba_chunk_size (`int`, *optional*, defaults to 128):
+             Size of chunks for Mamba processing.
+         rescale_prenorm_residual (`bool`, *optional*, defaults to `True`):
+             Whether to rescale the pre-normalization residual connections.
+     """
+
+     model_type = "nemotron_h"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=131072,
+         tie_word_embeddings=False,
+         hidden_size=4096,
+         intermediate_size=21504,
+         num_hidden_layers=52,
+         hybrid_override_pattern="M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-",
+         num_attention_heads=32,
+         head_dim=128,
+         num_key_value_heads=8,  # nemo: num_query_groups
+         mlp_hidden_act="relu2",
+         attention_bias=False,
+         mlp_bias=False,
+         use_bias=False,
+         initializer_range=0.02,  # nemo: init_method_std
+         layer_norm_epsilon=1e-5,  # nemo: layernorm_epsilon
+         residual_in_fp32=False,  # Megatron Core default value
+         use_cache=True,
+         num_logits_to_keep=1,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         sliding_window=None,
+         max_position_embeddings=4096,
+         attention_dropout=0.0,
+         hidden_dropout=0.0,
+         use_mamba_kernels=True,
+         ssm_state_size=128,  # nemo: mamba_state_size
+         mamba_num_heads=128,
+         mamba_n_groups=8,  # nemo: mamba_ssm_ngroups
+         mamba_head_dim=64,
+         mamba_d_conv=4,
+         mamba_expand=2,
+         mamba_hidden_act="silu",
+         mamba_dt_min=0.001,
+         mamba_dt_max=0.1,
+         mamba_dt_limit=(0.0, float("inf")),
+         mamba_dt_init_floor=1e-4,
+         mamba_conv_bias=True,
+         mamba_proj_bias=False,
+         mamba_chunk_size=128,
+         rescale_prenorm_residual=True,
+         n_routed_experts=8,
+         n_shared_experts=1,
+         moe_intermediate_size=7688,
+         moe_shared_expert_intermediate_size=7688,
+         num_experts_per_tok=2,
+         routed_scaling_factor=1.0,
+         n_group=1,
+         topk_group=1,
+         norm_topk_prob=True,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.tie_word_embeddings = tie_word_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.hybrid_override_pattern = hybrid_override_pattern
+         self.num_attention_heads = num_attention_heads
+         self.head_dim = head_dim
+         self.sliding_window = sliding_window
+         self.max_position_embeddings = max_position_embeddings
+         self.attention_dropout = attention_dropout
+         self.hidden_dropout = hidden_dropout
+
+         # Validate hybrid_override_pattern
+         # M: Mamba2, *: Attention, -: MLP
+         # Note: the character class puts '-' last so it is treated literally, not as a range.
+         assert len(self.hybrid_override_pattern) == self.num_hidden_layers, "hybrid_override_pattern must have the same length as num_hidden_layers"
+         assert re.match(r"^[M*-]+$", self.hybrid_override_pattern), "hybrid_override_pattern must only contain characters 'M', '*', or '-'"
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.mlp_hidden_act = mlp_hidden_act
+         self.attention_bias = attention_bias
+         self.mlp_bias = mlp_bias
+         self.use_bias = use_bias
+         self.initializer_range = initializer_range
+         self.layer_norm_epsilon = layer_norm_epsilon
+         self.residual_in_fp32 = residual_in_fp32
+
+         self.use_cache = use_cache
+         self.num_logits_to_keep = num_logits_to_keep
+
+         self.use_mamba_kernels = use_mamba_kernels
+         self.n_groups = mamba_n_groups
+         self.mamba_head_dim = mamba_head_dim
+         self.ssm_state_size = ssm_state_size
+         self.mamba_num_heads = mamba_num_heads
+         self.conv_kernel = mamba_d_conv
+         self.expand = mamba_expand
+         self.mamba_hidden_act = mamba_hidden_act
+         self.time_step_min = mamba_dt_min
+         self.time_step_max = mamba_dt_max
+         self.time_step_limit = mamba_dt_limit
+         self.time_step_floor = mamba_dt_init_floor
+         self.use_conv_bias = mamba_conv_bias
+         self.mamba_proj_bias = mamba_proj_bias
+         self.chunk_size = mamba_chunk_size
+         self.rescale_prenorm_residual = rescale_prenorm_residual
+         self.n_routed_experts = n_routed_experts
+         self.n_shared_experts = n_shared_experts
+         self.moe_intermediate_size = moe_intermediate_size
+         self.moe_shared_expert_intermediate_size = moe_shared_expert_intermediate_size
+         self.num_experts_per_tok = num_experts_per_tok
+         self.routed_scaling_factor = routed_scaling_factor
+         self.n_group = n_group
+         self.topk_group = topk_group
+         self.norm_topk_prob = norm_topk_prob
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     @property
+     def layers_block_type(self):
+         return [
+             "mamba" if self.hybrid_override_pattern[i] == "M" else
+             "attention" if self.hybrid_override_pattern[i] == "*" else
+             "mlp" if self.hybrid_override_pattern[i] == "-" else "moe"
+             for i in range(self.num_hidden_layers)
+         ]
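The `layers_block_type` property above encodes the entire hybrid layout. A standalone sketch (`decode_pattern` is a hypothetical helper using the same character mapping) shows what the default 52-character pattern decodes to:

```python
def decode_pattern(pattern: str) -> list:
    # Same mapping as layers_block_type: M -> Mamba2 mixer, * -> self-attention, - -> MLP
    mapping = {"M": "mamba", "*": "attention", "-": "mlp"}
    return [mapping[c] for c in pattern]

pattern = "M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-"
layers = decode_pattern(pattern)
# 52 layers in total: alternating Mamba2/MLP blocks with 4 attention layers spread through the stack
```

This makes the design explicit: attention appears only every tenth layer or so, while the bulk of the sequence mixing is done by the constant-state Mamba2 layers.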
explainability.md ADDED
@@ -0,0 +1,14 @@
+ | Field | Response |
+ | :---- | :---- |
+ | Intended Task/Domain: | Text generation, reasoning, and chat |
+ | Model Type: | Text-to-text Mamba2-Transformer hybrid |
+ | Intended Users: | Generative AI creators working with conversational AI models and image content. |
+ | Output: | Text |
+ | Tools used to evaluate datasets to identify synthetic data and ensure data authenticity: | We used a Gemma-3 4B-based filtering model fine-tuned on the [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) to ensure the quality of synthetic data. |
+ | Describe how the model works: | Generates text by predicting the next word or token based on the context provided in the input sequence, using a stack of Mamba2 and self-attention layers. |
+ | Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
+ | Technical Limitations & Mitigation: | This model performs particularly well in instruction-following regimes and, as such, may be strongly influenced by untrusted inputs; it should be paired with appropriate guardrails and data filtering to better align use-case behaviors when exposed to such data. |
+ | Verified to have met prescribed NVIDIA quality standards: | Yes |
+ | Performance Metrics: | Accuracy, throughput, and user-side throughput |
+ | Potential Known Risks: | The model was optimized explicitly for instruction following and, as a result of its instruction tuning, may be influenced by untrusted inputs (prompt injection, indirect prompt injection, jailbreaking, web search, etc.) in ways that degrade safety alignment and other training efforts. This model should be paired with additional guardrails and data filtering to limit exposure to instructions from malicious sources. Bypassing safety alignment, system guardrails, and filters may allow harmful outcomes, up to and including remote code execution in some agentic systems, when effective security controls are not in place. The model was trained on data that contains toxic language and societal biases originally crawled from the internet; it may therefore generate and amplify harmful, biased, or otherwise unsafe content, reinforcing these biases and returning toxic responses, especially when prompted with toxic prompts. The model may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself contains nothing explicitly offensive. The model may exhibit self-anthropomorphism (e.g., displaying human-like characteristics in dialogue, such as expressing preferences and emotions). In integrated system contexts, the model could potentially be exploited to access or disclose information beyond its intended permissions or scope of operation. |
+ | Licensing: | [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) |
generation_config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "_from_model_config": true,
+   "do_sample": true,
+   "bos_token_id": 1,
+   "eos_token_id": [2, 11],
+   "pad_token_id": 0,
+   "temperature": 1.0,
+   "top_p": 1.0,
+   "transformers_version": "4.55.4"
+ }
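With `do_sample: true`, `temperature: 1.0`, and `top_p: 1.0`, this config amounts to plain ancestral sampling: a top-p of 1.0 keeps the entire distribution. As a rough sketch of what the `top_p` knob does (written from the general nucleus-sampling definition, not the Transformers implementation; `top_p_filter` is a hypothetical helper):

```python
def top_p_filter(probs, top_p=1.0):
    # Keep the smallest set of tokens whose cumulative probability reaches top_p,
    # then renormalize the kept probabilities.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = [], 0.0
    for i in order:
        keep.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

kept = top_p_filter([0.5, 0.3, 0.2], top_p=0.7)
```

Lowering `top_p` below 1.0 would prune the unlikely tail before sampling; at 1.0, every token stays eligible.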
model-00001-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c77b0f1717f1fb11791fb62fc57ca56f59fd1427ac466849ef9705ac90729ea
+ size 4991205008
model-00002-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e3de804d8c8bc6607a86d486f47301822a17f274e3c54425e71ff3516cde9b6
+ size 4992601472
model-00003-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e113d2a3f81515599744eab31a7ea4f6cb4e6fc2089fedeb22137e14ee792c9f
+ size 4992601824
model-00004-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c64af357042231114495897474859e515c7d9ac00a7819ecf57d634ad8753ec5
+ size 4995693256
model-00005-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aa5867c6483ac2d52891e5cdce00ee49840c2c33709f3242b05e4682b39ead0
+ size 4980545984
model-00006-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd411a714fad4954ee87fa76554ef79d3c85d309aff83c88b758061bf46009f1
+ size 4999410040
model-00007-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d16bc0bd0521e93b799e66ec913b2548417578a5f290f7023c4045dcd002f647
+ size 4992601952
model-00008-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc0aea38d897f28b9cc506d0fa2c2ae040562c185691d11351478841f1f474cb
+ size 4992601976
model-00009-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34b4715b5765fdc8fb496573a4c6c8536ad426c40d266a47d0ce1f22de441c3f
+ size 4995693256
model-00010-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4124abfaa922336fa8a6ba1b8f55010caac4d62a24e5990e1a819266cedcd494
+ size 4992601976
model-00011-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3bf1c127982233bef8ac299d5359f30aa0eaf013f8fca0645873b8e29393719
+ size 4995693256
model-00012-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4abbf8125860c87189dc4f37625fdba5b0c51af52eb2644f70836d9a4776f169
+ size 4995693272
model-00013-of-00013.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9458d10c7e999db805c5fa6ffa778cc0dc63478ea4210a942759358736bebf1d
+ size 3239751000
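Each shard entry above is a git-lfs pointer file: three `key value` lines giving the spec version, a `sha256` object id, and the payload size in bytes (roughly 63 GB across the 13 shards). A small sketch (`parse_lfs_pointer` is a hypothetical helper, not part of git-lfs itself) for reading such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    # A git-lfs pointer is a short text file of "key value" lines;
    # split each line at the first space.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:9458d10c7e999db805c5fa6ffa778cc0dc63478ea4210a942759358736bebf1d\n"
    "size 3239751000\n"
)
info = parse_lfs_pointer(pointer)
```

The `oid` lets a client verify the downloaded payload by hashing it, and `size` lets it preallocate or sanity-check the transfer before fetching.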
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_nemotron_h.py ADDED
@@ -0,0 +1,1736 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 HuggingFace Inc. team.
3
+ # Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """PyTorch NemotronH model."""
17
+
18
+ import math
19
+ from dataclasses import dataclass
20
+ from typing import Any, Dict, Optional, Tuple, Union
21
+
22
+ import torch
23
+ import torch.utils.checkpoint
24
+ from torch import nn
25
+ from torch.nn import CrossEntropyLoss
26
+ import torch.nn.functional as F
27
+
28
+ from transformers.activations import ACT2FN
29
+ from transformers.cache_utils import DynamicCache # we need __iter__ and __len__ of pkv
30
+ from transformers.generation import GenerationMixin
31
+ from transformers.modeling_attn_mask_utils import (
32
+ AttentionMaskConverter,
33
+ )
34
+ from transformers.modeling_utils import PreTrainedModel
35
+ from transformers.utils import (
36
+ ModelOutput,
37
+ add_code_sample_docstrings,
38
+ add_start_docstrings,
39
+ add_start_docstrings_to_model_forward,
40
+ logging,
41
+ )
42
+ from transformers.utils.import_utils import (
43
+ is_causal_conv1d_available,
44
+ is_flash_attn_2_available,
45
+ is_flash_attn_greater_or_equal_2_10,
46
+ is_mamba_2_ssm_available,
47
+ )
48
+ from .configuration_nemotron_h import NemotronHConfig
49
+
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+
54
+ # Copied from transformers.models.mamba.modeling_mamba2.modeling_mamba2.py with MAMBA2->NEMOTRONH,Mamba2->NemotronH
55
+ # For Mamba2 components Mamba2->NemotronHMamba2
56
+ if is_mamba_2_ssm_available():
57
+ from mamba_ssm.ops.triton.selective_state_update import selective_state_update
58
+ from mamba_ssm.ops.triton.ssd_combined import mamba_chunk_scan_combined, mamba_split_conv1d_scan_combined
59
+ else:
60
+ mamba_chunk_scan_combined, mamba_split_conv1d_scan_combined, selective_state_update = None, None, None
61
+
62
+ try:
63
+ #from mamba_ssm.ops.triton.layernorm_gated import RMSNorm as RMSNormGated
64
+ from mamba_ssm.ops.triton.layernorm_gated import rmsnorm_fn
65
+ except ImportError:
66
+ raise ImportError("mamba-ssm is required by the Mamba model but cannot be imported")
67
+
68
+ if is_causal_conv1d_available():
69
+ from causal_conv1d import causal_conv1d_fn, causal_conv1d_update
70
+ else:
71
+ causal_conv1d_update, causal_conv1d_fn = None, None
72
+
73
+ if is_flash_attn_2_available():
74
+ from transformers.modeling_flash_attention_utils import _flash_attention_forward
75
+
76
+ is_fast_path_available = all(
77
+ (
78
+ selective_state_update,
79
+ mamba_chunk_scan_combined,
80
+ mamba_split_conv1d_scan_combined,
81
+ causal_conv1d_fn,
82
+ causal_conv1d_update,
83
+ )
84
+ )
85
+
86
+
87
+ _CHECKPOINT_FOR_DOC = "nvidia/Nemotron-H-56B-Base-8K"
88
+ _CONFIG_FOR_DOC = "NemotronHConfig"
89
+
90
+
91
+ # Helper methods for segment sum computation
92
+
93
+
94
+ def pad_tensor_by_size(input_tensor: torch.Tensor, pad_size: int):
95
+ """
96
+ Padding x tensor with `pad_size` on the seq_len dim (dim=1)
97
+
98
+ Assumes that we only have tensors of either size 4 or 3
99
+ """
100
+ pad_shape = (0, 0, 0, 0, 0, pad_size, 0, 0) if len(input_tensor.shape) == 4 else (0, 0, 0, pad_size, 0, 0)
101
+
102
+ return torch.nn.functional.pad(input_tensor, pad_shape, mode="constant", value=0)
103
+
104
+
105
+ def reshape_into_chunks(input_tensor, pad_size, chunk_size):
106
+ """
107
+ Padding input_tensor with `pad_size` on the seq_len dim (dim=1) and
108
+ simultaneously splitting it into chunk sequences.
109
+
110
+ Assumes that we only have tensors of either size 4 or 3
111
+ """
112
+ # [bsz, seq_len, ...] -> [bsz, seq_len multiple of chunk_size, ...]
113
+ input_tensor = pad_tensor_by_size(input_tensor, pad_size)
114
+
115
+ if len(input_tensor.shape) == 3:
116
+ # [bsz, seq_len multiple of chunk_size, num_heads] -> [bsz, -1, chunk_size, num_heads]
117
+ return input_tensor.reshape(input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2])
118
+ else:
119
+ # [bsz, seq_len multiple of chunk_size, num_heads, head_dim or state_size] -> [bsz, -1, chunk_size, num_heads, head_dim or state_size]
120
+ return input_tensor.reshape(
121
+ input_tensor.shape[0], -1, chunk_size, input_tensor.shape[2], input_tensor.shape[3]
122
+ )
123
+
124
+
125
+ def segment_sum(input_tensor):
126
+ """
127
+ More stable segment sum calculation. Uses cumulative sums and masking instead of direct subtractions.
128
+ """
129
+ chunk_size = input_tensor.size(-1)
130
+ # 1. expand input tensor to have an additional dimension and repeat along that dimension
131
+ # [..., chunk_size] -> [..., chunk_size, chunk_size]
132
+ input_tensor = input_tensor[..., None].expand(*input_tensor.size(), chunk_size)
133
+ # 2. create a lower triangular mask with the diagonal set to 0 to 0 out elements above diag
134
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=-1)
135
+ input_tensor = input_tensor.masked_fill(~mask, 0)
136
+ # 3. compute actual cumsum
137
+ tensor_segsum = torch.cumsum(input_tensor, dim=-2)
138
+
139
+ # 4. apply mask to keep only the lower triangular part of the cumulative sum result (incl diagonal this time)
140
+ mask = torch.tril(torch.ones(chunk_size, chunk_size, device=input_tensor.device, dtype=torch.bool), diagonal=0)
141
+ tensor_segsum = tensor_segsum.masked_fill(~mask, -torch.inf)
142
+ return tensor_segsum
143
+
144
+
145
+ def apply_mask_to_padding_states(hidden_states, attention_mask):
146
+ """
147
+ Tunes out the hidden states for padding tokens, see https://github.com/state-spaces/mamba/issues/66
148
+ """
149
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
150
+ dtype = hidden_states.dtype
151
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
152
+
153
+ return hidden_states
154
+
155
+ # Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/models/jamba/modeling_jamba.py
156
+ class HybridMambaAttentionDynamicCache(DynamicCache):
157
+ """
158
+ A dynamic cache that can handle both the attention cache (which has a seq_len dimension) and the mamba cache
159
+ (which has a constant shape regardless of seq_len).
160
+
161
+ This cache has two sets of lists of tensors: `key_cache` and `value_cache` for attention cache and `conv_states`
162
+ and `ssm_states` for mamba cache. Each of these lists has `num_layers` tensors. The expected shape for each tensor
163
+ For attention layers, `key_cache` and `value_cache` have a shape of `(batch_size, num_heads, seq_len, head_dim)`,
164
+ while `conv_states` and `ssm_states` have a shape of `(batch_size, 0)` (empty tensors).
165
+ For mamba layers, `key_cache` and `value_cache` have a shape of `(batch_size, 0)` (empty tensors),
166
+ while `conv_states` represents the convolution state and has a shape of `(batch_size, d_inner, d_conv)`,
167
+ and `ssm_states` represents the ssm state and has a shape of `(batch_size, d_inner, d_state)`.
168
+ """
169
+
170
+ def __init__(self, config, batch_size, dtype=torch.float16, device=None):
171
+ super().__init__()
172
+ self.dtype = dtype
173
+ self.hybrid_override_pattern = config.hybrid_override_pattern
174
+ self.has_previous_state = False # only used by mamba
175
+ intermediate_size = config.mamba_num_heads * config.mamba_head_dim
176
+ ssm_state_size = config.ssm_state_size
177
+ conv_kernel_size = config.conv_kernel
178
+ self.conv_states = []
179
+ self.ssm_states = []
180
+ self.transformer_layers = []
181
+ for i in range(config.num_hidden_layers):
182
+ if self.hybrid_override_pattern[i] == "M":
183
+ # Mamba layer
184
+ self.conv_states += [
185
+ torch.zeros(batch_size, intermediate_size, conv_kernel_size, device=device, dtype=dtype)
186
+ ]
187
+ self.ssm_states += [
188
+ torch.zeros(batch_size, intermediate_size, ssm_state_size, device=device, dtype=dtype)
189
+ ]
190
+ else:
191
+ # Attention or MLP layer
192
+ self.conv_states += [torch.tensor([[]] * batch_size, device=device)]
193
+ self.ssm_states += [torch.tensor([[]] * batch_size, device=device)]
194
+ self.transformer_layers.append(i)
195
+
196
+ self.key_cache = [torch.tensor([[]] * batch_size, device=device) for _ in range(config.num_hidden_layers)]
197
+ self.value_cache = [torch.tensor([[]] * batch_size, device=device) for _ in range(config.num_hidden_layers)]
198
+
199
+ def update(
200
+ self,
201
+ key_states: torch.Tensor,
202
+ value_states: torch.Tensor,
203
+ layer_idx: int,
204
+ cache_kwargs: Optional[Dict[str, Any]] = None,
205
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
206
+ # Update the cache
207
+ if self.key_cache[layer_idx].shape[-1] == 0:
208
+ self.key_cache[layer_idx] = key_states
209
+ self.value_cache[layer_idx] = value_states
210
+ else:
211
+ self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=2)
212
+ self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value_states], dim=2)
213
+
214
+ return self.key_cache[layer_idx], self.value_cache[layer_idx]
215
+
216
+ def reorder_cache(self, beam_idx: torch.LongTensor):
217
+ """Reorders the cache for beam search, given the selected beam indices."""
218
+ for layer_idx in range(len(self.key_cache)):
219
+ device = self.key_cache[layer_idx].device
220
+ self.key_cache[layer_idx] = self.key_cache[layer_idx].index_select(0, beam_idx.to(device))
221
+ device = self.value_cache[layer_idx].device
222
+ self.value_cache[layer_idx] = self.value_cache[layer_idx].index_select(0, beam_idx.to(device))
223
+
224
+ device = self.conv_states[layer_idx].device
225
+ self.conv_states[layer_idx] = self.conv_states[layer_idx].index_select(0, beam_idx.to(device))
226
+ device = self.ssm_states[layer_idx].device
227
+ self.ssm_states[layer_idx] = self.ssm_states[layer_idx].index_select(0, beam_idx.to(device))
228
+
229
+ def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
230
+ """Returns the sequence length of the cached states. A layer index can be optionally passed."""
231
+ # fall back to the first attention layer if the given layer does not hold a KV cache
232
+ layer_idx = self.transformer_layers[0] if layer_idx not in self.transformer_layers else layer_idx
233
+ if len(self.key_cache) <= layer_idx:
234
+ return 0
235
+ return self.key_cache[layer_idx].shape[-2]
236
+
237
+ def to_legacy_cache(self) -> Tuple[Tuple[torch.Tensor], Tuple[torch.Tensor]]:
238
+ raise NotImplementedError("HybridMambaAttentionDynamicCache does not have a legacy cache equivalent.")
239
+
240
+ @classmethod
241
+ def from_legacy_cache(cls, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None) -> "DynamicCache":
242
+ raise NotImplementedError("HybridMambaAttentionDynamicCache does not have a legacy cache equivalent.")
243
+
244
+ # Copied from modeling_mamba2.py
245
+ def update_conv_state(
246
+ self, layer_idx: int, new_conv_state: torch.Tensor, cache_init: bool = False
247
+ ) -> torch.Tensor:
248
+ if cache_init:
249
+ self.conv_states[layer_idx] = new_conv_state.to(self.conv_states[layer_idx].device)
250
+ else:
251
+ self.conv_states[layer_idx] = self.conv_states[layer_idx].roll(shifts=-1, dims=-1)
252
+ self.conv_states[layer_idx][:, :, -1] = new_conv_state[:, 0, :].to(self.conv_states[layer_idx].device)
253
+ return self.conv_states[layer_idx]
254
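The non-init branch of `update_conv_state` behaves like a fixed-size FIFO window over the last `conv_kernel_size` inputs: roll everything one step toward the past, then overwrite the newest slot. A minimal pure-Python sketch of that roll-and-overwrite step (hypothetical names, no torch):

```python
def roll_conv_state(state, new_column):
    """Shift the window left by one step and write the newest column last.

    `state` is a list of length conv_kernel_size (oldest first), mirroring
    the last dim of conv_states[layer_idx]; `new_column` is the new step.
    """
    return state[1:] + [new_column]

window = [0.0, 1.0, 2.0, 3.0]  # kernel size 4
window = roll_conv_state(window, 4.0)
print(window)  # [1.0, 2.0, 3.0, 4.0]
```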
+
255
+ def update_ssm_state(self, layer_idx: int, new_ssm_state: torch.Tensor):
256
+ self.ssm_states[layer_idx] = new_ssm_state.to(self.ssm_states[layer_idx].device)
257
+ return self.ssm_states[layer_idx]
258
+
259
+ def reset(self):
260
+ # conv_states/ssm_states are Python lists of per-layer tensors; zero each entry
+ for conv_state in self.conv_states:
+ conv_state.zero_()
261
+ for ssm_state in self.ssm_states:
+ ssm_state.zero_()
262
+
263
+ class MambaRMSNormGated(torch.nn.Module):
264
+ def __init__(self, hidden_size, group_size, eps=1e-5):
265
+ super().__init__()
266
+ self.weight = nn.Parameter(torch.ones(hidden_size))
267
+ self.variance_epsilon = eps
268
+ self.group_size = group_size
269
+
270
+ # jan28b version
271
+ def forward(self, hidden_states, gate=None):
272
+ return rmsnorm_fn(x=hidden_states,
273
+ weight=self.weight,
274
+ bias=None, # No bias
275
+ z=gate,
276
+ eps=self.variance_epsilon,
277
+ group_size=self.group_size,
278
+ norm_before_gate=False
279
+ )
280
+
281
+ class NemotronHMamba2Mixer(nn.Module):
282
+ """
283
+ Compute ∆, A, B, C, and D the state space parameters and compute the `contextualized_states`.
284
+ A, D are input independent (see Mamba paper [1] Section 3.5.2 "Interpretation of A" for why A isn't selective)
285
+ ∆, B, C are input-dependent (this is a key difference between Mamba and the linear time invariant S4,
286
+ and is why Mamba is called **selective** state spaces)
287
+ """
288
+
289
+ def __init__(self, config: NemotronHConfig, layer_idx: int):
290
+ super().__init__()
291
+ self.num_heads = config.mamba_num_heads
292
+ self.hidden_size = config.hidden_size
293
+ self.ssm_state_size = config.ssm_state_size
294
+ self.conv_kernel_size = config.conv_kernel
295
+ self.intermediate_size = config.mamba_num_heads * config.mamba_head_dim
296
+ self.layer_idx = layer_idx
297
+ self.use_conv_bias = config.use_conv_bias
298
+ self.activation = config.mamba_hidden_act
299
+ self.act = ACT2FN[config.mamba_hidden_act]
300
+
301
+ self.layer_norm_epsilon = config.layer_norm_epsilon
302
+
303
+ self.n_groups = config.n_groups
304
+ self.head_dim = config.mamba_head_dim
305
+ self.chunk_size = config.chunk_size
306
+
307
+ self.time_step_limit = config.time_step_limit
308
+ self.time_step_min = config.time_step_min
309
+ self.time_step_max = config.time_step_max
310
+
311
+ self.conv_dim = self.intermediate_size + 2 * self.n_groups * self.ssm_state_size
312
+ self.conv1d = nn.Conv1d(
313
+ in_channels=self.conv_dim,
314
+ out_channels=self.conv_dim,
315
+ bias=config.use_conv_bias,
316
+ kernel_size=config.conv_kernel,
317
+ groups=self.conv_dim,
318
+ padding=config.conv_kernel - 1,
319
+ )
320
+
321
+ # projection of the input hidden states
322
+ projection_size = self.intermediate_size + self.conv_dim + self.num_heads
323
+ self.in_proj = nn.Linear(
324
+ self.hidden_size,
325
+ projection_size,
326
+ bias=config.use_bias,
327
+ )
328
+ # selective projection used to make dt, B and C input-dependent
329
+
330
+ # time step projection (discretization)
331
+ # instantiate once and copy inv_dt in init_weights of PretrainedModel
332
+ self.dt_bias = nn.Parameter(torch.ones(self.num_heads))
333
+
334
+ # S4D real initialization. These are not discretized!
335
+ # The idea is to load them, compute the discrete states, then write the updated state back; this keeps memory bounded
336
+ A = torch.arange(1, self.num_heads + 1)
337
+ self.A_log = nn.Parameter(torch.log(A))
338
+ self.A_log._no_weight_decay = True
339
+ self.norm = MambaRMSNormGated(self.intermediate_size, eps=self.layer_norm_epsilon, group_size=self.intermediate_size // self.n_groups)
340
+ self.D = nn.Parameter(torch.ones(self.num_heads))
341
+ self.D._no_weight_decay = True
342
+
343
+ self.out_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.use_bias)
344
+ self.use_bias = config.use_bias
345
+
346
+ if not is_fast_path_available:
347
+ logger.warning_once(
348
+ "The fast path is not available because one of `(selective_state_update, causal_conv1d_fn, causal_conv1d_update)`"
349
+ " is None. Falling back to the naive implementation. To install, follow https://github.com/state-spaces/mamba/#installation and"
350
+ " https://github.com/Dao-AILab/causal-conv1d"
351
+ )
352
+
353
+ def cuda_kernels_forward(
354
+ self,
355
+ hidden_states: torch.Tensor,
356
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
357
+ cache_position: Optional[torch.LongTensor] = None,
358
+ attention_mask: Optional[torch.Tensor] = None,
359
+ ):
360
+ # 1. Gated MLP's linear projection
361
+ hidden_states = apply_mask_to_padding_states(hidden_states, attention_mask)
362
+ projected_states = self.in_proj(hidden_states)
363
+
364
+ # Set up dimensions for reshapes later
365
+ batch_size, seq_len, _ = hidden_states.shape
366
+ groups_time_state_size = self.n_groups * self.ssm_state_size
367
+ d_mlp = (
368
+ projected_states.shape[-1]
369
+ - 2 * self.intermediate_size
370
+ - 2 * self.n_groups * self.ssm_state_size
371
+ - self.num_heads
372
+ ) // 2
373
+
374
+ # Single step calculations via cache
375
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
376
+ _, _, gate, hidden_states_B_C, dt = projected_states.squeeze(1).split(
377
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
378
+ )
379
+
380
+ # 2. Convolution sequence transformation
381
+ hidden_states_B_C = causal_conv1d_update(
382
+ hidden_states_B_C,
383
+ cache_params.conv_states[self.layer_idx],
384
+ self.conv1d.weight.squeeze(1),
385
+ self.conv1d.bias,
386
+ self.activation,
387
+ )
388
+
389
+ hidden_states, B, C = torch.split(
390
+ hidden_states_B_C,
391
+ [self.intermediate_size, groups_time_state_size, groups_time_state_size],
392
+ dim=-1,
393
+ )
394
+
395
+ # 3. SSM transformation
396
+ A = -torch.exp(self.A_log.float()) # (nheads,)
397
+ A = A[:, None, ...][:, :, None].expand(-1, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
398
+ dt = dt[:, :, None].expand(-1, -1, self.head_dim)
399
+ dt_bias = self.dt_bias[:, None, ...].expand(-1, self.head_dim)
400
+ D = self.D[:, None, ...].expand(-1, self.head_dim)
401
+ B = B.view(batch_size, self.n_groups, B.shape[1] // self.n_groups)
402
+ C = C.view(batch_size, self.n_groups, C.shape[1] // self.n_groups)
403
+ hidden_states_reshaped = hidden_states.view(batch_size, self.num_heads, self.head_dim)
404
+ hidden_states = selective_state_update(
405
+ cache_params.ssm_states[self.layer_idx],
406
+ hidden_states_reshaped,
407
+ dt,
408
+ A,
409
+ B,
410
+ C,
411
+ D,
412
+ z=None,
413
+ dt_bias=dt_bias,
414
+ dt_softplus=True,
415
+ )
416
+ hidden_states = hidden_states.view(batch_size, self.num_heads * self.head_dim)
417
+ hidden_states = self.norm(hidden_states, gate)
418
+
419
+ # 4. Final linear projection
420
+ out = self.out_proj(hidden_states)[:, None, ...]
421
+
422
+ # Fused calculations or step by step if no initialized cache is found
423
+ else:
424
+ A = -torch.exp(self.A_log.float()) # (num_heads) or (intermediate_size, state_size)
425
+ dt_limit_kwargs = {} if self.time_step_limit == (0.0, float("inf")) else {"dt_limit": self.time_step_limit}
426
+
427
+ # 2-4. Fused kernel for conv1d, SSM, and the final projection
428
+ if self.training and cache_params is None:
429
+ out = mamba_split_conv1d_scan_combined(
430
+ projected_states,
431
+ self.conv1d.weight.squeeze(1),
432
+ self.conv1d.bias,
433
+ self.dt_bias,
434
+ A,
435
+ D=self.D,
436
+ chunk_size=self.chunk_size,
437
+ seq_idx=None, # was seq_idx
438
+ activation=self.activation,
439
+ rmsnorm_weight=self.norm.weight,
440
+ rmsnorm_eps=self.norm.variance_epsilon,
441
+ outproj_weight=self.out_proj.weight,
442
+ outproj_bias=self.out_proj.bias,
443
+ headdim=self.head_dim,
444
+ ngroups=self.n_groups,
445
+ norm_before_gate=False,
446
+ return_final_states=False,
447
+ **dt_limit_kwargs,
448
+ )
449
+
450
+ else:
451
+ _, _, gate, hidden_states_B_C, dt = projected_states.split(
452
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
453
+ )
454
+
455
+ # 2. Convolution sequence transformation
456
+ # Init cache
457
+ if cache_params is not None:
458
+ hidden_states_B_C_transposed = hidden_states_B_C.transpose(1, 2)
459
+ conv_states = nn.functional.pad(
460
+ hidden_states_B_C_transposed,
461
+ (cache_params.conv_kernel_size - hidden_states_B_C_transposed.shape[-1], 0),
462
+ )
463
+ cache_params.update_conv_state(
464
+ layer_idx=self.layer_idx, new_conv_state=conv_states, cache_init=True
465
+ )
466
+
467
+ if self.activation not in ["silu", "swish"]:
468
+ hidden_states_B_C = self.act(
469
+ self.conv1d(hidden_states_B_C.transpose(1, 2))[..., :seq_len].transpose(1, 2)
470
+ )
471
+ else:
472
+ hidden_states_B_C = causal_conv1d_fn(
473
+ x=hidden_states_B_C.transpose(1, 2),
474
+ weight=self.conv1d.weight.squeeze(1),
475
+ bias=self.conv1d.bias,
476
+ activation=self.activation,
477
+ ).transpose(1, 2)
478
+ hidden_states_B_C = apply_mask_to_padding_states(hidden_states_B_C, attention_mask)
479
+ hidden_states, B, C = torch.split(
480
+ hidden_states_B_C,
481
+ [self.intermediate_size, groups_time_state_size, groups_time_state_size],
482
+ dim=-1,
483
+ )
484
+
485
+ # 3. SSM transformation
486
+ scan_output, ssm_state = mamba_chunk_scan_combined(
487
+ hidden_states.view(batch_size, seq_len, -1, self.head_dim),
488
+ dt,
489
+ A,
490
+ B.view(batch_size, seq_len, self.n_groups, -1),
491
+ C.view(batch_size, seq_len, self.n_groups, -1),
492
+ chunk_size=self.chunk_size,
493
+ D=self.D,
494
+ z=None,
495
+ seq_idx=None,
496
+ return_final_states=True,
497
+ dt_bias=self.dt_bias,
498
+ dt_softplus=True,
499
+ **dt_limit_kwargs,
500
+ )
501
+
502
+ # Init cache
503
+ if ssm_state is not None and cache_params is not None:
504
+ cache_params.update_ssm_state(layer_idx=self.layer_idx, new_ssm_state=ssm_state)
505
+
506
+ scan_output = scan_output.view(batch_size, seq_len, -1)
507
+
508
+ # Multiply "gate" branch and apply extra normalization layer
509
+ scan_output = self.norm(scan_output, gate)
510
+
511
+ # 4. Final linear projection
512
+ out = self.out_proj(scan_output)
513
+ return out
514
+
515
+ # fmt: off
516
+ def torch_forward(self, input_states, cache_params: Optional[HybridMambaAttentionDynamicCache]=None, cache_position:Optional[torch.LongTensor]=None, attention_mask: Optional[torch.Tensor]=None):
517
+ batch_size, seq_len, _ = input_states.shape
518
+ dtype = input_states.dtype
519
+
520
+ # 1. Gated MLP's linear projection
521
+ input_states = apply_mask_to_padding_states(input_states, attention_mask)
522
+ projected_states = self.in_proj(input_states)
523
+ d_mlp = (projected_states.shape[-1] - 2 * self.intermediate_size - 2 * self.n_groups * self.ssm_state_size-self.num_heads) // 2
524
+ _, _, gate, hidden_states_B_C, dt = projected_states.split(
525
+ [d_mlp, d_mlp, self.intermediate_size, self.conv_dim, self.num_heads], dim=-1
526
+ )
527
+
528
+ # 2. Convolution sequence transformation
529
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
530
+ cache_params.update_conv_state(layer_idx=self.layer_idx, new_conv_state=hidden_states_B_C, cache_init=False)
531
+
532
+ # We need to guarantee that anything regarding the cache is on the same device
533
+ conv_states = cache_params.conv_states[self.layer_idx].to(device=self.conv1d.weight.device)
534
+
535
+ hidden_states_B_C = torch.sum(
536
+ conv_states * self.conv1d.weight.squeeze(1), dim=-1
537
+ )
538
+ if self.use_conv_bias:
539
+ hidden_states_B_C = hidden_states_B_C + self.conv1d.bias
540
+ hidden_states_B_C = self.act(hidden_states_B_C)
541
+ else:
542
+ # Init cache
543
+ if cache_params is not None:
544
+ hidden_states_B_C_transposed = hidden_states_B_C.transpose(1, 2)
545
+ conv_states = nn.functional.pad(
546
+ hidden_states_B_C_transposed, (cache_params.conv_kernel_size - hidden_states_B_C_transposed.shape[-1], 0)
547
+ )
548
+ cache_params.update_conv_state(layer_idx=self.layer_idx, new_conv_state=conv_states, cache_init=True)
549
+
550
+ hidden_states_B_C = self.act(self.conv1d(hidden_states_B_C.transpose(1, 2))[..., :seq_len].transpose(1, 2))
551
+
552
+ hidden_states_B_C = apply_mask_to_padding_states(hidden_states_B_C, attention_mask)
553
+ hidden_states, B, C = torch.split(
554
+ hidden_states_B_C,
555
+ [self.intermediate_size, self.n_groups * self.ssm_state_size, self.n_groups * self.ssm_state_size],
556
+ dim=-1
557
+ )
558
+
559
+ # 3. SSM transformation
560
+ A = -torch.exp(self.A_log.float()) # [num_heads]
561
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
562
+ # We need to guarantee that anything regarding the cache is on the same device
563
+ cache_device = cache_params.ssm_states[self.layer_idx].device
564
+
565
+ # Note: there is no need to pad parameter matrices here, as there is just one new token
566
+ # for batched generation
567
+ dt = dt[:, 0, :][:, None, ...]
568
+ dt = dt.transpose(1, 2).expand(batch_size, dt.shape[-1], self.head_dim)
569
+ # [num_heads] -> [num_heads, head_dim]
570
+ dt_bias = self.dt_bias[..., None].expand(self.dt_bias.shape[0], self.head_dim)
571
+
572
+ dt = torch.nn.functional.softplus(dt + dt_bias.to(dt.dtype))
573
+ dt = torch.clamp(dt, self.time_step_limit[0], self.time_step_limit[1])
574
+ A = A[..., None, None].expand(self.num_heads, self.head_dim, self.ssm_state_size).to(dtype=torch.float32)
575
+ # [bsz, num_heads, head_dim, state_size]
576
+ dA = (torch.exp(dt[..., None] * A)).to(device=cache_device)
577
+
578
+ # Discretize B
579
+ # [bsz, n_groups * state_size] -> [bsz, n_groups, 1, state_size] ->
580
+ # -> [bsz, n_groups, group to head repetition factor, state_size] -> [bsz, num_heads, state_size]
581
+ B = B.reshape(batch_size, self.n_groups, -1)[..., None, :]
582
+ B = B.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, B.shape[-1]).contiguous()
583
+ B = B.reshape(batch_size, -1, B.shape[-1])
584
+ # [bsz, num_heads, head_dim, state_size]
585
+ dB = dt[..., None] * B[..., None, :]
586
+
587
+ # Discretize x into dB
588
+ # [bsz, intermediate_size] -> [bsz, num_heads, head_dim]
589
+ hidden_states = hidden_states.reshape(batch_size, -1, self.head_dim)
590
+ dBx = (dB * hidden_states[..., None]).to(device=cache_device)
591
+
592
+ # State calculation
593
+ cache_params.update_ssm_state(
594
+ layer_idx=self.layer_idx,
595
+ new_ssm_state=cache_params.ssm_states[self.layer_idx] * dA + dBx
596
+ )
597
+
598
+ # Subsequent output
599
+ # [bsz, n_groups * state_size] -> [bsz, num_heads, state_size]
600
+ C = C.reshape(batch_size, self.n_groups, -1)[..., None, :]
601
+ C = C.expand(batch_size, self.n_groups, self.num_heads // self.n_groups, C.shape[-1]).contiguous()
602
+ C = C.reshape(batch_size, -1, C.shape[-1])
603
+ # [bsz, num_heads, head_dim]
604
+
605
+ ssm_states = cache_params.ssm_states[self.layer_idx].to(device=C.device, dtype=C.dtype) # Shape: [b, h, d, n]
606
+ # Reshape ssm_states to merge the first two dimensions
607
+ ssm_states_reshaped = ssm_states.view(batch_size * self.num_heads, self.head_dim, self.ssm_state_size) # Shape: [b*h, d, n]
608
+ C_reshaped = C.view(batch_size * self.num_heads, self.ssm_state_size, 1) # Shape: [b*h, n, 1]
609
+ y = torch.bmm(ssm_states_reshaped, C_reshaped)
610
+ y = y.view(batch_size, self.num_heads, self.head_dim)
611
+
612
+ # D skip connection
613
+ # [num_heads] -> [num_heads, head_dim]
614
+ D = self.D[..., None].expand(self.D.shape[0], self.head_dim)
615
+ y = (y + hidden_states * D).to(y.dtype)
616
+
617
+ # [bsz, num_heads, head_dim] -> [bsz, 1, intermediate_size]
618
+ y = y.reshape(batch_size, -1)[:, None, ...]
619
+ else:
620
+ # begin ssd naive implementation without einsums
621
+ dt = nn.functional.softplus(dt + self.dt_bias)
622
+ dt = torch.clamp(dt, self.time_step_limit[0], self.time_step_limit[1])
623
+ hidden_states = hidden_states.reshape(batch_size, seq_len, -1, self.head_dim).float()
624
+ B = B.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
625
+ C = C.reshape(batch_size, seq_len, -1, self.ssm_state_size).float()
626
+ B = B.repeat_interleave(self.num_heads // self.n_groups, dim=2, output_size=self.num_heads)
627
+ C = C.repeat_interleave(self.num_heads // self.n_groups, dim=2, output_size=self.num_heads)
628
+ pad_size = (self.chunk_size - seq_len % self.chunk_size) % self.chunk_size
629
+
630
+ D_residual = self.D[..., None] * pad_tensor_by_size(hidden_states, pad_size)
631
+
632
+ # Discretize x and A
633
+ hidden_states = hidden_states * dt[..., None]
634
+ A = A.to(hidden_states.dtype) * dt
635
+
636
+ # Rearrange into blocks/chunks
637
+ hidden_states, A, B, C = [reshape_into_chunks(t, pad_size, self.chunk_size) for t in (hidden_states, A, B, C)]
638
+
639
+ # [bsz, -1, chunk_size, num_heads] -> [bsz, num_heads, -1, chunk_size]
640
+ A = A.permute(0, 3, 1, 2)
641
+ A_cumsum = torch.cumsum(A, dim=-1)
642
+
643
+ # 1. Compute the output for each intra-chunk (diagonal blocks)
644
+ # This is the analog of a causal mask
645
+ L = torch.exp(segment_sum(A))
646
+
647
+ # Contraction of C and B to get G (attention-weights like)
648
+ G_intermediate = C[:, :, :, None, :, :] * B[:, :, None, :, :, :] # shape: (b, c, l, s, h, n)
649
+ G = G_intermediate.sum(dim=-1) # shape: (b, c, l, s, h)
650
+
651
+ # Compute M, equivalent to applying attention mask to weights
652
+ M_intermediate = G[..., None] * L.permute(0, 2, 3, 4, 1)[..., None]
653
+ M = M_intermediate.sum(dim=-1)
654
+
655
+ # Compute Y_diag (apply to values)
656
+ Y_diag = (M[..., None] * hidden_states[:, :, None]).sum(dim=3)
657
+
658
+ # 2. Compute the state for each intra-chunk
659
+ # (right term of low-rank factorization of off-diagonal blocks; B terms)
660
+ decay_states = torch.exp((A_cumsum[:, :, :, -1:] - A_cumsum))
661
+ B_decay = B * decay_states.permute(0, -2, -1, 1)[..., None]
662
+ states = (B_decay[..., None, :] * hidden_states[..., None]).sum(dim=2)
663
+
664
+ # 3. Compute the inter-chunk SSM recurrence; produces correct SSM states at chunk boundaries
665
+ # (middle term of factorization of off-diag blocks; A terms)
666
+ if cache_params is not None and cache_position is not None and cache_position[0] > 0:
667
+ previous_states = cache_params.ssm_states[self.layer_idx][:, None, ...].to(device=states.device)
668
+ else:
669
+ previous_states = torch.zeros_like(states[:, :1])
670
+ states = torch.cat([previous_states, states], dim=1)
671
+ decay_chunk = torch.exp(segment_sum(nn.functional.pad(A_cumsum[:, :, :, -1], (1, 0))))
672
+ decay_chunk = decay_chunk.transpose(1, 3)
673
+ new_states = (decay_chunk[..., None, None] * states[:, :, None, ...]).sum(dim=1)
674
+ states, ssm_state = new_states[:, :-1], new_states[:, -1]
675
+
676
+ # 4. Compute state -> output conversion per chunk
677
+ # (left term of low-rank factorization of off-diagonal blocks; C terms)
678
+ state_decay_out = torch.exp(A_cumsum)
679
+ C_times_states = (C[..., None, :] * states[:, :, None, ...])
680
+ state_decay_out_permuted = state_decay_out.permute(0, 2, 3, 1)
681
+ Y_off = (C_times_states.sum(-1) * state_decay_out_permuted[..., None])
682
+
683
+ # Add output of intra-chunk and inter-chunk terms (diagonal and off-diagonal blocks)
684
+ y = Y_diag + Y_off
685
+ # [bsz, -1, self.chunk_size, num_heads, head_dim] -> [bsz, (padded) seq_len, num_heads, head_dim]
686
+ y = y.reshape(batch_size, -1, self.num_heads, self.head_dim)
687
+
688
+ y = y + D_residual
689
+ # Cutting off padded chunks
690
+ if pad_size > 0:
691
+ y = y[:, :seq_len, :, :]
692
+ y = y.reshape(batch_size, seq_len, -1)
693
+
694
+ # Init cache
695
+ if ssm_state is not None and cache_params is not None:
696
+ cache_params.update_ssm_state(layer_idx=self.layer_idx, new_ssm_state=ssm_state)
697
+
698
+ scan_output = self.norm(y, gate)
699
+
700
+ # end ssd naive
701
+
702
+ # 4. Final linear projection
703
+ contextualized_states = self.out_proj(scan_output.to(dtype)) # [batch, seq_len, hidden_size]
704
+ return contextualized_states
705
+ # fmt: on
706
+
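Both branches of `torch_forward` implement the same discretized SSM recurrence: the state decays by `exp(dt * A)` while accumulating `dt * B * x`, and the output reads the state through `C` plus a `D` skip connection. A scalar, single-head sketch of one recurrence step (purely illustrative; the real code is batched per head and state channel):

```python
import math

def ssm_step(h, x, dt, A, B, C, D):
    dA = math.exp(dt * A)      # state decay (A < 0, so |dA| < 1)
    h = h * dA + dt * B * x    # discretized state update (h*dA + dBx above)
    y = C * h + D * x          # readout through C plus D skip connection
    return h, y

h = 0.0
for x in [1.0, 0.5, -0.25]:
    h, y = ssm_step(h, x, dt=0.1, A=-1.0, B=1.0, C=1.0, D=1.0)
print(round(h, 4), round(y, 4))
```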
707
+ def forward(
708
+ self,
709
+ hidden_states,
710
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
711
+ cache_position: Optional[torch.LongTensor] = None,
712
+ attention_mask: Optional[torch.Tensor] = None,
713
+ ):
714
+ if is_fast_path_available and "cuda" in self.in_proj.weight.device.type:
715
+ return self.cuda_kernels_forward(hidden_states, cache_params, cache_position, attention_mask)
716
+ dtype = hidden_states.dtype
717
+ if attention_mask is not None and attention_mask.shape[1] > 1 and attention_mask.shape[0] > 1:
718
+ # tune out hidden states for pad tokens, see https://github.com/state-spaces/mamba/issues/66
719
+ hidden_states = (hidden_states * attention_mask[:, :, None]).to(dtype)
720
+
721
+ return self.torch_forward(hidden_states, cache_params, cache_position, attention_mask)
722
+
723
+
724
+ class NemotronHRMSNorm(nn.Module):
725
+ def __init__(self, hidden_size, eps=1e-6):
726
+ """
727
+ NemotronHRMSNorm is equivalent to T5LayerNorm and LlamaRMSNorm
728
+ """
729
+ super().__init__()
730
+ self.weight = nn.Parameter(torch.ones(hidden_size))
731
+ self.variance_epsilon = eps
732
+
733
+ def forward(self, hidden_states):
734
+ input_dtype = hidden_states.dtype
735
+ hidden_states = hidden_states.to(torch.float32)
736
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
737
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
738
+ # Weights are in float32
739
+ return (self.weight.to(torch.float32) * hidden_states).to(input_dtype)
740
+
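`NemotronHRMSNorm` divides each hidden vector by the root of its mean square (plus epsilon), then applies the learned per-channel weight. A numeric sketch of the same computation in plain Python, with illustrative values:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # variance here is the mean of squares over the hidden dimension
    variance = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(variance + eps)
    return [w * v * inv_rms for w, v in zip(weight, x)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
# mean square = 12.5, so each element is divided by ~3.5355
print([round(v, 4) for v in out])  # ≈ [0.8485, 1.1314]
```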
741
+ class NemotronHBlock(nn.Module):
742
+ def __init__(self, config, layer_idx):
743
+ super().__init__()
744
+ self.config = config
745
+ self.layer_idx = layer_idx
746
+ self.residual_in_fp32 = config.residual_in_fp32
747
+ self.norm = NemotronHRMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
748
+
749
+ # M: Mamba2, *: Attention, -: MLP
750
+ self.block_type = config.layers_block_type[layer_idx]
751
+ if self.block_type == "mamba":
752
+ self.mixer = NemotronHMamba2Mixer(config, layer_idx=layer_idx)
753
+ elif self.block_type == "attention":
754
+ self.mixer = NEMOTRONH_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx=layer_idx)
755
+ elif self.block_type == "mlp":
756
+ self.mixer = NemotronHMLP(config, layer_idx=layer_idx)
757
+ elif self.block_type == "moe":
758
+ self.mixer = NemotronHMOE(config, layer_idx=layer_idx)
759
+ else:
760
+ raise ValueError(f"Invalid layer pattern {config.hybrid_override_pattern[layer_idx]}")
761
+
762
+ def forward(
763
+ self,
764
+ hidden_states,
765
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
766
+ cache_position: Optional[torch.LongTensor] = None,
767
+ attention_mask: Optional[torch.Tensor] = None,
768
+ ):
769
+ with torch.cuda.stream(torch.cuda.default_stream(hidden_states.device)):
770
+ # * Use torch.cuda.stream() to avoid NaN issues when using multiple GPUs
771
+ residual = hidden_states
772
+ hidden_states = self.norm(hidden_states.to(dtype=self.norm.weight.dtype))
773
+ if self.residual_in_fp32:
774
+ residual = residual.to(torch.float32)
775
+
776
+ if self.block_type == "mamba":
777
+ hidden_states = self.mixer(
778
+ hidden_states, cache_params=cache_params, cache_position=cache_position
779
+ )
780
+ elif self.block_type == "attention":
781
+ hidden_states = self.mixer(
782
+ hidden_states, cache_position=cache_position
783
+ )
784
+ hidden_states = hidden_states[0]
785
+ elif self.block_type in ["mlp", "moe"]:
786
+ hidden_states = self.mixer(
787
+ hidden_states
788
+ )
789
+ else:
790
+ raise ValueError(f"Invalid block_type: {self.block_type}")
791
+
792
+ hidden_states = residual + hidden_states
793
+ return hidden_states
794
+
795
+
796
+ # Copied from transformers.models.nemotron.modeling_nemotron Nemotron->NemotronH
797
+ class NemotronHMLP(nn.Module):
798
+ def __init__(self, config, intermediate_size=None, layer_idx: Optional[int] = None):
799
+ super().__init__()
800
+ self.config = config
801
+ self.layer_idx = layer_idx
802
+ if layer_idx is None:
803
+ logger.warning_once(
804
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
805
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
806
+ "when creating this class."
807
+ )
808
+ self.hidden_size = config.hidden_size
809
+ self.intermediate_size = intermediate_size or config.intermediate_size
810
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=config.mlp_bias)
811
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.mlp_bias)
812
+ self.act_fn = ACT2FN[config.mlp_hidden_act]
813
+
814
+ def forward(self, x):
815
+ return self.down_proj(self.act_fn(self.up_proj(x)))
816
+
817
+
818
+ class NemotronHMOE(nn.Module):
819
+ def __init__(self, config, layer_idx: Optional[int] = None):
820
+ super().__init__()
821
+ self.config = config
822
+ self.experts = nn.ModuleList(
823
+ [
824
+ NemotronHMLP(config, intermediate_size=config.moe_intermediate_size, layer_idx=layer_idx)
825
+ for _ in range(config.n_routed_experts)
826
+ ]
827
+ )
828
+ self.gate = NemotronHTopkRouter(config)
829
+ self.shared_experts = NemotronHMLP(
830
+ config=config, intermediate_size=config.moe_shared_expert_intermediate_size, layer_idx=layer_idx
831
+ )
832
+
833
+ def moe(self, hidden_states: torch.Tensor, topk_indices: torch.Tensor, topk_weights: torch.Tensor):
834
+ r"""
835
+ CALL FOR CONTRIBUTION! This is not optimized yet: expert weights should be fused so we
836
+ do not have to loop over experts here (DeepSeek has 256 experts).
837
+ """
838
+ final_hidden_states = torch.zeros_like(hidden_states, dtype=topk_weights.dtype)
839
+ expert_mask = torch.nn.functional.one_hot(topk_indices, num_classes=len(self.experts))
840
+ expert_mask = expert_mask.permute(2, 0, 1)
841
+
842
+ for expert_idx in range(len(self.experts)):
843
+ expert = self.experts[expert_idx]
844
+ mask = expert_mask[expert_idx]
845
+ token_indices, weight_indices = torch.where(mask)
846
+
847
+ if token_indices.numel() > 0:
848
+ expert_weights = topk_weights[token_indices, weight_indices]
849
+ expert_input = hidden_states[token_indices]
850
+ expert_output = expert(expert_input)
851
+ weighted_output = expert_output * expert_weights.unsqueeze(-1)
852
+ final_hidden_states.index_add_(0, token_indices, weighted_output)
853
+ else:
854
+ # Local empty expert: no-op compute that still marks params as used.
855
+ expert_dtype = expert.down_proj.weight.dtype
856
+ dummy_out = expert(torch.zeros_like(hidden_states[0]).unsqueeze(0).to(expert_dtype))
857
+ final_hidden_states = final_hidden_states + dummy_out
858
+
859
+ # in original deepseek, the output of the experts are gathered once we leave this module
860
+ # thus the MoE module is itself an IsolatedParallel module
861
+ # and all experts are "local", meaning we shard but do not gather
862
+ return final_hidden_states.type(hidden_states.dtype)
863
+
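Stripped of the one-hot masking and `index_add_` machinery, the dispatch loop above computes, for each token, the gate-weight-scaled sum of the outputs of the experts it was routed to. A toy sketch with hypothetical expert callables (no torch):

```python
def moe_dispatch(tokens, topk_indices, topk_weights, experts):
    """tokens: per-token values; topk_indices/topk_weights: each token's
    routed expert ids and gate weights; experts: list of callables."""
    outputs = [0.0 for _ in tokens]
    for t, (ids, ws) in enumerate(zip(topk_indices, topk_weights)):
        for e, w in zip(ids, ws):
            outputs[t] += w * experts[e](tokens[t])  # weighted expert output
    return outputs

experts = [lambda x: x + 1, lambda x: 2 * x]
out = moe_dispatch([1.0, 3.0], [[0, 1], [1, 1]], [[0.5, 0.5], [1.0, 0.0]], experts)
print(out)  # token 0: 0.5*2 + 0.5*2 = 2.0; token 1: 1.0*6 + 0.0*6 = 6.0
```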
864
+ def forward(self, hidden_states):
865
+ residuals = hidden_states
866
+ orig_shape = hidden_states.shape
867
+ topk_indices, topk_weights = self.gate(hidden_states)
868
+ hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
869
+ hidden_states = self.moe(hidden_states, topk_indices, topk_weights).view(*orig_shape)
870
+ hidden_states = hidden_states + self.shared_experts(residuals)
871
+ return hidden_states
872
+
873
+
874
+ class NemotronHTopkRouter(nn.Module):
875
+ def __init__(self, config):
876
+ super().__init__()
877
+ self.config = config
878
+ self.top_k = config.num_experts_per_tok
879
+ self.n_routed_experts = config.n_routed_experts
880
+ self.routed_scaling_factor = config.routed_scaling_factor
881
+ self.n_group = config.n_group
882
+ self.topk_group = config.topk_group
883
+ self.norm_topk_prob = config.norm_topk_prob
884
+
885
+ self.weight = nn.Parameter(torch.empty((self.n_routed_experts, config.hidden_size), dtype=torch.float32))
886
+ self.register_buffer("e_score_correction_bias", torch.zeros(self.n_routed_experts, dtype=torch.float32))
887
+
888
+ @torch.no_grad()
889
+ def get_topk_indices(self, scores):
890
+ scores_for_choice = scores.view(-1, self.n_routed_experts) + self.e_score_correction_bias.unsqueeze(0)
891
+ group_scores = (
892
+ scores_for_choice.view(-1, self.n_group, self.n_routed_experts // self.n_group)
893
+ .topk(2, dim=-1)[0]
894
+ .sum(dim=-1)
895
+ )
896
+ group_idx = torch.topk(group_scores, k=self.topk_group, dim=-1, sorted=False)[1]
897
+ group_mask = torch.zeros_like(group_scores)
898
+ group_mask.scatter_(1, group_idx, 1)
899
+ score_mask = (
900
+ group_mask.unsqueeze(-1)
901
+ .expand(-1, self.n_group, self.n_routed_experts // self.n_group)
902
+ .reshape(-1, self.n_routed_experts)
903
+ )
904
+ scores_for_choice = scores_for_choice.masked_fill(~score_mask.bool(), 0.0)
905
+ topk_indices = torch.topk(scores_for_choice, k=self.top_k, dim=-1, sorted=False)[1]
906
+ return topk_indices
907
+
908
+ def forward(self, hidden_states):
909
+ hidden_states = hidden_states.view(-1, self.config.hidden_size)
910
+ router_logits = F.linear(hidden_states.type(torch.float32), self.weight.type(torch.float32))
911
+ scores = router_logits.sigmoid()
912
+ topk_indices = self.get_topk_indices(scores)
913
+ topk_weights = scores.gather(1, topk_indices)
914
+ if self.norm_topk_prob:
915
+ denominator = topk_weights.sum(dim=-1, keepdim=True) + 1e-20
916
+ topk_weights /= denominator
917
+ topk_weights = topk_weights * self.routed_scaling_factor
918
+ return topk_indices, topk_weights
919
+
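The router's `get_topk_indices` is a two-stage selection: first keep the best `topk_group` groups (each group scored by the sum of its two largest expert scores), zero out experts in the discarded groups, then take a global top-k over what remains. A small pure-Python sketch of that two-stage selection (illustrative sizes, bias term omitted):

```python
def grouped_topk(scores, n_group, topk_group, top_k):
    group_size = len(scores) // n_group
    groups = [scores[g * group_size:(g + 1) * group_size] for g in range(n_group)]
    # score each group by the sum of its two largest expert scores
    group_scores = [sum(sorted(g, reverse=True)[:2]) for g in groups]
    kept = sorted(range(n_group), key=lambda g: group_scores[g], reverse=True)[:topk_group]
    # mask out experts whose group was not kept, then take the global top-k
    masked = [s if i // group_size in kept else 0.0 for i, s in enumerate(scores)]
    return sorted(range(len(scores)), key=lambda i: masked[i], reverse=True)[:top_k]

# 8 experts in 4 groups of 2; keep the 2 best groups, then pick the top-2 experts
idx = grouped_topk([0.1, 0.9, 0.2, 0.3, 0.8, 0.7, 0.4, 0.0],
                   n_group=4, topk_group=2, top_k=2)
print(sorted(idx))  # experts 1 and 4 survive
```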
920
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv
921
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
922
+ """
923
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
924
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
925
+ """
926
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
927
+ if n_rep == 1:
928
+ return hidden_states
929
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
930
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
931
+
932
+
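`repeat_kv` expands each key/value head `n_rep` times along the head axis so grouped-query attention can reuse KV heads across query heads. A list-based sketch of the same expand-then-flatten, without torch:

```python
def repeat_kv_heads(heads, n_rep):
    # each head is repeated n_rep times in place, preserving head order,
    # matching expand + reshape over the num_key_value_heads axis
    return [h for h in heads for _ in range(n_rep)]

print(repeat_kv_heads(["k0", "k1"], 3))
# ['k0', 'k0', 'k0', 'k1', 'k1', 'k1']
```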
933
+ class NemotronHAttention(nn.Module):
934
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
935
+
936
+ def __init__(self, config: NemotronHConfig, layer_idx: Optional[int] = None):
937
+ super().__init__()
938
+ self.config = config
939
+ self.layer_idx = layer_idx
940
+ if layer_idx is None:
941
+ logger.warning_once(
942
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
943
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
944
+ "when creating this class."
945
+ )
946
+
947
+ self.attention_dropout = config.attention_dropout
948
+ self.hidden_size = config.hidden_size
949
+ self.num_heads = config.num_attention_heads
950
+ if hasattr(config, "head_dim") and config.head_dim is not None:
951
+ self.head_dim = config.head_dim
952
+ else:
953
+ self.head_dim = config.hidden_size // self.num_heads
954
+ self.num_key_value_heads = config.num_key_value_heads
955
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
956
+ self.max_position_embeddings = config.max_position_embeddings
957
+ self.is_causal = True
958
+
959
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
960
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
961
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
962
+ self.o_proj = nn.Linear(self.head_dim * self.num_heads, self.hidden_size, bias=config.attention_bias)
963
+
964
+ def forward(
965
+ self,
966
+ hidden_states: torch.Tensor,
967
+ # position_embeddings: Tuple[torch.Tensor, torch.Tensor], #TODO
968
+ attention_mask: Optional[torch.Tensor] = None,
969
+ position_ids: Optional[torch.LongTensor] = None,
970
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
971
+ output_attentions: bool = False,
972
+ use_cache: bool = False,
973
+ cache_position: Optional[torch.LongTensor] = None,
974
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
975
+ bsz, q_len, _ = hidden_states.size()
976
+
977
+ query_states = self.q_proj(hidden_states)
978
+ key_states = self.k_proj(hidden_states)
979
+ value_states = self.v_proj(hidden_states)
980
+
981
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
982
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
983
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
984
+
985
+ if past_key_value is not None:
986
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
987
+
988
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
989
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
990
+
991
+ causal_mask = attention_mask
992
+ if attention_mask is not None: # no matter the length, we just slice it
993
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
994
+
995
+ if query_states.device.type == "cuda" and attention_mask is not None:
996
+ query_states = query_states.contiguous()
997
+ key_states = key_states.contiguous()
998
+ value_states = value_states.contiguous()
999
+
1000
+ is_causal = True if causal_mask is None and q_len > 1 else False
1001
+
1002
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
1003
+ query_states,
1004
+ key_states,
1005
+ value_states,
1006
+ attn_mask=causal_mask,
1007
+ dropout_p=self.attention_dropout if self.training else 0.0,
1008
+ is_causal=is_causal,
1009
+ )
1010
+ attn_output = attn_output.transpose(1, 2).contiguous()
1011
1012
+ attn_output = attn_output.view(bsz, q_len, self.num_heads * self.head_dim)
1013
+
1014
+ attn_output = self.o_proj(attn_output)
1015
+
1016
+ return attn_output, None, past_key_value
1017
+
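The eager path above ends in `torch.nn.functional.scaled_dot_product_attention` over `(batch, num_heads, seq_len, head_dim)` tensors; a minimal standalone call with hypothetical sizes:

```python
import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim), as produced by the view/transpose above
q = torch.randn(1, 2, 5, 8)
k = torch.randn(1, 2, 5, 8)
v = torch.randn(1, 2, 5, 8)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
assert out.shape == (1, 2, 5, 8)
# With a causal mask, the first query position can only attend to the first key,
# so its output is exactly the first value vector.
assert torch.allclose(out[:, :, 0], v[:, :, 0], atol=1e-5)
```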
1018
+
1019
+ # Adapted from transformers.models.mistral.modeling_mistral.MistralFlashAttention2 with Mistral->Jamba
1020
1021
+ class NemotronHFlashAttention2(NemotronHAttention):
1022
+ """
1023
+ Jamba flash attention module. This module inherits from `JambaAttention` as the weights of the module stays
1024
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
1025
+ flash attention and deal with padding tokens in case the input contains any of them.
1026
+ """
1027
+ def __init__(self, *args, **kwargs):
1028
+ super().__init__(*args, **kwargs)
1029
+
1030
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
1031
+ # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignement, that was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
1032
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
1033
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
1034
+
1035
+ def forward(
1036
+ self,
1037
+ hidden_states: torch.Tensor,
1038
+ attention_mask: Optional[torch.Tensor] = None,
1039
+ position_ids: Optional[torch.LongTensor] = None,
1040
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
1041
+ output_attentions: bool = False,
1042
+ use_cache: bool = False,
1043
+ cache_position: Optional[torch.LongTensor] = None,
1044
+ **kwargs,
1045
+ ):
1046
+ bsz, q_len, _ = hidden_states.size()
1047
+
1048
+ query_states = self.q_proj(hidden_states)
1049
+ key_states = self.k_proj(hidden_states)
1050
+ value_states = self.v_proj(hidden_states)
1051
+
1052
+ # Flash attention requires the input to have the shape
1053
+ # batch_size x seq_length x num_heads x head_dim
1054
+ # therefore we just need to keep the original shape
1055
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim)
1056
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1057
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1058
+
1059
+ if past_key_value is not None:
1060
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
1061
+
1062
+ # repeat k/v heads if n_kv_heads < n_heads
1063
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
1064
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
1065
+ dropout_rate = 0.0 if not self.training else self.attention_dropout
1066
+
1067
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
1068
+ # therefore the input hidden states get silently cast to float32. Hence, we need to
+ # cast them back to the correct dtype just to be sure everything works as expected.
1070
+ input_dtype = query_states.dtype
1071
+ if input_dtype == torch.float32:
1072
+ if torch.is_autocast_enabled():
1073
+ target_dtype = torch.get_autocast_gpu_dtype()
1074
+ # Handle the case where the model is quantized
1075
+ elif hasattr(self.config, "_pre_quantization_dtype"):
1076
+ target_dtype = self.config._pre_quantization_dtype
1077
+ else:
1078
+ target_dtype = self.q_proj.weight.dtype
1079
+
1080
+ logger.warning_once(
1081
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
1082
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
1083
+ f" {target_dtype}."
1084
+ )
1085
+
1086
+ query_states = query_states.to(target_dtype)
1087
+ key_states = key_states.to(target_dtype)
1088
+ value_states = value_states.to(target_dtype)
1089
+
1090
+ # Reshape to the expected shape for Flash Attention
1091
+ key_states = key_states.transpose(1, 2)
1092
+ value_states = value_states.transpose(1, 2)
1093
+
1094
+ attn_output = _flash_attention_forward(
1095
+ query_states,
1096
+ key_states,
1097
+ value_states,
1098
+ attention_mask,
1099
+ q_len,
1100
+ dropout=dropout_rate,
1101
+ sliding_window=getattr(self.config, "sliding_window", None),
1102
+ is_causal=self.is_causal,
1103
+ use_top_left_mask=self._flash_attn_uses_top_left_mask,
1104
+ )
1105
+
1106
1107
+ attn_output = attn_output.reshape(bsz, q_len, self.num_heads * self.head_dim).contiguous()
1108
+ attn_output = self.o_proj(attn_output)
1109
+
1110
+ # Flash attention does not return attention weights
+ attn_weights = None
1112
+
1113
+ return attn_output, attn_weights, past_key_value
1114
+
1115
+
1116
+ # Adapted from transformers.models.mistral.modeling_mistral.MistralSdpaAttention with Mistral->Jamba
1117
1118
+ class NemotronHSdpaAttention(NemotronHAttention):
1119
+ """
1120
+ Jamba attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
1121
+ `JambaAttention` as the weights of the module stays untouched. The only changes are on the forward pass to adapt to
1122
+ SDPA API.
1123
+ """
1124
+
1125
+ # Adapted from NemotronHAttention.forward
1126
+ def forward(
1127
+ self,
1128
+ hidden_states: torch.Tensor,
1129
+ attention_mask: Optional[torch.Tensor] = None,
1130
+ position_ids: Optional[torch.LongTensor] = None,
1131
+ past_key_value: Optional[HybridMambaAttentionDynamicCache] = None,
1132
+ output_attentions: bool = False,
1133
+ use_cache: bool = False,
1134
+ cache_position: Optional[torch.LongTensor] = None,
1135
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
1136
+ if output_attentions:
1137
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
1138
+ logger.warning_once(
1139
+ "NemotronHModel is using NemotronHSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
1140
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
1141
+ )
1142
+ return super().forward(
1143
+ hidden_states=hidden_states,
1144
+ attention_mask=attention_mask,
1145
+ position_ids=position_ids,
1146
+ past_key_value=past_key_value,
1147
+ output_attentions=output_attentions,
1148
+ use_cache=use_cache,
1149
+ )
1150
+
1151
+ bsz, q_len, _ = hidden_states.size()
1152
+
1153
+ query_states = self.q_proj(hidden_states)
1154
+ key_states = self.k_proj(hidden_states)
1155
+ value_states = self.v_proj(hidden_states)
1156
+
1157
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
1158
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1159
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
1160
+
1161
+ if past_key_value is not None:
1162
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx)
1163
+
1164
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
1165
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
1166
+
1167
+ causal_mask = attention_mask
1168
+ if attention_mask is not None:
1169
+ causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
1170
+
1171
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
1172
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
1173
+ if query_states.device.type == "cuda" and attention_mask is not None:
1174
+ query_states = query_states.contiguous()
1175
+ key_states = key_states.contiguous()
1176
+ value_states = value_states.contiguous()
1177
+
1178
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
1179
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
1180
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
1181
+ is_causal = True if self.is_causal and causal_mask is None and q_len > 1 else False
1182
+
1183
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
1184
+ query_states,
1185
+ key_states,
1186
+ value_states,
1187
+ attn_mask=causal_mask,
1188
+ dropout_p=self.attention_dropout if self.training else 0.0,
1189
+ is_causal=is_causal,
1190
+ )
1191
+
1192
+ attn_output = attn_output.transpose(1, 2).contiguous()
1193
+ attn_output = attn_output.view(bsz, q_len, self.num_heads * self.head_dim)
1194
+
1195
+ attn_output = self.o_proj(attn_output)
1196
+
1197
+ return attn_output, None, past_key_value
1198
+
1199
+
1200
+ NEMOTRONH_ATTENTION_CLASSES = {
1201
+ "eager": NemotronHAttention,
1202
+ "flash_attention_2": NemotronHFlashAttention2,
1203
+ "sdpa": NemotronHSdpaAttention,
1204
+ }
1205
+
1206
+ # Copied from transformers.models.mamba2.modeling_mamba2.Mamba2PreTrainedModel
1207
+ class NemotronHPreTrainedModel(PreTrainedModel):
1208
+ """
1209
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
1210
+ models.
1211
+ """
1212
+
1213
+ config_class = NemotronHConfig
1214
+ base_model_prefix = "backbone"
1215
+ _no_split_modules = ["NemotronHBlock"]
1216
+ supports_gradient_checkpointing = True
1217
+ _is_stateful = True
1218
+
1219
+ def _init_weights(self, module):
1220
+ """Initialize the weights."""
1221
+ if isinstance(module, NemotronHMamba2Mixer):
1222
+ module.A_log._no_weight_decay = True
1223
+ module.D._no_weight_decay = True
1224
+
1225
+ dt = torch.exp(
1226
+ torch.rand(self.config.mamba_num_heads)
1227
+ * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min))
1228
+ + math.log(self.config.time_step_min)
1229
+ ).clamp(min=self.config.time_step_floor)
1230
+
1231
+ # # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
1232
+ inv_dt = dt + torch.log(-torch.expm1(-dt))
1233
+ with torch.no_grad():
1234
+ module.dt_bias.copy_(inv_dt)
1235
+ module.dt_bias._no_reinit = True
1236
+
1237
+ if isinstance(module, nn.Linear):
1238
+ if module.bias is not None:
1239
+ if not getattr(module.bias, "_no_reinit", False):
1240
+ nn.init.zeros_(module.bias)
1241
+ elif isinstance(module, nn.Embedding):
1242
+ nn.init.normal_(module.weight, std=self.config.initializer_range)
1243
+
1244
+ # TODO: Check
1245
+ if self.config.rescale_prenorm_residual:
1246
+ # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
1247
+ # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
1248
+ # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
1249
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/
1250
+ #
1251
+ # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
1252
+ for name, p in module.named_parameters():
1253
+ if name in ["out_proj.weight"]:
1254
+ # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
1255
+ # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
1256
+ # We need to reinit p since this code could be called multiple times
1257
+ # Having just p *= scale would repeatedly scale it down
1258
+ nn.init.kaiming_uniform_(p, a=math.sqrt(5))
1259
+ with torch.no_grad():
1260
+ p /= math.sqrt(self.config.num_hidden_layers)
1261
+
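The `inv_dt` expression in `_init_weights` is the inverse of softplus, so `F.softplus(dt_bias)` recovers the sampled `dt`. A quick numerical check:

```python
import torch
import torch.nn.functional as F

dt = torch.tensor([0.001, 0.01, 0.1])       # sampled time steps
inv_dt = dt + torch.log(-torch.expm1(-dt))  # inverse of softplus
assert torch.allclose(F.softplus(inv_dt), dt, atol=1e-6)
```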
1262
+
1263
+ @dataclass
1264
+ # Copied from transformers.models.mamba2.modeling_mamba2.Mamba2Output with MAMBA2->NemotronH,Mamba2->NemotronH
1265
+ class NemotronHOutput(ModelOutput):
1266
+ """
1267
+ Class for the NemotronH model outputs.
1268
+
1269
+ Args:
1270
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
1271
+ Sequence of hidden-states at the output of the last layer of the model.
1272
+ cache_params (`HybridMambaAttentionDynamicCache`):
1273
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
1274
+ avoid providing the old `input_ids`.
1275
+
1276
+ Includes both the State space model state matrices after the selective scan, and the Convolutional states
1277
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
1278
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
1279
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
1280
+
1281
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
1282
+ """
1283
+
1284
+ last_hidden_state: Optional[torch.FloatTensor] = None
1285
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None
1286
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
1287
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
1288
+
1289
+
1290
+ @dataclass
1291
+ # Copied from transformers.models.mamba2.modeling_mamba2.Mamba2CausalLMOutput with Mamba2->NemotronH
1292
+ class NemotronHCausalLMOutput(ModelOutput):
1293
+ """
1294
+ Base class for causal language model (or autoregressive) outputs.
1295
+
1296
+ Args:
1297
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
1298
+ Language modeling loss (for next-token prediction).
1299
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
1300
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
1301
+ cache_params (`HybridMambaAttentionDynamicCache`):
1302
+ The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to
1303
+ avoid providing the old `input_ids`.
1304
+
1305
+ Includes both the State space model state matrices after the selective scan, and the Convolutional states
1306
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
1307
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
1308
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
1309
+
1310
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
1311
+ """
1312
+
1313
+ loss: Optional[torch.FloatTensor] = None
1314
+ logits: Optional[torch.FloatTensor] = None
1315
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None
1316
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
1317
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
1318
+
1319
+
1320
+ NEMOTRONH_START_DOCSTRING = r"""
1321
+
1322
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
1323
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
1324
+ etc.)
1325
+
1326
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
1327
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
1328
+ and behavior.
1329
+
1330
+ Parameters:
1331
+ config ([`NemotronHConfig`]): Model configuration class with all the parameters of the model.
1332
+ Initializing with a config file does not load the weights associated with the model, only the
1333
+ configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
1334
+ """
1335
+
1336
+ NEMOTRONH_INPUTS_DOCSTRING = r"""
1337
+ Args:
1338
+ input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*):
1339
+ Indices of input sequence tokens in the vocabulary.
1340
+
1341
+ If `cache_params.seqlen_offset>0`, only `input_ids` that do not have their past calculated should be passed as
1342
+ `input_ids`.
1343
+
1344
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1345
+ [`PreTrainedTokenizer.__call__`] for details.
1346
+
1347
+ [What are input IDs?](../glossary#input-ids)
1348
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
1349
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
1350
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
1351
+ model's internal embedding lookup matrix.
1352
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1353
+ Indices of positions of each input sequence tokens in the position embeddings.
1354
+ cache_params (`HybridMambaAttentionDynamicCache`, *optional*):
1355
+ If passed along, the model uses the previous state in all the blocks (which will give the output for the
1356
+ `input_ids` provided as if the model adds `state_input_ids + input_ids` as context).
1357
+ use_cache (`bool`, *optional*):
1358
+ If set to `True`, the `cache_params` is returned and can be used to quickly generate the next logits.
1359
+ output_attentions (`bool`, *optional*):
1360
+ Whether or not to return the attentions tensors of all attention layers.
1361
+ output_hidden_states (`bool`, *optional*):
1362
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1363
+ more detail.
1364
+ return_dict (`bool`, *optional*):
1365
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1366
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
1367
+ The position of the current input in the cache. This is used to ensure that the cache is correctly updated.
1368
+ If `cache_params` is passed, `cache_position` should also be passed.
1369
+ attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
1370
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1371
+
1372
+ - 1 for tokens that are **not masked**,
1373
+ - 0 for tokens that are **masked**.
1374
+
1375
+ [What are attention masks?](../glossary#attention-mask)
1376
+ """
1377
+
1378
+
1379
+ @add_start_docstrings(
1380
+ "The bare NemotronH Model transformer outputting raw hidden-states without any specific head on top.",
1381
+ NEMOTRONH_START_DOCSTRING,
1382
+ )
1383
+ class NemotronHModel(NemotronHPreTrainedModel):
1384
+ def __init__(self, config):
1385
+ super().__init__(config)
1386
+
1387
+ self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
1388
+ self.layers = nn.ModuleList([NemotronHBlock(config, layer_idx=idx) for idx in range(config.num_hidden_layers)])
1389
+
1390
+ self.gradient_checkpointing = False
1391
+ self.norm_f = NemotronHRMSNorm(config.hidden_size, eps=config.layer_norm_epsilon)
1392
+ # Initialize weights and apply final processing
1393
+ self._register_load_state_dict_pre_hook(self.load_hook)
1394
+ self.post_init()
1395
+
1396
+ def load_hook(self, state_dict, prefix, *args):
1397
+ for k in state_dict:
1398
+ if "embedding." in k:
1399
+ state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)
1400
+ break
1401
+
1402
+ def get_input_embeddings(self):
1403
+ return self.embeddings
1404
+
1405
+ def set_input_embeddings(self, new_embeddings):
1406
+ self.embeddings = new_embeddings
1407
+
1408
+ @add_start_docstrings_to_model_forward(NEMOTRONH_INPUTS_DOCSTRING)
1409
+ @add_code_sample_docstrings(
1410
+ checkpoint=_CHECKPOINT_FOR_DOC,
1411
+ output_type=NemotronHOutput,
1412
+ config_class=_CONFIG_FOR_DOC,
1413
+ )
1414
+ def forward(
1415
+ self,
1416
+ input_ids: Optional[torch.LongTensor] = None,
1417
+ inputs_embeds: Optional[torch.LongTensor] = None,
1418
+ position_ids: Optional[torch.LongTensor] = None,
1419
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
1420
+ use_cache: Optional[bool] = None,
1421
+ output_attentions: Optional[bool] = None,
1422
+ output_hidden_states: Optional[bool] = None,
1423
+ return_dict: Optional[bool] = None,
1424
+ cache_position: Optional[torch.LongTensor] = None,
1425
+ attention_mask: Optional[torch.Tensor] = None,
1426
+ **kwargs,
1427
+ ) -> Union[Tuple, NemotronHOutput]:
1428
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1429
+ output_hidden_states = (
1430
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1431
+ )
1432
+ # use_cache = use_cache if use_cache is not None else self.config.use_cache
1433
+ use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False)
1434
+
1435
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1436
+
1437
+ if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor
1438
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
1439
+
1440
+ if inputs_embeds is None:
1441
+ inputs_embeds = self.embeddings(input_ids)
1442
+
1443
+ if self.gradient_checkpointing and self.training and use_cache:
1444
+ logger.warning_once(
1445
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
1446
+ )
1447
+ use_cache = False
1448
+
1449
+ # From zamba_modeling.py
1450
+ if use_cache and cache_params is None:
1451
+ logger.warning_once(
1452
+ "NemotronH requires an initialized `NemotronHHybridDynamicCache` to return a cache. None was "
1453
+ "provided, so no cache will be returned."
1454
+ )
1455
+
1456
+ hidden_states = inputs_embeds
1457
+
1458
+ if cache_position is None:
1459
+ cache_position = torch.arange(hidden_states.shape[1], device=hidden_states.device)
1460
+ if position_ids is None:
1461
+ position_ids = cache_position.unsqueeze(0)
1462
+
1463
+ causal_mask = self._update_causal_mask(attention_mask, inputs_embeds, cache_position)
1464
+ mamba_mask = self._update_mamba_mask(attention_mask, cache_position)
1465
+
1466
+ all_hidden_states = () if output_hidden_states else None
1467
+ all_self_attns = () if output_attentions else None
1468
1469
+
1470
+ for layer_idx, mixer_block in enumerate(self.layers):
1471
+ # Depending on the layer type we opt for 2D base attention mask (Mamba) or 4D causal mask (Attention)
1472
+ if mixer_block.block_type == "mamba":
1473
+ layer_mask = mamba_mask
1474
+ elif mixer_block.block_type == "attention":
1475
+ layer_mask = causal_mask
1476
+ elif mixer_block.block_type in ["mlp", "moe"]:
1477
+ layer_mask = None
1478
+ else:
1479
+ raise ValueError(f"Invalid block_type: {mixer_block.block_type}")
1480
+
1481
+ if output_hidden_states:
1482
+ all_hidden_states += (hidden_states,)
1483
+
1484
+ if self.gradient_checkpointing and self.training:
1485
+ hidden_states = self._gradient_checkpointing_func(
1486
+ mixer_block.__call__, hidden_states, cache_params, cache_position, layer_mask
1487
+ )
1488
+ else:
1489
+ hidden_states = mixer_block(
1490
+ hidden_states,
1491
+ cache_params=cache_params,
1492
+ cache_position=cache_position,
1493
+ attention_mask=layer_mask,
1494
+ )
1495
+
1496
+ # TODO: Store attentions
1497
+ # if output_attentions:
1498
+ # if layer_outputs[1] is not None:
1499
+ # # append attentions only of attention layers. Mamba layers return `None` as the attention weights
1500
+ # all_self_attns += (layer_outputs[1],)
1501
+
1502
+ # TODO (Check): should it happen before the forward pass?
1503
+ # if output_hidden_states:
1504
+ # all_hidden_states = all_hidden_states + (hidden_states,)
1505
+
1506
+ hidden_states = self.norm_f(hidden_states)
1507
+
1508
+ if output_hidden_states:
1509
+ all_hidden_states = all_hidden_states + (hidden_states,)
1510
+
1511
+ if not return_dict:
1512
+ return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None)
1513
+
1514
+ return NemotronHOutput(
1515
+ last_hidden_state=hidden_states,
1516
+ cache_params=cache_params if use_cache else None,
1517
+ hidden_states=all_hidden_states,
1518
+ attentions=all_self_attns,
1519
+ )
1520
+
1521
+ # Copied from transformers.models.jamba.modeling_jamba.JambaModel._update_causal_mask
1522
+ def _update_causal_mask(self, attention_mask, input_tensor, cache_position):
1523
+ if self.config._attn_implementation == "flash_attention_2":
1524
+ if attention_mask is not None and 0.0 in attention_mask:
1525
+ return attention_mask
1526
+ return None
1527
+
1528
+ dtype, device = input_tensor.dtype, input_tensor.device
1529
+ min_dtype = torch.finfo(dtype).min
1530
+ sequence_length = input_tensor.shape[1]
1531
+ target_length = cache_position[-1] + 1
1532
+
1533
+ causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device)
1534
+ if sequence_length != 1:
1535
+ causal_mask = torch.triu(causal_mask, diagonal=1)
1536
+ causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
1537
+ causal_mask = causal_mask[None, None, :, :].expand(input_tensor.shape[0], 1, -1, -1)
1538
+ if attention_mask is not None:
1539
+ causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
1540
+ if attention_mask.dim() == 2:
1541
+ mask_length = attention_mask.shape[-1]
1542
+ padding_mask = causal_mask[..., :mask_length].eq(0.0) * attention_mask[:, None, None, :].eq(0.0)
1543
+ causal_mask[..., :mask_length] = causal_mask[..., :mask_length].masked_fill(padding_mask, min_dtype)
1544
+
1545
+ if (
1546
+ self.config._attn_implementation == "sdpa"
1547
+ and attention_mask is not None
1548
+ and attention_mask.device.type == "cuda"
1549
+ ):
1550
+ # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
1551
+ # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1552
+ # Details: https://github.com/pytorch/pytorch/issues/110213
1553
+ causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
1554
+
1555
+ return causal_mask
1556
+
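The core mask construction in `_update_causal_mask` (before any padding mask is merged in) can be reproduced standalone; the lengths below are hypothetical:

```python
import torch

dtype = torch.float32
min_dtype = torch.finfo(dtype).min
seq_len = target_len = 3
cache_position = torch.arange(seq_len)

mask = torch.full((seq_len, target_len), min_dtype, dtype=dtype)
mask = torch.triu(mask, diagonal=1)  # block strictly-future keys
# The comparison against cache_position matters when target_len > seq_len
# (cached decoding); here it leaves the triangular pattern unchanged.
mask *= torch.arange(target_len) > cache_position.reshape(-1, 1)
# Row i is 0.0 (attend) for keys <= i and min_dtype (blocked) for keys > i.
assert mask[0, 1] == min_dtype
assert mask[2, 0] == 0.0
```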
1557
+ def _update_mamba_mask(self, attention_mask, cache_position):
1558
+ """
1559
+ No need for zeroing states when
1560
+ 1. Cached forward
1561
+ 2. Attending to all inputs
1562
+ """
1563
+ mamba_mask = attention_mask
1564
+ if cache_position[0] > 0 or (attention_mask is not None and torch.all(attention_mask == 1)):
1565
+ mamba_mask = None
1566
+ return mamba_mask
1567
+
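The two early-exit rules in `_update_mamba_mask` can be illustrated with a standalone copy of the function and hypothetical inputs:

```python
import torch

# Standalone copy of the _update_mamba_mask logic above: the 2D mask is only
# kept for an uncached prefill that actually contains padding.
def update_mamba_mask(attention_mask, cache_position):
    if cache_position[0] > 0 or (attention_mask is not None and torch.all(attention_mask == 1)):
        return None
    return attention_mask

pad_mask = torch.tensor([[0, 1, 1, 1]])  # left padding present
full_mask = torch.ones(1, 4)             # attending to all inputs
prefill = torch.arange(4)                # uncached forward
decode = torch.tensor([4])               # cached decoding step

assert update_mamba_mask(pad_mask, prefill) is pad_mask  # padding at prefill: keep
assert update_mamba_mask(full_mask, prefill) is None     # attending to everything: drop
assert update_mamba_mask(pad_mask, decode) is None       # cached forward: drop
```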
1568
+
1569
+ @add_start_docstrings(
1570
+ """
1571
+ The NEMOTRONH Model transformer with a language modeling head on top (linear layer with weights not tied to the input
1572
+ embeddings).
1573
+ """,
1574
+ NEMOTRONH_START_DOCSTRING,
1575
+ )
1576
+ class NemotronHForCausalLM(NemotronHPreTrainedModel, GenerationMixin):
1577
+ _tied_weights_keys = ["lm_head.weight"]
1578
+
1579
+ def __init__(self, config):
1580
+ super().__init__(config)
1581
+ self.backbone = NemotronHModel(config)
1582
+ self.vocab_size = config.vocab_size
1583
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1584
+
1585
+ # Initialize weights and apply final processing
1586
+ self.post_init()
1587
+
1588
+ def get_input_embeddings(self):
1589
+ return self.backbone.get_input_embeddings()
1590
+
1591
+ def set_input_embeddings(self, new_embeddings):
1592
+ return self.backbone.set_input_embeddings(new_embeddings)
1593
+
1594
+ def get_output_embeddings(self):
1595
+ return self.lm_head
1596
+
1597
+ def set_output_embeddings(self, new_embeddings):
1598
+ self.lm_head = new_embeddings
1599
+
1600
+ def get_decoder(self):
+ return self.backbone
+
+ def set_decoder(self, decoder):
+ self.backbone = decoder
1605
+
1606
+ def prepare_inputs_for_generation(
1607
+ self,
1608
+ input_ids,
1609
+ past_key_values=None,
1610
+ attention_mask=None,
1611
+ inputs_embeds=None,
1612
+ cache_position=None,
1613
+ position_ids=None,
1614
+ use_cache=True,
1615
+ **kwargs,
1616
+ ):
1617
+ # Copy from https://github.com/huggingface/transformers/blob/main/src/transformers/models/jamba/modeling_jamba.py
1618
+ # Overwitten -- uses `cache_params` as opposed to `past_key_values`
1619
+ empty_past_kv = past_key_values is None
1620
+
1621
+ # If we have cache: let's slice `input_ids` through `cache_position`, to keep only the unprocessed tokens
1622
+ # Exception 1: when passing input_embeds, input_ids may be missing entries
1623
+ # Exception 2: some generation methods do special slicing of input_ids, so we don't need to do it here
1624
+ # Exception 3: with synced GPUs cache_position may go out of bounds, but we only want dummy token in that case.
1625
+ # (we can't check exception 3 while compiling)
1626
+ if not empty_past_kv:
1627
+ if (
1628
+ inputs_embeds is not None # Exception 1
1629
+ or cache_position[-1] >= input_ids.shape[1] # Exception 3
1630
+ ):
1631
+ input_ids = input_ids[:, -cache_position.shape[0] :]
1632
+ elif input_ids.shape[1] != cache_position.shape[0]: # Default case (the "else", a no op, is Exception 2)
1633
+ input_ids = input_ids[:, cache_position]
1634
+ else:
1635
+ past_key_values = HybridMambaAttentionDynamicCache(
1636
+ self.config, input_ids.shape[0], self.dtype, device=self.device
1637
+ )
1638
+
1639
+ if attention_mask is not None and position_ids is None:
1640
+ # create position_ids on the fly for batch generation
1641
+ position_ids = attention_mask.long().cumsum(-1) - 1
1642
+ position_ids.masked_fill_(attention_mask == 0, 1)
1643
+ if not empty_past_kv:
1644
+ position_ids = position_ids[:, -input_ids.shape[1] :]
1645
+
1646
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1647
+ if inputs_embeds is not None and empty_past_kv:
1648
+ model_inputs = {"inputs_embeds": inputs_embeds}
1649
+ else:
1650
+ model_inputs = {"input_ids": input_ids.contiguous()} # `contiguous()` needed for compilation use cases
1651
+
1652
+ model_inputs.update(
1653
+ {
1654
+ "position_ids": position_ids,
1655
+ "past_key_values": past_key_values,
1656
+ "use_cache": use_cache,
1657
+ "attention_mask": attention_mask,
1658
+ "logits_to_keep": self.config.num_logits_to_keep,
1659
+ "cache_position": cache_position,
1660
+ }
1661
+ )
1662
+ return model_inputs
1663
+
1664
+ @add_start_docstrings_to_model_forward(NEMOTRONH_INPUTS_DOCSTRING)
1665
+ @add_code_sample_docstrings(
1666
+ checkpoint=_CHECKPOINT_FOR_DOC,
1667
+ output_type=NemotronHCausalLMOutput,
1668
+ config_class=_CONFIG_FOR_DOC,
1669
+ )
1670
+ def forward(
1671
+ self,
1672
+ input_ids: Optional[torch.LongTensor] = None,
1673
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1674
+ position_ids: Optional[torch.LongTensor] = None,
1675
+ cache_params: Optional[HybridMambaAttentionDynamicCache] = None,
1676
+ labels: Optional[torch.LongTensor] = None,
1677
+ output_attentions: Optional[bool] = None,
1678
+ output_hidden_states: Optional[bool] = None,
1679
+ return_dict: Optional[bool] = None,
1680
+ use_cache: Optional[bool] = None,
1681
+ cache_position: Optional[torch.Tensor] = None,
1682
+ attention_mask: Optional[torch.Tensor] = None,
1683
+ **kwargs, # for now we need this for generation
1684
+ ) -> Union[Tuple, NemotronHCausalLMOutput]:
1685
+ r"""
1686
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1687
+ Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1688
+ `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
1689
+ are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
1690
+ """
1691
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1692
+
1693
+ output_hidden_states = (
1694
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1695
+ )
1696
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1697
+
1698
+ nemotron_h_outputs = self.backbone(
1699
+ input_ids,
1700
+ cache_params=cache_params,
1701
+ inputs_embeds=inputs_embeds,
1702
+ output_attentions=output_attentions,
1703
+ output_hidden_states=output_hidden_states,
1704
+ return_dict=return_dict,
1705
+ use_cache=use_cache,
1706
+ cache_position=cache_position,
1707
+ attention_mask=attention_mask,
1708
+ )
1709
+ hidden_states = nemotron_h_outputs[0]
1710
+
1711
+ # TODO: Check zamba_modeling.py: https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/models/zamba/modeling_zamba.py#L1284C1-L1286C2
1712
+ #logits = self.lm_head(hidden_states.to(self.lm_head.weight.dtype)).float()
1713
+ logits = self.lm_head(hidden_states.to(self.lm_head.weight.dtype)).float()
1714
+
1715
+ loss = None
1716
+ if labels is not None:
1717
+ # move labels to correct device to enable model parallelism
1718
+ labels = labels.to(logits.device)
1719
+ # Shift so that tokens < n predict n
1720
+ shift_logits = logits[..., :-1, :].contiguous()
1721
+ shift_labels = labels[..., 1:].contiguous()
1722
+ # Flatten the tokens
1723
+ loss_fct = CrossEntropyLoss()
1724
+ loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
1725
+
1726
+ if not return_dict:
1727
+ output = (logits,) + nemotron_h_outputs[1:]
1728
+ return ((loss,) + output) if loss is not None else output
1729
+
1730
+ return NemotronHCausalLMOutput(
1731
+ loss=loss,
1732
+ logits=logits,
1733
+ cache_params=nemotron_h_outputs.cache_params,
1734
+ hidden_states=nemotron_h_outputs.hidden_states,
1735
+ attentions=nemotron_h_outputs.attentions,
1736
+ )
nano_v3_reasoning_parser.py ADDED
@@ -0,0 +1,19 @@
+ from vllm.reasoning.abs_reasoning_parsers import ReasoningParserManager
+ from vllm.reasoning.deepseek_r1_reasoning_parser import DeepSeekR1ReasoningParser
+
+
+ @ReasoningParserManager.register_module("nano_v3")
+ class NanoV3ReasoningParser(DeepSeekR1ReasoningParser):
+     def extract_reasoning(self, model_output, request):
+         reasoning_content, final_content = super().extract_reasoning(
+             model_output, request
+         )
+         if (
+             hasattr(request, "chat_template_kwargs")
+             and request.chat_template_kwargs
+             and request.chat_template_kwargs.get("enable_thinking") is False
+             and final_content is None
+         ):
+             reasoning_content, final_content = final_content, reasoning_content
+
+         return reasoning_content, final_content
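The parser above corrects for the case where thinking is disabled: the base DeepSeek-R1 parser can then classify the entire completion as reasoning, leaving `final_content` empty, so the two fields are swapped back. A minimal simulation of that swap outside vLLM (stub request object and hypothetical `swap_if_thinking_disabled` name; not the actual class):

```python
from types import SimpleNamespace

def swap_if_thinking_disabled(reasoning_content, final_content, request):
    # Mirrors the override: with enable_thinking=False, a completion that the
    # base parser filed entirely under reasoning is really the final answer.
    kwargs = getattr(request, "chat_template_kwargs", None)
    if kwargs and kwargs.get("enable_thinking") is False and final_content is None:
        reasoning_content, final_content = final_content, reasoning_content
    return reasoning_content, final_content

# Thinking disabled: the whole output becomes the final answer
req = SimpleNamespace(chat_template_kwargs={"enable_thinking": False})
assert swap_if_thinking_disabled("4", None, req) == (None, "4")

# Thinking enabled: fields pass through unchanged
req = SimpleNamespace(chat_template_kwargs={"enable_thinking": True})
assert swap_if_thinking_disabled("thinking...", "4", req) == ("thinking...", "4")
```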
nemo-evaluator-launcher-configs/local_nvidia_nemotron_3_nano_30b_a3b.yaml ADDED
@@ -0,0 +1,160 @@
+ # SPDX-FileCopyrightText: Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+ # SPDX-License-Identifier: Apache-2.0
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ defaults:
+   - execution: local
+   - deployment: none
+   - _self_
+
+ execution:
+   output_dir: ./results_nvidia_nemotron_3_nano_30b_a3b
+   mounts:
+     evaluation:
+       ./hf_cache: /root/.cache/huggingface
+   env_vars:
+     evaluation: {}
+
+ target:
+   api_endpoint:
+     model_id: nvidia/nemotron-nano-3-30b-a3b
+     url: https://integrate.api.nvidia.com/v1/chat/completions
+     api_key_name: NGC_API_KEY  # API Key with access to build.nvidia.com
+
+ evaluation:
+   env_vars:
+     HF_TOKEN: HF_TOKEN
+     JUDGE_API_KEY: JUDGE_API_KEY  # API Key with access to gpt-4o for HLE
+     HF_HOME: HF_HOME
+   nemo_evaluator_config:
+     config:
+       params:
+         max_new_tokens: 131072
+         temperature: 0.99999
+         top_p: 0.99999
+         parallelism: 512
+         request_timeout: 3600
+         max_retries: 10
+         extra:
+           tokenizer: NVIDIA-Nemotron-Nano-3-30B-A3B-BF16
+           tokenizer_backend: huggingface
+     target:
+       api_endpoint:
+         adapter_config:
+           use_caching: true
+           tracking_requests_stats: true
+           log_failed_requests: true
+           use_request_logging: true
+           max_logged_requests: 10
+           use_response_logging: true
+           max_logged_responses: 10
+   tasks:
+     - name: ns_bfcl_v3
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             temperature: 0.6
+             top_p: 0.95
+             parallelism: 32
+             extra:
+               num_repeats: 1
+               args: ++use_client_parsing=False
+         target:
+           api_endpoint:
+             adapter_config:
+               use_caching: false
+     - name: ns_bfcl_v4
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             max_new_tokens: 8192
+             parallelism: 128
+             temperature: 0.6
+             top_p: 0.95
+             extra:
+               num_repeats: 1
+               args: ++use_client_parsing=False
+     - name: ns_livecodebench
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 8
+               dataset_split: test_v5_2407_2412
+     - name: ns_mmlu_pro
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 1
+               args: "++prompt_config=eval/aai/mcq-10choices-boxed"
+     - name: ns_gpqa
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 8
+               args: "++prompt_config=eval/aai/mcq-4choices"
+     - name: ns_aime2025
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+         JUDGE_API_KEY: JUDGE_API_KEY
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 64
+               args: ++prompt_config=/prompt_templates/math-oai.yaml
+     - name: ns_scicode
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+         JUDGE_API_KEY: JUDGE_API_KEY
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 8
+     - name: ns_ifbench
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 8
+     - name: ns_hle
+       env_vars:
+         HF_TOKEN: HF_TOKEN
+         JUDGE_API_KEY: JUDGE_API_KEY
+       nemo_evaluator_config:
+         config:
+           params:
+             extra:
+               num_repeats: 1
+               judge_support: true
+               judge:
+                 parallelism: 16
+                 model_id: openai/gpt-4o
+                 url: <OPENAI_API_URL_FOR_JUDGE>
+                 api_key: JUDGE_API_KEY
notebook.ipynb ADDED
@@ -0,0 +1,132 @@
+ {
+   "nbformat": 4,
+   "nbformat_minor": 0,
+   "metadata": {
+     "colab": {
+       "provenance": [],
+       "gpuType": "A100"
+     },
+     "kernelspec": {
+       "name": "python3",
+       "display_name": "Python 3"
+     },
+     "language_info": {
+       "name": "python"
+     },
+     "accelerator": "GPU"
+   },
+   "cells": [
+     {
+       "cell_type": "code",
+       "execution_count": null,
+       "metadata": {
+         "id": "aCl-IzLoDr2H"
+       },
+       "outputs": [],
+       "source": [
+         "!pip install -U transformers mamba-ssm"
+       ]
+     },
+     {
+       "cell_type": "markdown",
+       "source": [
+         "# Load Models"
+       ],
+       "metadata": {
+         "id": "SpRo_KJIRsxv"
+       }
+     },
+     {
+       "cell_type": "code",
+       "source": [
+         "import torch\n",
+         "from transformers import AutoTokenizer, AutoModelForCausalLM\n",
+         "\n",
+         "# Load tokenizer and model\n",
+         "tokenizer = AutoTokenizer.from_pretrained(\"nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16\")\n",
+         "model = AutoModelForCausalLM.from_pretrained(\n",
+         "    \"nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16\",\n",
+         "    torch_dtype=torch.bfloat16,\n",
+         "    trust_remote_code=True,\n",
+         "    device_map=\"auto\"\n",
+         ")\n"
+       ],
+       "metadata": {
+         "id": "waveliieEI1n"
+       },
+       "execution_count": null,
+       "outputs": []
+     },
+     {
+       "cell_type": "markdown",
+       "source": [
+         "# Define Input with Tools"
+       ],
+       "metadata": {
+         "id": "xjVkqaSdRx0_"
+       }
+     },
+     {
+       "cell_type": "code",
+       "source": [
+         "from transformers.utils import get_json_schema\n",
+         "\n",
+         "def multiply(a: float, b: float):\n",
+         "    \"\"\"\n",
+         "    A function that multiplies two numbers\n",
+         "\n",
+         "    Args:\n",
+         "        a: The first number to multiply\n",
+         "        b: The second number to multiply\n",
+         "    \"\"\"\n",
+         "    return a * b\n",
+         "\n",
+         "messages = [\n",
+         "    {\"role\": \"user\", \"content\": \"what is 2.0909090923 x 0.897987987\"},\n",
+         "]\n",
+         "\n",
+         "tokenized_chat = tokenizer.apply_chat_template(\n",
+         "    messages,\n",
+         "    tools=[\n",
+         "        multiply\n",
+         "    ],\n",
+         "    tokenize=True,\n",
+         "    add_generation_prompt=True,\n",
+         "    return_tensors=\"pt\"\n",
+         ").to(model.device)\n"
+       ],
+       "metadata": {
+         "id": "zxZZ7iMZETsw"
+       },
+       "execution_count": null,
+       "outputs": []
+     },
+     {
+       "cell_type": "markdown",
+       "source": [
+         "# Inference"
+       ],
+       "metadata": {
+         "id": "SVBAG3dLRw4v"
+       }
+     },
+     {
+       "cell_type": "code",
+       "source": [
+         "outputs = model.generate(\n",
+         "    tokenized_chat,\n",
+         "    max_new_tokens=1024,\n",
+         "    temperature=1.0,\n",
+         "    top_p=1.0,\n",
+         "    eos_token_id=tokenizer.eos_token_id\n",
+         ")\n",
+         "print(tokenizer.decode(outputs[0]))"
+       ],
+       "metadata": {
+         "id": "BKYqPT5ORDx3"
+       },
+       "execution_count": null,
+       "outputs": []
+     }
+   ]
+ }
privacy.md ADDED
@@ -0,0 +1,13 @@
+ | Field | Response |
+ | :---- | :---- |
+ | Generatable or reverse engineerable personal data? | No |
+ | Personal data used to create this model? | No |
+ | Was consent obtained for any personal data used? | Not Applicable |
+ | A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of personal data in the training data, where relevant and applicable. | For synthetic data generation, we used only prompts that do not contain any personal data. |
+ | How often is the dataset reviewed? | Before Release |
+ | Is there provenance for all datasets used in training? | Yes |
+ | Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
+ | Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
+ | Applicable Privacy Policy | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/) |
+ | During AI model development, strict adherence to copyright policy ensured compliance through risk mitigation and legal reviews. Post-data collection, reserved rights content is identified and removed, with verified opt-out processes for rightsholders. Detailed records document due diligence and transparency. | True |
+ | We employ automated tools and data processing techniques during pre-training to identify and filter certain categories of personal information. Scans of training datasets detected no PII. | True. We employ automated tools and data processing techniques during pre-training to scan for Personally Identifiable Information (PII) and to filter certain categories of personal information, including public-facing contact details such as email addresses and phone numbers. Scans of Common Crawl, CC-News, and Wikimedia datasets did not detect PII in the majority of samples; however, Microsoft Presidio indicated potential findings, including business contact information embedded in natural language, such as email addresses and phone numbers. Verified instances of PII were removed through a combination of automated filtering and human-in-the-loop validation. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy. |
safety.md ADDED
@@ -0,0 +1,9 @@
+ | Field | Response |
+ | :---- | :---- |
+ | Model Application Field(s): | Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning, Customer Service |
+ | Describe the life critical impact (if present). | Not Applicable |
+ | Description of methods implemented in data acquisition or processing, if any, to address other types of potentially harmful data in the training, testing, and validation data: | We used a guard model for content safety to exclude potentially harmful data from training. |
+ | Description of any methods implemented in data acquisition or processing, if any, to address illegal or harmful content in the training data, including, but not limited to, child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII): | We used a Gemma-3 4B-based guard model trained on the [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) for content safety to exclude potentially illegal or harmful content from training. |
+ | Use Case Restrictions: | Abide by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). |
+ | Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions are enforced during training, and dataset license constraints are adhered to. |
+ | This AI model was developed based on our policies to ensure responsible data handling and risk mitigation. The datasets used for training have been scanned for harmful and illegal content, consistent with our policies, including scanning for Child Sexual Abuse Material (CSAM). Ongoing review and monitoring mechanisms are in place based on our policies to maintain data integrity. | True. We use the [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) and an internal safety dataset specialized for minority sexuality for content safety evaluation to ensure the safety of this model. |
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6021eb6847e682f89aa52d5eb6e8c7d902a23acfc8137e25211cf84828f1592
+ size 17077485
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff