---
license: other
library_name: transformers
base_model:
- nvidia/Mistral-NeMo-Minitron-8B-Base
license_name: nvidia-community-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/
model-index:
- name: Mistral-NeMo-Minitron-8B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 50.04
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.13
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.45
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.03
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.37
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 33.23
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nvidia/Mistral-NeMo-Minitron-8B-Instruct
      name: Open LLM Leaderboard
---
# Mistral-NeMo-Minitron-8B-Instruct
## Model Overview
Mistral-NeMo-Minitron-8B-Instruct is a model for generating responses for various text-generation tasks including roleplaying, retrieval augmented generation, and function calling. It is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base), which was pruned and distilled from [Mistral-NeMo 12B](https://huggingface.co/nvidia/Mistral-NeMo-12B-Base) using [our LLM compression technique](https://arxiv.org/abs/2407.14679). The model was trained using a multi-stage SFT and preference-based alignment technique with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner). For details on the alignment technique, please refer to the [Nemotron-4 340B Technical Report](https://arxiv.org/abs/2406.11704). The model supports a context length of 8,192 tokens.
Try this model on [build.nvidia.com](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct).
**Model Developer:** NVIDIA
**Model Dates:** Mistral-NeMo-Minitron-8B-Instruct was trained between August 2024 and September 2024.
## License
[NVIDIA Community Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/)
## Model Architecture
Mistral-NeMo-Minitron-8B-Instruct uses an embedding size of 4096, 32 attention heads, an MLP intermediate dimension of 11520, and 40 layers in total. It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
**Architecture Type:** Transformer Decoder (Auto-regressive Language Model)
**Network Architecture:** Mistral-NeMo
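These hyperparameters can be verified from the checkpoint's configuration without downloading the weights. A minimal sketch, assuming the checkpoint ships a standard Mistral-style `config.json` (field names such as `hidden_size` and `num_key_value_heads` follow the `transformers` Mistral convention):
```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

print(config.hidden_size)          # embedding size: 4096
print(config.num_attention_heads)  # attention heads: 32
print(config.intermediate_size)    # MLP intermediate dimension: 11520
print(config.num_hidden_layers)    # layers: 40
print(config.num_key_value_heads)  # fewer KV heads than attention heads indicates GQA
```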
## Prompt Format
We recommend using the following prompt template, which was used to fine-tune the model. The model may not perform optimally without it.
```
<extra_id_0>System
{system prompt}
<extra_id_1>User
{prompt}
<extra_id_1>Assistant\n
```
- Note that a newline character `\n` should be added at the end of the prompt.
- We recommend using `<extra_id_1>` as a stop token.
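If you construct prompts by hand instead of using the tokenizer's chat template, the template above can be assembled with plain string formatting. A minimal sketch, where `build_prompt` is an illustrative helper rather than part of the released tokenizer:
```python
def build_prompt(system_prompt: str, user_prompt: str) -> str:
    # Assemble the recommended template; the trailing newline after
    # "<extra_id_1>Assistant" is required, per the note above.
    return (
        "<extra_id_0>System\n"
        f"{system_prompt}\n"
        "<extra_id_1>User\n"
        f"{user_prompt}\n"
        "<extra_id_1>Assistant\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Explain GQA in one sentence.")
```
When generating from such a string, pass `<extra_id_1>` as a stop string, as in the examples below.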
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

# Build the conversation; apply_chat_template renders it into the prompt template above
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

# Stop at the turn delimiter; the tokenizer is needed so generate() can match the stop string
outputs = model.generate(tokenized_chat, max_new_tokens=128, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
print(tokenizer.decode(outputs[0]))
```
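Note that the decoded output above includes the prompt as well as the stop string. A minimal sketch of a common post-processing pattern (not an official API) for recovering just the assistant's reply:
```python
# Decode only the tokens generated after the prompt.
reply = tokenizer.decode(outputs[0][tokenized_chat.shape[-1]:], skip_special_tokens=True)

# Trim the stop string if generation ended on it.
reply = reply.split("<extra_id_1>")[0].strip()
print(reply)
```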
You can also use `pipeline`, but you need to create a tokenizer object and pass it to the pipeline call explicitly so that the stop string can be applied.
```python
from transformers import AutoTokenizer, pipeline

# The tokenizer must be passed explicitly so generate() can match the stop string
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="nvidia/Mistral-NeMo-Minitron-8B-Instruct")
pipe(messages, max_new_tokens=64, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
```
## Evaluation Results
| Category | Benchmark | # Shots | Mistral-NeMo-Minitron-8B-Instruct |
|:----------------------|:----------------------|--------:|----------------------------------:|
| General | MMLU | 5 | 70.4 |
| | MT Bench (GPT4-Turbo) | 0 | 7.86 |
| Math | GSM8K | 0 | 87.1 |
| Reasoning | GPQA | 0 | 31.5 |
| Code | HumanEval | 0 | 71.3 |
| | MBPP | 0 | 72.5 |
| Instruction Following | IFEval | 0 | 84.4 |
| Tool Use | BFCL v2 Live | 0 | 67.6 |
## AI Safety Efforts
The Mistral-NeMo-Minitron-8B-Instruct model underwent AI safety evaluation, including adversarial testing via three distinct methods:
- [Garak](https://github.com/leondz/garak), an automated LLM vulnerability scanner that probes for common weaknesses, including prompt injection and data leakage.
- [AEGIS](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-1.0), a content safety evaluation dataset and LLM-based content safety classifier that adheres to a broad taxonomy of 13 categories of critical risks in human-LLM interactions.
- Human content red teaming, leveraging human interaction with and evaluation of the model's responses.
## Limitations
The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not contain anything explicitly offensive. This issue could be exacerbated without the use of the recommended prompt template. If you are going to use this model in an agentic workflow, validate that the imported packages are from a trusted source to ensure end-to-end security.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the [Model Card++](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct/modelcard). Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nvidia__Mistral-NeMo-Minitron-8B-Instruct).
| Metric |Value|
|-------------------|----:|
|Avg. |21.71|
|IFEval (0-Shot) |50.04|
|BBH (3-Shot) |34.13|
|MATH Lvl 5 (4-Shot)| 0.45|
|GPQA (0-shot) | 5.03|
|MuSR (0-shot) | 7.37|
|MMLU-PRO (5-shot) |33.23|