# gemma-2-9b-it-FP8

## Model Overview
- **Model Architecture:** Gemma 2
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/8/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It achieves an average score of 73.49 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.23.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) to the FP8 data type, ready for inference with vLLM >= 0.5.1.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with a single instance of every token in random order.
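
Concretely, per-tensor symmetric quantization picks one scale for the whole tensor, chosen so that the largest-magnitude entry maps to the FP8 (E4M3) representable maximum of 448. The sketch below illustrates the idea in PyTorch; it is an illustration of the scheme only, not the AutoFP8 implementation, and the helper names are hypothetical.

```python
import torch

# FP8 E4M3 representable maximum (448.0).
FP8_MAX = torch.finfo(torch.float8_e4m3fn).max

def quantize_per_tensor_fp8(w: torch.Tensor):
    # One scale for the whole tensor: the largest magnitude lands at FP8_MAX.
    scale = w.abs().max().clamp(min=1e-12) / FP8_MAX
    w_fp8 = (w / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximation of the original tensor.
    return w_fp8.to(torch.float32) * scale

w = torch.randn(4096, 4096)
w_fp8, scale = quantize_per_tensor_fp8(w)
print((w - dequantize(w_fp8, scale)).abs().max())  # worst-case quantization error
```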

<!-- ## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/gemma-2-9b-it-FP8"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Who are you? Please respond in pirate speak!"},
]

prompts = tokenizer.apply_chat_template(messages, tokenize=False)

llm = LLM(model=model_id)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details. -->

## Creation

This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), as presented in the code snippet below.
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoFP8.

```python
from datasets import load_dataset
...
model.save_quantized(quantized_model_dir)
```
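
The body of the snippet is elided above; for reference, a minimal sketch of the same calibration-and-quantization flow, adapted from the linked example_dataset.py, is shown below. The dataset name and API calls are the ones used in that example; the exact snippet used for this model may differ.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "google/gemma-2-9b-it"
quantized_model_dir = "gemma-2-9b-it-FP8"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

# Calibration samples from ultrachat, rendered with the model's chat template.
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft")
examples = [tokenizer.apply_chat_template(row["messages"], tokenize=False) for row in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

# Static (calibrated) FP8 quantization of weights and activations.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```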

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/gemma-2-9b-it-FP8",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>gemma-2-9b-it</strong></td>
    <td><strong>gemma-2-9b-it-FP8 (this model)</strong></td>
    <td><strong>Recovery</strong></td>
  </tr>
  <tr>
    <td>MMLU (5-shot)</td>
    <td>72.28</td>
    <td>71.99</td>
    <td>99.59%</td>
  </tr>
  <tr>
    <td>ARC Challenge (25-shot)</td>
    <td>71.50</td>
    <td>71.50</td>
    <td>100.0%</td>
  </tr>
  <tr>
    <td>GSM-8K (5-shot, strict-match)</td>
    <td>76.26</td>
    <td>76.87</td>
    <td>100.7%</td>
  </tr>
  <tr>
    <td>Hellaswag (10-shot)</td>
    <td>81.91</td>
    <td>81.70</td>
    <td>99.74%</td>
  </tr>
  <tr>
    <td>Winogrande (5-shot)</td>
    <td>77.11</td>
    <td>78.37</td>
    <td>101.6%</td>
  </tr>
  <tr>
    <td>TruthfulQA (0-shot)</td>
    <td>60.32</td>
    <td>60.52</td>
    <td>100.3%</td>
  </tr>
  <tr>
    <td><strong>Average</strong></td>
    <td><strong>73.23</strong></td>
    <td><strong>73.49</strong></td>
    <td><strong>100.36%</strong></td>
  </tr>
</table>
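
Recovery is the quantized model's score expressed as a percentage of the unquantized baseline. As a quick sanity check on the averages, using the per-task scores copied from the table above:

```python
# Per-task scores from the table: FP8 model vs. unquantized baseline.
fp8 = [71.99, 71.50, 76.87, 81.70, 78.37, 60.52]
baseline = [72.28, 71.50, 76.26, 81.91, 77.11, 60.32]

avg_fp8 = sum(fp8) / len(fp8)                  # 73.49
avg_baseline = sum(baseline) / len(baseline)   # 73.23
print(f"{avg_fp8:.2f} {avg_baseline:.2f} {100 * avg_fp8 / avg_baseline:.2f}%")
# 73.49 73.23 100.36%
```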