---
language:
- en
pipeline_tag: text-generation
license: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0
---

# Mistral-Nemo-Instruct-2407-quantized.w8a16

## Model Overview
- **Model Architecture:** Mistral
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/31/2024
- **Version:** 1.0
- **License(s):** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Model Developers:** Neural Magic

Quantized version of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
It achieves an average score of 71.79 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 71.57.

### Model Optimizations

This model was obtained by quantizing the weights of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps between the INT8 and floating-point representations of the quantized weights.
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library. GPTQ used a 1% damping factor and 256 sequences of 8,192 random tokens for calibration.
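
For intuition, a round-to-nearest sketch of this symmetric per-channel scheme is shown below. GPTQ produces the same INT8 format but selects the quantized values to minimize layer-wise reconstruction error on the calibration data; the `quantize_w8a16` helper here is purely illustrative and not part of llm-compressor.

```python
import torch

def quantize_w8a16(weight: torch.Tensor):
    """Round-to-nearest sketch of symmetric per-channel INT8 weight quantization."""
    w = weight.float()
    # One scale per output channel (row) maps that row's values onto [-127, 127]
    scales = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scales), -127, 127).to(torch.int8)
    return q, scales

w16 = torch.randn(4096, 4096, dtype=torch.float16)
q, s = quantize_w8a16(w16)
w_hat = (q.float() * s).to(torch.float16)  # dequantize: per-row rescale back to float
```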

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Mistral-Nemo-Instruct-2407-quantized.w8a16"
number_gpus = 1
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
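
For example, after launching an OpenAI-compatible server, the model can be queried with the standard `openai` client. This is a minimal sketch; the default port and the placeholder API key are assumptions about a local setup:

```python
# Launch the server first, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model neuralmagic/Mistral-Nemo-Instruct-2407-quantized.w8a16
# Then point the OpenAI client at the local endpoint (default port 8000 assumed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Mistral-Nemo-Instruct-2407-quantized.w8a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
)
print(response.choices[0].message.content)
```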

## Creation

This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as presented in the code snippet below.

```python
from transformers import AutoTokenizer
from datasets import Dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random

model_id = "mistralai/Mistral-Nemo-Instruct-2407"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibration data: 256 sequences of 8,192 tokens drawn uniformly at random from the vocabulary
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})

# Quantize the weights of all Linear modules except lm_head to INT8,
# using GPTQ with a 1% damping factor
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A16",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Mistral-Nemo-Instruct-2407-quantized.w8a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Mistral-Nemo-Instruct-2407-quantized.w8a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Mistral-Nemo-Instruct-2407 | Mistral-Nemo-Instruct-2407-quantized.w8a16 (this model) | Recovery |
| :---- | :----: | :----: | :----: |
| MMLU (5-shot) | 68.49 | 68.34 | 99.8% |
| ARC Challenge (25-shot) | 66.89 | 67.06 | 100.3% |
| GSM-8K (5-shot, strict-match) | 72.40 | 73.16 | 101.0% |
| Hellaswag (10-shot) | 84.44 | 84.43 | 100.0% |
| Winogrande (5-shot) | 82.32 | 82.40 | 100.1% |
| TruthfulQA (0-shot, mc2) | 54.91 | 55.35 | 100.8% |
| **Average** | **71.57** | **71.79** | **100.3%** |
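
The recovery column reports each quantized score as a percentage of the corresponding unquantized score; the figures above follow directly from the per-task results, for example:

```python
# Recovery = 100 * quantized score / unquantized score (values from the table above)
baseline = {"MMLU": 68.49, "ARC Challenge": 66.89, "GSM-8K": 72.40,
            "Hellaswag": 84.44, "Winogrande": 82.32, "TruthfulQA": 54.91}
quantized = {"MMLU": 68.34, "ARC Challenge": 67.06, "GSM-8K": 73.16,
             "Hellaswag": 84.43, "Winogrande": 82.40, "TruthfulQA": 55.35}

for task in baseline:
    print(f"{task}: {100 * quantized[task] / baseline[task]:.1f}%")  # e.g. MMLU: 99.8%
```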