---
language:
- en
pipeline_tag: text-generation
---

# Qwen2-72B-Instruct-quantized.w8a16

## Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Like [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/3/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct).
It achieves an average score of 80.01 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 80.09.

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%: at 72 billion parameters, the 16-bit weights occupy roughly 144 GB, while the INT8 weights occupy roughly 72 GB.

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT8 and floating-point representations of the quantized weights.
[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization with a 1% damping factor and 256 sequences of 8,192 random tokens.
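
To illustrate the scheme described above, here is a minimal sketch of symmetric per-channel INT8 quantization (round-to-nearest only; GPTQ additionally updates the not-yet-quantized weights to compensate for rounding error, which this sketch omits). The function names and shapes are illustrative, not from the model card.

```python
import torch

def quantize_per_channel_int8(weight: torch.Tensor):
    """Symmetric per-channel INT8 quantization of a [out_features, in_features] weight."""
    # One scale per output channel, chosen so the largest-magnitude weight maps to 127.
    scales = (weight.abs().amax(dim=1, keepdim=True) / 127.0).clamp_min(1e-8)
    # Round to the nearest integer and clamp to the symmetric INT8 range.
    q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return q, scales

def dequantize_int8(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The linear per-output-dimension scaling maps INT8 back to floating point.
    return q.to(torch.float32) * scales

weight = torch.randn(8, 16)  # toy stand-in for a linear operator's weight
q, scales = quantize_per_channel_int8(weight)
w_hat = dequantize_int8(q, scales)
print((weight - w_hat).abs().max())  # per-channel error is at most scale/2
```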

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Qwen2-72B-Instruct-quantized.w8a16"

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template to plain text; add_generation_prompt=True appends the
# assistant-turn header so the model responds as the assistant.
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

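As a quick sketch of that serving mode (the launch command and default endpoint below follow vLLM's documentation; adjust host, port, and sampling parameters as needed):

```python
# Launch the server first, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model neuralmagic/Qwen2-72B-Instruct-quantized.w8a16
from openai import OpenAI

# vLLM's default local endpoint; the API key is required by the client but unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Qwen2-72B-Instruct-quantized.w8a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.7,
    top_p=0.8,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```
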
### Use with transformers

This model is supported by Transformers through its integration with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) data format.
The following example shows how to use the model with the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Qwen2-72B-Instruct-quantized.w8a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the model's EOS token or Qwen2's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library as presented in the code snippet below.
Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import random

model_id = "Qwen/Qwen2-72B-Instruct"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build calibration data: 256 sequences of 8,192 uniformly random token ids.
max_token_id = len(tokenizer.get_vocab()) - 1
examples = []
for _ in range(num_samples):
    examples.append(
        {
            "input_ids": [random.randint(0, max_token_id) for _ in range(max_seq_len)],
            "attention_mask": max_seq_len * [1],
        }
    )

quantize_config = BaseQuantizeConfig(
    bits=8,                       # INT8 weights
    group_size=-1,                # -1 = per-channel (one scale per output channel)
    desc_act=False,
    model_file_base_name="model",
    damp_percent=0.01,            # 1% damping factor
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id,
    quantize_config,
    device_map="auto",
)

model.quantize(examples)
model.save_pretrained("Qwen2-72B-Instruct-quantized.w8a16")
```
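
The saved checkpoint can then be reloaded through AutoGPTQ for a quick smoke test (a usage sketch, not part of the original recipe):

```python
from auto_gptq import AutoGPTQForCausalLM

# Reload the quantized model produced by the snippet above.
model = AutoGPTQForCausalLM.from_quantized(
    "Qwen2-72B-Instruct-quantized.w8a16",
    device_map="auto",
)
```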

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Qwen2-72B-Instruct-quantized.w8a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

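Recovery is the ratio of the quantized model's score to the unquantized model's score on each benchmark; for example, the average recovery is 80.01 / 80.09 ≈ 99.9%.
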
| Benchmark | Qwen2-72B-Instruct | Qwen2-72B-Instruct-quantized.w8a16 (this model) | Recovery |
| :-------- | :----------------: | :---------------------------------------------: | :------: |
| MMLU (5-shot) | 83.97 | 83.93 | 100.0% |
| ARC Challenge (25-shot) | 71.59 | 72.10 | 100.7% |
| GSM-8K (5-shot, strict-match) | 88.25 | 87.49 | 99.1% |
| HellaSwag (10-shot) | 86.94 | 87.15 | 100.2% |
| Winogrande (5-shot) | 82.79 | 82.48 | 99.6% |
| TruthfulQA (0-shot) | 66.98 | 66.90 | 99.9% |
| **Average** | **80.09** | **80.01** | **99.9%** |