---
language:
- en
pipeline_tag: text-generation
---

# Meta-Llama-3-70B-Instruct-quantized.w8a16

## Model Overview
- **Model Architecture:** Meta-Llama-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/2/2024
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
It achieves an average score of 77.90 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 79.18.

### Model Optimizations

This model was obtained by quantizing the weights of [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50% (for a 70B-parameter model, roughly 140 GB of weights at 16 bits versus roughly 70 GB at 8 bits).

Only the weights of the linear operators within transformers blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT8 and floating-point representations of the quantized weights.
[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization with a 10% damping factor and 128 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
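
As a minimal illustrative sketch of what this scheme means (not the actual AutoGPTQ implementation, which additionally uses calibration data and second-order error compensation), symmetric per-channel INT8 quantization of a weight matrix can be written as:

```python
import torch

def quantize_per_channel_int8(weight: torch.Tensor):
    """Symmetric per-channel INT8 quantization of a [out_features, in_features] weight."""
    # One scale per output channel, chosen so the largest magnitude maps to 127.
    scales = weight.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The linear per-output-dimension scaling maps INT8 back to floating point.
    return q.to(torch.float32) * scales

weight = torch.randn(8, 16)
q, scales = quantize_per_channel_int8(weight)
print((weight - dequantize(q, scales)).abs().max())  # small quantization error
```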

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below (using 2 GPUs).

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template to a prompt string, appending the assistant header
# so generation starts from the assistant's turn.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=2)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
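
For example, once a server is running (e.g. started with `python -m vllm.entrypoints.openai.api_server --model neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16 --tensor-parallel-size 2`; exact flags depend on your vLLM version), it can be queried with the OpenAI Python client. This is an illustrative sketch, not part of the original card:

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is unused by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```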

### Use with transformers

This model is supported by Transformers via its integration with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) data format.
The following example shows how the model can be used with the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the generic EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt tokens and decode only the newly generated response.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library as presented in the code snippet below.
Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

num_samples = 128
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

examples = [tokenizer(example["text"], padding=False, max_length=max_seq_len, truncation=True) for example in ds]

quantize_config = BaseQuantizeConfig(
    bits=8,                          # weight-only INT8 quantization
    group_size=-1,                   # -1 = per-channel (one scale per output channel)
    desc_act=False,
    model_file_base_name="model",
    damp_percent=0.1,                # 10% damping factor
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id,
    quantize_config,
    device_map="auto",
)

model.quantize(examples)
model.save_pretrained("Meta-Llama-3-70B-Instruct-quantized.w8a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command (with 8 GPUs):
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16",tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Meta-Llama-3-70B-Instruct | Meta-Llama-3-70B-Instruct-quantized.w8a16 (this model) | Recovery |
| :-------- | :-----------------------: | :----------------------------------------------------: | :------: |
| MMLU (5-shot) | 80.18 | 78.69 | 98.1% |
| ARC Challenge (25-shot) | 72.44 | 71.59 | 98.8% |
| GSM-8K (5-shot, strict-match) | 90.83 | 86.43 | 95.2% |
| Hellaswag (10-shot) | 85.54 | 85.65 | 100.1% |
| Winogrande (5-shot) | 83.19 | 83.11 | 99.9% |
| TruthfulQA (0-shot) | 62.92 | 61.94 | 98.4% |
| **Average** | **79.18** | **77.90** | **98.4%** |
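
Here, "Recovery" is the quantized model's score as a percentage of the unquantized model's score. A minimal sketch (added for illustration) that reproduces the Recovery column and the Average row from the per-task numbers above:

```python
baseline = {"MMLU": 80.18, "ARC-c": 72.44, "GSM-8K": 90.83,
            "Hellaswag": 85.54, "Winogrande": 83.19, "TruthfulQA": 62.92}
quantized = {"MMLU": 78.69, "ARC-c": 71.59, "GSM-8K": 86.43,
             "Hellaswag": 85.65, "Winogrande": 83.11, "TruthfulQA": 61.94}

# Per-task recovery: quantized score as a fraction of the unquantized score.
for task in baseline:
    print(f"{task}: {100 * quantized[task] / baseline[task]:.1f}%")

avg_base = sum(baseline.values()) / len(baseline)        # 79.18
avg_quant = sum(quantized.values()) / len(quantized)     # 77.90
print(f"Average recovery: {100 * avg_quant / avg_base:.1f}%")  # 98.4%
```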