alexmarques committed · verified · commit cb21814 · 1 parent: e2e9e05

Create README.md

Files changed (1): README.md (new file, +211 lines)

---
pipeline_tag: text-generation
tags:
- int8
- vllm
license: gemma
base_model: google/gemma-2-27b-it
---

# gemma-2-27b-it-quantized.w8a16

## Model Overview
- **Model Architecture:** Gemma 2
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in English. Like [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 8/22/2024
- **Version:** 1.0
- **License(s):** [gemma](https://ai.google.dev/gemma/terms)
- **Model Developers:** Neural Magic

Quantized version of [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
It achieves an average score of 73.80 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), matching the average score of 73.80 achieved by the unquantized model.

### Model Optimizations

This model was obtained by quantizing the weights of [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it) to the INT8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50% (for a 27B-parameter model, roughly 54 GB of weights in 16-bit precision versus roughly 27 GB in INT8, before accounting for embeddings and quantization scales).

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied: a linear scale per output dimension maps between the INT8 and floating-point representations of the quantized weights.
Quantization is performed with the [GPTQ](https://arxiv.org/abs/2210.17323) algorithm, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, using a 1% damping factor and 256 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
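
To make the scheme concrete, below is a minimal, illustrative sketch of symmetric per-channel INT8 weight quantization (the function names are hypothetical; GPTQ additionally adjusts the remaining weights to minimize layer-wise reconstruction error, which this sketch omits):

```python
import torch

def quantize_w8_per_channel(weight: torch.Tensor):
    """Symmetric per-channel INT8 quantization of a [out_features, in_features] weight."""
    # One scale per output channel; symmetric quantization means the zero-point is 0.
    scales = (weight.abs().amax(dim=1, keepdim=True) / 127.0).clamp_min(1e-8)
    q = torch.clamp(torch.round(weight / scales), -127, 127).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The linear scale maps the INT8 representation back to floating point.
    return q.to(torch.float32) * scales
```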

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/gemma-2-27b-it-quantized.w8a16"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Who are you? Please respond in pirate speak!"},
]

# Render the chat template to a plain prompt string for vLLM.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id)

outputs = llm.generate(prompt, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
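
As a minimal sketch of that workflow, one can launch an OpenAI-compatible server with `vllm serve neuralmagic/gemma-2-27b-it-quantized.w8a16` and then query it with the standard `openai` Python client (assuming the server is running locally on vLLM's default port, 8000):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
# Start the server first with: vllm serve neuralmagic/gemma-2-27b-it-quantized.w8a16
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/gemma-2-27b-it-quantized.w8a16",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```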

## Creation

This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as shown in the code snippet below.

```python
from transformers import AutoTokenizer
from datasets import load_dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

model_id = "google/gemma-2-27b-it"

num_samples = 256
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

def preprocess_fn(example):
    # Render each calibration conversation with the model's chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

recipe = GPTQModifier(
    targets="Linear",      # quantize only the linear operators
    scheme="W8A16",        # INT8 weights, 16-bit activations
    ignore=["lm_head"],    # keep the output head in full precision
    dampening_frac=0.01,   # 1% damping factor
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("gemma-2-27b-it-quantized.w8a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/gemma-2-27b-it-quantized.w8a16",dtype=auto,add_bos_token=True,max_model_len=4096 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | gemma-2-27b-it | gemma-2-27b-it-quantized.w8a16<br>(this model) | Recovery |
| --------- | -------------: | ---------------------------------------------: | -------: |
| MMLU (5-shot) | 76.34 | 76.34 | 100.0% |
| ARC Challenge (25-shot) | 74.49 | 74.49 | 100.0% |
| GSM-8K (5-shot, strict-match) | 21.00 | 20.32 | 96.8% |
| Hellaswag (10-shot) | 86.03 | 86.04 | 100.0% |
| Winogrande (5-shot) | 77.98 | 77.74 | 99.7% |
| TruthfulQA (0-shot) | 64.60 | 64.51 | 99.9% |
201
+ <tr>
202
+ <td><strong>Average</strong>
203
+ </td>
204
+ <td><strong>67.20</strong>
205
+ </td>
206
+ <td><strong>73.80</strong>
207
+ </td>
208
+ <td><strong>100.0%</strong>
209
+ </td>
210
+ </tr>
211
+ </table>
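
The Recovery column is simply the quantized score expressed as a percentage of the unquantized score. A small sketch reproducing the per-benchmark values from the table above:

```python
# Per-benchmark scores from the table: (unquantized, quantized).
scores = {
    "MMLU": (76.34, 76.34),
    "ARC Challenge": (74.49, 74.49),
    "GSM-8K": (21.00, 20.32),
    "Hellaswag": (86.03, 86.04),
    "Winogrande": (77.98, 77.74),
    "TruthfulQA": (64.60, 64.51),
}

for task, (baseline, quantized) in scores.items():
    recovery = 100.0 * quantized / baseline
    print(f"{task}: {recovery:.1f}% recovery")
```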