---
tags:
- vllm
- vision
- w8a8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: mgoin/pixtral-12b
library_name: transformers
---

# pixtral-12b-quantized.w8a8

## Model Overview
- **Model Architecture:** mgoin/pixtral-12b
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
  - **Activation quantization:** INT8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [mgoin/pixtral-12b](https://huggingface.co/mgoin/pixtral-12b).

### Model Optimizations

This model was obtained by quantizing the weights and activations of [mgoin/pixtral-12b](https://huggingface.co/mgoin/pixtral-12b) to the INT8 data type, ready for inference with vLLM >= 0.5.2.

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/pixtral-12b-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs (Pixtral uses the Mistral-style [INST]...[/INST] template,
# with [IMG] marking the image position)
question = "What is the content of this image?"
inputs = {
    "prompt": f"<s>[INST]{question}\n[IMG][/INST]",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
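Once the model is served (for example with `vllm serve neuralmagic/pixtral-12b-quantized.w8a8`), it can be queried like any OpenAI-compatible endpoint. The helper below is an illustrative sketch rather than part of vLLM: `build_chat_request` is a hypothetical name, and it only assembles the standard chat-completions request body with an inline base64 image.

```python
import base64

def build_chat_request(question: str, image_bytes: bytes,
                       model: str = "neuralmagic/pixtral-12b-quantized.w8a8") -> dict:
    """Assemble an OpenAI-style chat-completions body with an inline image."""
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
        "max_tokens": 64,
        "temperature": 0.2,
    }

# The resulting dict can be POSTed to http://localhost:8000/v1/chat/completions
# (the default address of a local `vllm serve` instance).
body = build_chat_request("What is the content of this image?", b"<png bytes here>")
```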

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.

<details>
  <summary>Model Creation Code</summary>

```python
import torch
from transformers import AutoProcessor

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableLlavaForConditionalGeneration

# Load model.
model_id = "mgoin/pixtral-12b"
model = TraceableLlavaForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048


# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {
        "input_ids": torch.LongTensor(batch[0]["input_ids"]),
        "attention_mask": torch.tensor(batch[0]["attention_mask"]),
        "pixel_values": torch.tensor(batch[0]["pixel_values"]),
    }


# Recipe: quantize linear layers to W8A8, skipping the vision tower,
# the multimodal projector, and the LM head.
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["MistralDecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"

# Perform oneshot calibration and save the quantized model.
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=DATASET_ID,
    splits=DATASET_SPLIT,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
</details>
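The W8A8 recipe above quantizes both weights and activations to INT8. As a minimal illustration of what symmetric INT8 quantization does to a tensor, here is a round-to-nearest sketch with a per-tensor scale; this is a simplification for intuition only, not the GPTQ error-correcting solver used by the recipe:

```python
def int8_symmetric_quantize(values):
    """Round-to-nearest symmetric INT8 quantization with a per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    # Dequantize to see the (small) rounding error the scheme introduces.
    dequantized = [q * scale for q in quantized]
    return quantized, scale, dequantized

q, scale, dq = int8_symmetric_quantize([0.5, -1.27, 0.02])
```

GPTQ goes further than this sketch by adjusting the remaining weights of each layer to compensate for the rounding error, using the calibration data above.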

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

<details>
<summary>Evaluation Commands</summary>

```
```

</details>

### Accuracy

## Inference Performance

This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm).

<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/pixtral-12b-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```

</details>
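As a reading aid for the tables below: the per-workload speedup of a quantized model is simply the baseline latency divided by the quantized-model latency. (The "Average Cost Reduction" column additionally folds in per-GPU pricing, which is not reproduced here.) A sketch using the A6000 document visual question answering latencies from the single-stream table:

```python
# Per-workload speedup: baseline latency over quantized-model latency.
def speedup(baseline_latency_s: float, quantized_latency_s: float) -> float:
    return baseline_latency_s / quantized_latency_s

# A6000, document VQA workload: baseline 5.7 s vs. W8A8 3.7 s.
dvqa_speedup = speedup(5.7, 3.7)
```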

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
<th>Latency (s)</th>
<th>QPD</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>5.7</td>
<td>796</td>
<td>4.8</td>
<td>929</td>
<td>4.7</td>
<td>964</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w8a8</th>
<td>1.55</td>
<td>3.7</td>
<td>1220</td>
<td>3.1</td>
<td>1437</td>
<td>3.0</td>
<td>1511</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>2.16</td>
<td>3.2</td>
<td>1417</td>
<td>2.1</td>
<td>2093</td>
<td>1.9</td>
<td>2371</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>3.0</td>
<td>676</td>
<td>2.4</td>
<td>825</td>
<td>2.3</td>
<td>859</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w8a8</th>
<td>1.38</td>
<td>2.2</td>
<td>904</td>
<td>1.7</td>
<td>1159</td>
<td>1.7</td>
<td>1201</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>1.83</td>
<td>1.8</td>
<td>1096</td>
<td>1.3</td>
<td>1557</td>
<td>1.2</td>
<td>1702</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>1.8</td>
<td>595</td>
<td>1.5</td>
<td>732</td>
<td>1.4</td>
<td>764</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-FP8-Dynamic</th>
<td>1.35</td>
<td>1.4</td>
<td>767</td>
<td>1.1</td>
<td>1008</td>
<td>1.0</td>
<td>1056</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>1.37</td>
<td>1.4</td>
<td>787</td>
<td>1.1</td>
<td>1018</td>
<td>1.0</td>
<td>1065</td>
</tr>
</tbody>
</table>

### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
<th>Maximum throughput (QPS)</th>
<th>QPD</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>0.6</td>
<td>2632</td>
<td>0.9</td>
<td>4108</td>
<td>1.1</td>
<td>4774</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w8a8</th>
<td>1.50</td>
<td>0.9</td>
<td>3901</td>
<td>1.4</td>
<td>6160</td>
<td>1.6</td>
<td>7292</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>1.41</td>
<td>0.6</td>
<td>2890</td>
<td>1.3</td>
<td>5758</td>
<td>1.8</td>
<td>8312</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>1.1</td>
<td>2291</td>
<td>1.8</td>
<td>3670</td>
<td>2.1</td>
<td>4284</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w8a8</th>
<td>1.38</td>
<td>1.5</td>
<td>3096</td>
<td>2.5</td>
<td>5076</td>
<td>3.0</td>
<td>5965</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>1.40</td>
<td>1.4</td>
<td>2728</td>
<td>2.6</td>
<td>5133</td>
<td>3.5</td>
<td>6943</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>mgoin/pixtral-12b</th>
<td></td>
<td>2.6</td>
<td>2877</td>
<td>4.0</td>
<td>4372</td>
<td>4.7</td>
<td>5095</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-FP8-Dynamic</th>
<td>1.33</td>
<td>3.4</td>
<td>3753</td>
<td>5.4</td>
<td>5862</td>
<td>6.3</td>
<td>6917</td>
</tr>
<tr>
<th>neuralmagic/pixtral-12b-quantized.w4a16</th>
<td>1.22</td>
<td>2.8</td>
<td>3115</td>
<td>5.0</td>
<td>5511</td>
<td>6.2</td>
<td>6777</td>
</tr>
</tbody>
</table>