Paper: [GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers](https://arxiv.org/abs/2210.17323)
"stop_token_ids":[128001, 128009] to temporarily address the non-stop generation issue.generation_config.json.no_inject_fused_attention enabled. This is a bug with AutoGPTQ library.Parameters -> Generation -> Skip special tokens: turn this off (deselect)Parameters -> Generation -> Custom stopping strings: add "<|end_of_text|>","<|eot_id|>" to the fieldThis repo contains 4 Bit quantized GPTQ model files for meta-llama/Meta-Llama-3-8B-Instruct.
This model can be loaded with less than 6 GB of VRAM (a large reduction from the original 16.07 GB model) and can be served lightning fast on the cheapest widely available Nvidia GPUs (Nvidia T4, Nvidia K80, RTX 4070, etc.).
The 4-bit GPTQ quant shows only small quality degradation relative to the original bfloat16 model, but can be served on much smaller GPUs with large improvements in latency and throughput.
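As a quick start, here is a minimal loading sketch. It assumes a recent `transformers` with the GPTQ backend (`optimum` and `auto-gptq`) installed; the prompt text is just an example.

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint with transformers.
# Assumes: pip install transformers optimum auto-gptq
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place layers on the available GPU(s)
    torch_dtype=torch.float16,  # GPTQ kernels run with fp16 activations
)

inputs = tokenizer("Who created Llama 3?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```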
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
|---|---|---|---|---|---|---|---|---|---|
| main | 4 | 128 | Yes | 0.1 | wikitext | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional 4-bit GPTQ variants using different parameters (e.g., other group sizes) may be uploaded in the future. |
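For reference, the `main` branch settings in the table map onto quantization parameters roughly as in the sketch below. This is an illustrative sketch using the `GPTQConfig` API from `transformers`, not the exact recipe used to produce this repo; the calibration dataset identifier is an assumption.

```python
# Illustrative sketch: quantization settings matching the table above.
# Not the exact recipe for this repo; the dataset identifier is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)

gptq_config = GPTQConfig(
    bits=4,               # Bits
    group_size=128,       # Group Size
    desc_act=True,        # Act Order
    damp_percent=0.1,     # Damp %
    dataset="wikitext2",  # GPTQ calibration dataset (assumed identifier)
    tokenizer=tokenizer,
)

# Quantizing this way needs enough GPU memory to hold the fp16 base model.
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", quantization_config=gptq_config
)
```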
Tested serving this model via vLLM on an Nvidia T4 (16 GB VRAM), using the command below:

```shell
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16
```
For the non-stop token generation bug, make sure to send requests with `"stop_token_ids":[128001, 128009]` to the vLLM endpoint.
Example:
{
"model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who created Llama 3?"}
],
"max_tokens": 2000,
"stop_token_ids":[128001,128009]
}
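Equivalently, the extra field can be passed through the OpenAI-compatible Python client; a small sketch, assuming the `openai` package and the vLLM server from the command above running on localhost:8000 (base URL and API key value are placeholders):

```python
# Sketch: query the vLLM OpenAI-compatible endpoint with stop_token_ids.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    max_tokens=2000,
    # Non-standard field; the openai client forwards it via extra_body.
    extra_body={"stop_token_ids": [128001, 128009]},
)
print(response.choices[0].message.content)
```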
Prompt template:

```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{{prompt}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
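Rather than assembling this string by hand, the tokenizer can render it from a message list; a small sketch using `apply_chat_template` from `transformers` (the tokenizer ships the Llama 3 chat template):

```python
# Sketch: produce the prompt above from a message list via the chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit")

messages = [{"role": "user", "content": "Who created Llama 3?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string
    add_generation_prompt=True,  # append the assistant header so the model replies
)
print(prompt)
```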
Base model: meta-llama/Meta-Llama-3-8B-Instruct