Llama-3.1-8B-Instruct-INT8-W8A8
This is an INT8 W8A8 quantized version of meta-llama/Llama-3.1-8B-Instruct created using llm-compressor.
Note: only the weights and activations are quantized. The KV cache is NOT quantized.
Quantization Details
- Quantization Method: INT8 W8A8 (Weight and Activation only)
- Weight Precision: INT8 (8-bit integer)
- Activation Precision: INT8 (8-bit integer)
- KV Cache: Not quantized (remains in original precision)
- Quantization Strategy: Per-tensor symmetric quantization, static (not dynamic); see the sketch after this list
- Observer: MinMax
- Ignored Layers: lm_head only
- Calibration Dataset: CNN/DailyMail
- Calibration Samples: 512
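For intuition, the settings above translate into the following behavior. This is a minimal illustrative sketch of static per-tensor symmetric INT8 (fake) quantization, not llm-compressor's actual implementation; the exact integer clipping range may differ slightly in compressed-tensors.

import torch

def fake_quant_per_tensor_symmetric(x: torch.Tensor) -> torch.Tensor:
    # One scale for the whole tensor, zero-point fixed at 0 (symmetric).
    # A MinMax observer derives this max from the calibration batches.
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.round(x / scale).clamp(-127, 127)   # INT8 codes
    return q * scale                              # dequantized ("fake-quantized") values

w = torch.randn(4096, 4096)
err = (w - fake_quant_per_tensor_symmetric(w)).abs().max()
print(f"max round-trip error: {err.item():.4f}")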
Model Size
- Original Model: ~16GB (BF16)
- Quantized Model: ~8.5GB (INT8 W8A8)
- Compression Ratio: ~1.9x
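A rough back-of-envelope check of these numbers, assuming ~8B parameters; components kept in higher precision (embeddings, the ignored lm_head, quantization scales) push the checkpoint up toward the reported ~8.5GB.

params = 8.0e9                      # ~8B parameters in Llama-3.1-8B
print(params * 2 / 1e9, "GB BF16")  # 2 bytes/param -> ~16 GB
print(params * 1 / 1e9, "GB INT8")  # 1 byte/param  -> ~8 GB for the quantized Linear weights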
Usage
Installation
pip install "vllm>=0.6.0"
With vLLM
from vllm import LLM, SamplingParams
# Load the INT8 W8A8 quantized model
llm = LLM(model="JongYeop/Llama-3.1-8B-Instruct-INT8-W8A8")
# Generate text
prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=100)
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
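The model can also be served through vLLM's OpenAI-compatible server; the flag value below is only an example, adjust it to your hardware:

vllm serve JongYeop/Llama-3.1-8B-Instruct-INT8-W8A8 --max-model-len 4096

and then queried with any OpenAI-compatible client:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="JongYeop/Llama-3.1-8B-Instruct-INT8-W8A8",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)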
With Transformers (for inspection)
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("JongYeop/Llama-3.1-8B-Instruct-INT8-W8A8")
model = AutoModelForCausalLM.from_pretrained(
    "JongYeop/Llama-3.1-8B-Instruct-INT8-W8A8",
    device_map="auto",
    torch_dtype="auto",
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
]
# Build the prompt with the Llama 3.1 chat template and append the assistant header
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
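Note: loading this compressed-tensors checkpoint with Transformers typically also requires the compressed-tensors package; this path is mainly useful for inspection rather than fast INT8 inference.

pip install compressed-tensors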
Performance
INT8 W8A8 quantization provides:
- ~2x memory reduction compared to BF16
- Faster inference with INT8-capable hardware (e.g., NVIDIA GPUs with compute capability >= 7.5)
- Typically minimal accuracy degradation with symmetric per-tensor quantization
- Wider hardware compatibility than FP8 (which requires compute capability >= 8.9, i.e. Ada Lovelace or newer)
Quantization Recipe
The quantization recipe used for this model is included in the repository as recipe.yaml.
Key configuration:
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      ignore: ["lm_head"]
      config_groups:
        group_0:
          weights:
            num_bits: 8
            type: int
            strategy: tensor    # Per-tensor quantization
            dynamic: false
            symmetric: true
          input_activations:
            num_bits: 8
            type: int
            strategy: tensor    # Per-tensor quantization
            dynamic: false
            symmetric: true
          targets: ["Linear"]
Hardware Requirements
- GPU: NVIDIA GPU with compute capability >= 7.5
- Examples: RTX 2080, RTX 3090, RTX 4090, A100, L40S, H100, H200
- VRAM: Minimum 10GB for inference
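A quick way to check whether your GPU meets the compute capability requirement (standard PyTorch API):

import torch

major, minor = torch.cuda.get_device_capability()
ok = (major, minor) >= (7, 5)
print(f"Compute capability {major}.{minor} ->",
      "supports INT8 W8A8" if ok else "below 7.5, not supported")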
Citation
If you use this model, please cite:
@software{llm-compressor,
title = {LLM Compressor},
author = {vLLM Team},
url = {https://github.com/vllm-project/llm-compressor},
year = {2024}
}
@article{llama3,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3}
}
License
This model inherits the Llama 3.1 Community License from the original meta-llama/Llama-3.1-8B-Instruct model.
Acknowledgments
- Original model: meta-llama/Llama-3.1-8B-Instruct
- Quantization tool: llm-compressor by vLLM team
- Quantization guide: vLLM INT8 W8A8 Documentation