# Qwen2.5-72B-Instruct-NVFP4

NVFP4-quantized version of Qwen/Qwen2.5-72B-Instruct, produced by Enfuse.

## Model Overview
| Attribute | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-72B-Instruct |
| Parameters | 72.7B |
| Architecture | Dense Transformer (GQA, 64 attn heads, 8 KV heads) |
| Quantization | NVFP4 (W4A4 with FP4 weights and dynamic FP4 activations) |
| Format | compressed-tensors (safetensors) |
| Precision | FP4 weights (group_size=16), FP8 scales, lm_head unquantized |
| Approx. Size | ~42 GB (down from ~145 GB in BF16) |
| Context Length | 32,768 tokens |
| License | Qwen License |
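The ~42 GB figure in the table can be sanity-checked with back-of-envelope arithmetic: 4-bit weights plus one FP8 scale per 16-weight group, plus the unquantized `lm_head`. The sketch below is illustrative only; the vocabulary size (152,064) and hidden size (8,192) are assumed Qwen2.5-72B dimensions, and real checkpoints carry additional metadata and embedding overhead.

```python
# Rough NVFP4 checkpoint size estimate (all numbers approximate).
params = 72.7e9      # total parameters
group_size = 16      # one FP8 (1-byte) scale per 16 weights

weights_gb = params * 4 / 8 / 1e9            # 4-bit FP4 weights
scales_gb = params / group_size / 1e9        # 1 byte per group scale
# lm_head kept in BF16 (2 bytes/param); vocab and hidden dims are assumptions.
lm_head_gb = 152_064 * 8_192 * 2 / 1e9

total_gb = weights_gb + scales_gb + lm_head_gb
print(round(total_gb, 1))  # lands in the ~42-43 GB ballpark
```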
## How to Use

### vLLM (recommended)

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "enfuse/Qwen2.5-72B-Instruct-NVFP4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=2)
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in simple terms."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

## Hardware Requirements
- Full NVFP4 (W4A4): Requires NVIDIA Blackwell GPU (B200, GB200, RTX 5090) for native FP4 tensor core support
- Weight-only FP4: Older GPUs (H100, A100) can load the model but will only apply weight quantization, not activation quantization
- Recommended: 2x B200 with tensor parallelism for optimal throughput
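Whether you get native W4A4 or the weight-only fallback comes down to the GPU's compute capability. A minimal sketch of that check, assuming Blackwell reports compute capability 10.x (B200/GB200) or 12.x (RTX 5090) while Hopper reports 9.x and Ampere 8.x:

```python
def supports_native_fp4(major: int, minor: int) -> bool:
    """Heuristic check for FP4 tensor-core support.

    Assumes Blackwell-class GPUs report compute capability >= 10.0;
    Hopper (9.x) and Ampere (8.x) fall back to weight-only FP4 decode.
    """
    return major >= 10

# On a machine with PyTorch installed, the capability of the active GPU
# can be queried with: torch.cuda.get_device_capability()
print(supports_native_fp4(10, 0))  # B200  -> True
print(supports_native_fp4(9, 0))   # H100  -> False
```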
## Quantization Details
This model was quantized using LLM Compressor (v0.10.0) with the NVFP4 scheme:
- Method: Post-training quantization (PTQ) with calibration
- Calibration data: 512 samples from HuggingFaceH4/ultrachat_200k (`train_sft` split)
- Sequence length: 2048 tokens
- Scheme: NVFP4 -- FP4 weights with per-group (group_size=16) local scales in FP8, dynamic FP4 activations at inference
- Excluded layers: `lm_head` (kept in original precision)
```python
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],
)
```
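To build intuition for what the scheme does to each weight group, here is a minimal pure-Python sketch of per-group FP4 (E2M1) quantization. It is illustrative only: real NVFP4 kernels also quantize the per-group scales to FP8, pack two values per byte, and run on tensor cores, all of which is omitted here.

```python
# Representable E2M1 (FP4) magnitudes; sign is a separate bit.
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_group(weights, group_size=16):
    """Fake-quantize a flat weight list with one scale per group.

    Each group's scale maps its max magnitude onto 6.0 (the largest
    E2M1 value); every weight is then snapped to the nearest code.
    """
    out = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(w) for w in group) / 6.0 or 1.0  # avoid div-by-zero
        for w in group:
            mag = min(E2M1, key=lambda v: abs(abs(w) / scale - v))
            out.append((mag if w >= 0 else -mag) * scale)
    return out

weights = [0.03, -0.11, 0.25, 0.07, -0.30, 0.18, 0.02, -0.09,
           0.14, 0.01, -0.22, 0.05, 0.28, -0.16, 0.10, -0.04]
print(quantize_group(weights))
```

With group max 0.30, the scale is 0.05, so every dequantized weight lands within 0.05 of the original; smaller-magnitude weights land much closer because E2M1 codes are denser near zero.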
## Infrastructure
Quantization was performed on an NVIDIA DGX B200 system:
- 8x NVIDIA B200 GPUs (183 GB HBM3e each)
- 2x Intel Xeon Platinum 8570 (224 threads)
- 2 TiB system RAM
- Ubuntu 24.04 LTS, CUDA 13.0, Driver 580.126.09
## Evaluation

### OpenLLM v1 Benchmarks
Evaluated using lm-evaluation-harness with vLLM backend, following the RedHatAI NVFP4 evaluation methodology.
Evaluated with --apply_chat_template --fewshot_as_multiturn flags, tensor_parallel_size=2 on NVIDIA B200 GPUs.
| Benchmark | Metric | n-shot | NVFP4 | BF16 Reference |
|---|---|---|---|---|
| ARC-Challenge | acc_norm | 25 | 70.31 | — |
| GSM8K | exact_match | 5 | 79.53 | 95.8¹ |
| HellaSwag | acc_norm | 10 | 79.75 | — |
| MMLU | acc | 5 | 83.68 | — |
| TruthfulQA MC2 | acc | 0 | 68.68 | — |
| Winogrande | acc | 5 | 71.19 | — |
¹ Qwen blog reference scores use different eval settings; direct comparison requires identical configurations.
### Qwen2.5-72B-Instruct BF16 Reference Scores
Official scores from the Qwen2.5 blog (different eval methodology):
| Benchmark | BF16 Score |
|---|---|
| MMLU-Pro | 71.1 |
| GSM8K | 95.8 |
| MATH | 83.1 |
| HumanEval | 86.6 |
| IFEval (strict-prompt) | 84.1 |
| GPQA | 49.0 |
## About Enfuse
Enfuse builds sovereign AI infrastructure for regulated enterprises. The Enfuse platform provides on-prem LLM orchestration and an App Factory for shipping governed, compliant AI applications on your own infrastructure.
This quantization is part of our ongoing work to make large language models practical for on-premise deployment, where memory footprint directly determines which models organizations can run in their own data centers.
## Acknowledgments
- Qwen Team for the base Qwen2.5-72B-Instruct model
- vLLM Project for LLM Compressor
- NVIDIA for the NVFP4 quantization format and Blackwell hardware