# Qwen3.5-9B-quantized.w4a16
This is a quantized version of Qwen/Qwen3.5-9B. The model accepts text and images as inputs and generates text as output. Weights were quantized to 4-bit integers (W4A16) with GPTQ via llm-compressor, using 512 calibration samples from nvidia/Nemotron-Post-Training-Dataset-v2. Quantization reduces the checkpoint size from 18.0 GB to 10.7 GB (~1.7x reduction) while remaining effectively lossless (>100% average accuracy recovery on the benchmarks below).
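The size reduction quoted above follows directly from the two checkpoint sizes:

```python
# Reproduce the ~1.7x size reduction quoted above.
original_gb = 18.0   # Qwen/Qwen3.5-9B checkpoint
quantized_gb = 10.7  # W4A16 checkpoint
ratio = original_gb / quantized_gb
print(f"{ratio:.2f}x")  # ≈ 1.68x, quoted as ~1.7x
```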
## Quantization Details
- Scheme: W4A16
- Calibration: 512 samples (256 reasoning-on + 256 reasoning-off) from Nemotron-Post-Training-Dataset-v2
- Max sequence length: 4096
- `dampening_frac`: 0.01
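The parameters above correspond to a one-shot GPTQ run with llm-compressor along the following lines. This is a sketch, not the exact script used: import paths and argument names vary across llm-compressor versions, and the `ignore=["lm_head"]` choice is a common default, not confirmed by this card.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# W4A16 GPTQ recipe matching the parameters listed above.
recipe = GPTQModifier(
    targets="Linear",      # quantize Linear layers...
    ignore=["lm_head"],    # ...except the output head (assumed, common default)
    scheme="W4A16",        # 4-bit integer weights, 16-bit activations
    dampening_frac=0.01,
)

# One-shot calibration; dataset preprocessing (selecting the 256 reasoning-on
# + 256 reasoning-off samples) is omitted here.
oneshot(
    model="Qwen/Qwen3.5-9B",
    dataset="nvidia/Nemotron-Post-Training-Dataset-v2",
    recipe=recipe,
    max_seq_length=4096,
    num_calibration_samples=512,
)
```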
## Inference
This model is supported in vLLM 0.17.0. To serve the model:

```shell
vllm serve Kbenkhaled/Qwen3.5-9B-quantized.w4a16 \
  --reasoning-parser qwen3 \
  --enable-prefix-caching
```
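The server exposes an OpenAI-compatible API. A minimal client sketch using only the standard library; the helper names are hypothetical, and the `localhost:8000` endpoint assumes vLLM's default port:

```python
import json
import urllib.request

MODEL = "Kbenkhaled/Qwen3.5-9B-quantized.w4a16"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(body: dict, url: str = "http://localhost:8000/v1/chat/completions") -> dict:
    """POST the payload to the running vLLM server and return the parsed reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Usage (with the server from the command above running): `send(build_chat_request("What is GPTQ?"))["choices"][0]["message"]["content"]`.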
## Evaluation
Evaluated with lm-evaluation-harness, 0-shot, thinking mode ON.
| Benchmark | Qwen3.5-9B | Qwen3.5-9B-quantized.w4a16 (this model) | Recovery |
|---|---|---|---|
| GPQA Diamond | 78.79% | 80.30% | 101.9% |
| IFEval | 94.48% | 94.12% | 99.6% |
| MMLU-Redux | 91.80% | 91.58% | 99.8% |
| Average | 88.36% | 88.67% | 100.3% |
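Recovery in the table is the quantized score divided by the baseline score. The numbers can be reproduced from the per-benchmark scores:

```python
# Per-benchmark scores from the table above (percent).
baseline  = {"GPQA Diamond": 78.79, "IFEval": 94.48, "MMLU-Redux": 91.80}
quantized = {"GPQA Diamond": 80.30, "IFEval": 94.12, "MMLU-Redux": 91.58}

# Recovery = quantized / baseline, expressed as a percentage.
recovery = {k: quantized[k] / baseline[k] * 100 for k in baseline}

base_avg  = sum(baseline.values()) / len(baseline)    # ≈ 88.36
quant_avg = sum(quantized.values()) / len(quantized)  # ≈ 88.67
avg_recovery = quant_avg / base_avg * 100             # ≈ 100.35, i.e. no net loss
```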