Qwen3.5-27B-quantized.w4a16

This is a quantized version of Qwen/Qwen3.5-27B. This model accepts text and images as inputs and generates text as outputs. The weights were quantized to INT4 using GPTQ via llm-compressor with 512 calibration samples from HuggingFaceH4/ultrachat_200k, reducing the model size from 51.8 GB to 17.3 GB (~3.0x reduction) while maintaining 100.3% average accuracy recovery.


Inference

As of February 27, 2026, this model is supported in vLLM nightly. To serve the model:

vllm serve Kbenkhaled/Qwen3.5-27B-quantized.w4a16 \
    --reasoning-parser qwen3 \
    --enable-prefix-caching
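Once the server is up, vLLM exposes an OpenAI-compatible API, by default at http://localhost:8000/v1. A minimal sketch of a chat-completions request body for this model; the prompt and max_tokens value are illustrative only:

```python
import json

# Request body for vLLM's OpenAI-compatible chat endpoint
# (default URL: http://localhost:8000/v1/chat/completions).
payload = {
    "model": "Kbenkhaled/Qwen3.5-27B-quantized.w4a16",
    "messages": [
        {"role": "user", "content": "Give a one-sentence summary of GPTQ quantization."}
    ],
    "max_tokens": 256,
}

body = json.dumps(payload, indent=2)
print(body)
# Send it with any HTTP client, e.g.:
#   curl http://localhost:8000/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$BODY"
```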

Evaluation

Evaluated with lm-evaluation-harness (0-shot, thinking mode on).

Benchmark      Qwen3.5-27B   Qwen3.5-27B-quantized.w4a16 (this model)   Recovery
GPQA Diamond   80.30%        80.81%                                     100.6%
IFEval         95.08%        95.20%                                     100.1%
MMLU-Redux     93.90%        94.13%                                     100.2%
Average        89.76%        90.05%                                     100.3%
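Recovery is the quantized score divided by the baseline score, and the Average row averages the three benchmark scores before taking the ratio. The arithmetic can be checked directly:

```python
# (baseline, quantized) scores in percent, copied from the table above.
scores = {
    "GPQA Diamond": (80.30, 80.81),
    "IFEval": (95.08, 95.20),
    "MMLU-Redux": (93.90, 94.13),
}

# Per-benchmark recovery: quantized / baseline.
for name, (base, quant) in scores.items():
    print(f"{name}: {100.0 * quant / base:.1f}% recovery")

# Average recovery: ratio of the averaged scores.
base_avg = sum(b for b, _ in scores.values()) / len(scores)
quant_avg = sum(q for _, q in scores.values()) / len(scores)
avg_recovery = 100.0 * quant_avg / base_avg
print(f"Average: {base_avg:.2f}% -> {quant_avg:.2f}% ({avg_recovery:.1f}% recovery)")
# -> Average: 89.76% -> 90.05% (100.3% recovery)
```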