# Qwen3.6-27B INT4 AutoRound — Code Calibrated (Best Recipe)
A W4A16 (INT4 weight, FP16 activation) quantization of Qwen/Qwen3.6-27B, produced with Intel's AutoRound.
Key difference from the standard AutoRound quant: This variant was calibrated on a normalized and sampled subset of nvidia/OpenCodeInstruct — a ~5 M sample, execution-verified coding dataset — instead of the default general-purpose pile corpus. Calibrating on domain-specific data guides AutoRound's weight-rounding optimization to minimize quantization error on the token distributions that matter most for code, improving accuracy on code generation, reasoning, and instruction-following for programming tasks.

The `auto-round-best` preset was used (1000 iterations, 512 calibration samples), which runs ~4–5× slower than the standard recipe but achieves the best possible INT4 accuracy. MTP (speculative decoding) and image/vision inputs work out of the box with no post-processing required.
## TL;DR
- Base: Qwen3.6-27B (27B dense VLM)
- Quant: INT4 W4A16, group_size 128, symmetric
- Tool: `auto-round-best` (1000 iters, 512 samples, torch.compile)
- Calibration dataset: `nvidia/OpenCodeInstruct` (coding domain)
- Size: ~18 GB (down from ~54 GB BF16) — 3× reduction
- MTP: Native Multi-Token Prediction head preserved in BF16 — enables native speculative decoding in vLLM (~85–90% draft acceptance, ~2× throughput)
- Vision: Image inputs work via the MoonViT encoder (weights kept at original BF16/FP16 precision)
## Why code calibration?
AutoRound's algorithm optimizes weight rounding by minimizing the difference between the quantized model's outputs and the full-precision model's outputs on a set of calibration samples. The calibration dataset therefore shapes which activations and weight patterns are prioritized during optimization.
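To make that mechanism concrete, the sketch below illustrates the kind of per-block objective such calibration minimizes. It is not AutoRound's actual code; `fp_block`, `quant_block`, and `calib_inputs` are placeholder names for a full-precision transformer block, its INT4-rounded counterpart, and a batch of calibration activations.

```python
import torch

# Illustrative sketch only (not AutoRound's implementation): the rounding of the
# INT4 weights is tuned so that the quantized block's outputs stay close to the
# full-precision block's outputs on the calibration activations.
def calibration_loss(fp_block, quant_block, calib_inputs):
    with torch.no_grad():
        reference = fp_block(calib_inputs)   # full-precision block outputs
    approx = quant_block(calib_inputs)       # outputs with rounded INT4 weights
    # When the calibration batch is code-heavy, the error being minimized is
    # measured on code-style token activations.
    return torch.nn.functional.mse_loss(approx, reference)
```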
Using a normalized and sampled subset of nvidia/OpenCodeInstruct — a large, execution-verified dataset of coding problems and solutions — means the rounding decisions are tuned for code-style token distributions: identifiers, keywords, indentation patterns, and structured reasoning. In practice this tends to:
- Better preserve accuracy on code generation benchmarks relative to a pile-calibrated quant
- Improve instruction following for programming tasks (function signatures, docstrings, tool calls)
- Retain structured output quality (JSON, markdown code blocks, structured diffs)
If your primary use-case is code generation or an AI coding assistant, this variant is the recommended choice. For general-purpose or multimodal usage, see the standard Qwen3.6-27B-int4-AutoRound quant.
## Quick inference with vLLM (with MTP speculative decoding)
Requires vLLM v0.19.1+ with Qwen3_5 MTP support. Set the following environment variables before starting:
```bash
export VLLM_USE_FLASHINFER_SAMPLER=1
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
export VLLM_FLOAT32_MATMUL_PRECISION=high
export PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True,max_split_size_mb:512"
export VLLM_NO_USAGE_STATS=1
export VLLM_MEMORY_PROFILER_ESTIMATE_CUDAGRAPHS=1
export VLLM_MARLIN_USE_ATOMIC_ADD=1
export OMP_NUM_THREADS=1
export CUDA_DEVICE_MAX_CONNECTIONS=8
export NCCL_CUMEM_ENABLE=0
export NCCL_P2P_DISABLE=1
```
```bash
vllm serve webhie/Qwen3.6-27B-int4-AutoRound-Code \
  --served-model-name qwen3.6-27b \
  --host 0.0.0.0 --port 11434 \
  --trust-remote-code \
  --dtype auto \
  --quantization auto_round \
  --max-model-len 200704 \
  --gpu-memory-utilization 0.92 \
  --max-num-seqs 4 \
  --kv-cache-dtype fp8_e4m3 \
  --attention-backend flashinfer \
  --performance-mode throughput \
  --max-num-batched-tokens 2048 \
  --enable-chunked-prefill \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --default-chat-template-kwargs '{"preserve_thinking":true}' \
  --override-generation-config '{"temperature":0.6,"top_p":0.95,"top_k":20,"min_p":0.0,"presence_penalty":0.0,"repetition_penalty":1.0}' \
  --enable-prompt-tokens-details \
  --speculative-config '{"method":"mtp","num_speculative_tokens":3}'
```
Remove `--speculative-config` to disable MTP speculative decoding. See the vllm-blackwell-guide repo for a full Docker Compose setup with all env vars pre-configured.
### OpenAI-compatible request
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="EMPTY")
r = client.chat.completions.create(
    model="qwen3.6-27b",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=512,
)
print(r.choices[0].message.content)
```
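Since the server is started with `--enable-auto-tool-choice` and `--tool-call-parser qwen3_coder`, tool calling works through the same endpoint. The example below is a sketch; the `run_tests` tool schema is made up purely to illustrate the request shape.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="EMPTY")

# Hypothetical tool definition, only to show how a tool-calling request looks.
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

r = client.chat.completions.create(
    model="qwen3.6-27b",
    messages=[{"role": "user", "content": "Run the tests under ./tests and summarize any failures."}],
    tools=tools,
    tool_choice="auto",
)
print(r.choices[0].message.tool_calls)
```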
## Transformers (no spec decoding)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

m = AutoModelForCausalLM.from_pretrained(
    "webhie/Qwen3.6-27B-int4-AutoRound-Code",
    trust_remote_code=True,
    device_map="auto",
)
tok = AutoTokenizer.from_pretrained("webhie/Qwen3.6-27B-int4-AutoRound-Code")

msg = [{"role": "user", "content": "Write a binary search in Python."}]
ids = tok.apply_chat_template(msg, add_generation_prompt=True, return_tensors="pt").to(m.device)
print(tok.decode(m.generate(ids, max_new_tokens=256)[0]))
```
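If you want local generation to mirror the sampling defaults configured on the vLLM server above (temperature 0.6, top_p 0.95, top_k 20), pass them explicitly. This continues the snippet above; the values simply copy the server-side generation config and are not a requirement.

```python
# Match the sampling defaults from the vLLM config above; adjust as needed.
out = m.generate(
    ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```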
## Quantization details
| Field | Value |
|---|---|
| Base | Qwen/Qwen3.6-27B |
| Method | AutoRound (intel/auto-round), best recipe |
| Scheme | W4A16 (4-bit weights, FP16 activations) |
| Bits | 4 |
| Group size | 128 |
| Symmetric | yes |
| Packing format | auto_round:auto_gptq |
| Unquantized layers | linear_attn.in_proj_a/b, all LayerNorms, RMSNorms, router gates |
| Calibration dataset | Normalized & sampled subset of nvidia/OpenCodeInstruct |
| Calibration samples | 512 |
| Iterations | 1000 |
| torch.compile | enabled |
| GPU used for quant | 1× RTX 5090 (32 GB, SM120), low_gpu_mem_usage=True |
### Unquantized layers — why
- `linear_attn.in_proj_a/b`: low-rank projections in Qwen3.6's Gated DeltaNet whose shapes aren't evenly divisible by the quantization group size, so AutoRound skips them automatically. They make up a tiny fraction of total parameters.
- Norms, routers: precision-sensitive and very small — kept at full precision.
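If you want to verify which modules were left unquantized in your copy of the checkpoint, you can inspect the quantization config embedded in `config.json`. The exact key names depend on the AutoRound/GPTQ export format, so treat this as a sketch rather than a guaranteed schema.

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "webhie/Qwen3.6-27B-int4-AutoRound-Code", trust_remote_code=True
)
# Prints bits, group size, symmetry, packing format, and any skipped modules
# recorded by the export (key names vary between AutoRound versions).
print(getattr(cfg, "quantization_config", None))
```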
## Performance
Benchmarked on 1× RTX 5090 (32 GB) with vLLM + FP8 KV cache + MTP n=3:
| Config | Throughput |
|---|---|
| vLLM + MTP n=3 | ~150 tok/s |
| vLLM (MTP disabled) | ~70 tok/s |
The ~2× speedup comes from ~85–90% draft acceptance via MTP speculative decoding with num_speculative_tokens: 3.
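To sanity-check throughput on your own hardware, a rough single-request wall-clock measurement over the OpenAI-compatible endpoint is enough. This is a sketch, not the benchmark script used for the table above; single-stream numbers will differ from batched throughput.

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="EMPTY")

start = time.perf_counter()
r = client.chat.completions.create(
    model="qwen3.6-27b",
    messages=[{"role": "user", "content": "Implement an LRU cache in Python with tests."}],
    max_tokens=1024,
)
elapsed = time.perf_counter() - start

# vLLM reports token usage for non-streaming requests.
completion_tokens = r.usage.completion_tokens
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```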
## Reproduction
```bash
pip install auto-round

# The calibration data was first normalized and sampled from nvidia/OpenCodeInstruct
# (formatting cleaned, deduplicated, balanced across domains) and exported as a
# local JSON file before quantization. Pass your own prepared subset with:
#   --dataset ./subset_10k.json

auto-round-best \
  --model Qwen/Qwen3.6-27B \
  --scheme W4A16 \
  --format auto_round \
  --output_dir Qwen3.6-27B-int4-AutoRound-Code \
  --enable_torch_compile \
  --low_gpu_mem_usage \
  --device_map 0
```
No post-processing needed — MTP and image inputs work out of the box.
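The exact normalization/sampling script is not included here; the sketch below shows one plausible way to build such a subset with the `datasets` library. The field names (`input`, `output`) are assumptions about the OpenCodeInstruct schema and may need adjusting, as does the target subset size.

```python
import json
import random
from datasets import load_dataset

# Hypothetical preparation script: normalize, dedupe, and sample ~10k examples
# from nvidia/OpenCodeInstruct, exported as a local JSON file for --dataset.
ds = load_dataset("nvidia/OpenCodeInstruct", split="train", streaming=True)

seen, samples = set(), []
for row in ds:
    # "input"/"output" are assumed column names; check the dataset card.
    text = f"{row.get('input', '')}\n{row.get('output', '')}".strip()
    if not text or text in seen:
        continue
    seen.add(text)
    samples.append({"text": text})
    if len(samples) >= 100_000:  # over-collect, then subsample uniformly
        break

random.seed(0)
subset = random.sample(samples, 10_000)
with open("subset_10k.json", "w") as f:
    json.dump(subset, f)
```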
## Acknowledgements
- Alibaba / Qwen team for the base Qwen3.6-27B model
- Intel AutoRound team for the quantization framework and the `auto-round-best` recipe
- NVIDIA for the OpenCodeInstruct calibration dataset — ~5 M execution-verified coding samples used to domain-adapt this quant
- Lorbus for the original AutoRound quant of this model that inspired this release
- @eugr for the spark-vllm-docker fork and TurboQuant KV cache work
- vLLM project for the inference engine and Qwen3_5 MTP support
## License
Apache 2.0 — same as Qwen3.6-27B base.
## Citation
If you use this quant, please cite the original Qwen3.6 release (see base model card), the AutoRound paper, and the OpenCodeInstruct dataset:
```bibtex
@article{cheng2023autoround,
  title   = {Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author  = {Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal = {arXiv preprint arXiv:2309.05516},
  year    = {2023}
}

@misc{nvidia2025opencode,
  title  = {OpenCodeInstruct: A Large-scale Instruction Tuning Dataset for Code LLMs},
  author = {NVIDIA},
  year   = {2025},
  url    = {https://huggingface.co/datasets/nvidia/OpenCodeInstruct}
}
```