Quantized Qwen3.5
This is Qwen/Qwen3.5-27B quantized to NVFP4 with llm-compressor. Compatible with Transformers v5.3 and vLLM (tested: v0.16.1rc1 nightly) on an H200. Currently under evaluation.
uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
uv pip install git+https://github.com/huggingface/transformers.git
vllm serve [this model ID] --max-model-len 262144 --reasoning-parser qwen3
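Once the server is running, it exposes vLLM's OpenAI-compatible API (by default at `http://localhost:8000/v1` — adjust host/port to your deployment). A minimal sketch of a chat-completion request using only the standard library; the model name is a placeholder you should replace with the actual model ID from the serve command:

```python
import json
import urllib.request

# Assumptions: vLLM's default local endpoint, and a placeholder model name.
BASE_URL = "http://localhost:8000/v1"
MODEL_ID = "<this model ID>"  # substitute the real model ID used in `vllm serve`

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload for the vLLM server."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(payload: dict) -> dict:
    """POST the payload to the server's /chat/completions route."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a request; call send(payload) once the server is up.
payload = build_chat_request("Give me a one-sentence summary of NVFP4.")
```

With `--reasoning-parser qwen3`, the server separates the model's reasoning trace from the final answer in the response, so the `message.content` field of each choice holds only the final reply.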
Thanks to Verda for providing the compute; I used their H200s. Verda is a European, AI-focused cloud and GPU infrastructure provider with sovereignty, sustainability, data privacy, and performance at its core. Check them out if you're interested.
Base model
Qwen/Qwen3.5-27B