---
license: apache-2.0
base_model:
  - Qwen/Qwen3.5-27B
tags:
  - llm-compressor
---

# Qwen3.5-27B-NVFP4

This is Qwen/Qwen3.5-27B quantized to NVFP4 with llm-compressor. The model is compatible with vLLM (tested with v0.16.1rc1 on an H200 GPU). Evaluation results are not yet available.
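For reference, NVFP4 quantization with llm-compressor typically follows a recipe along these lines. This is an illustrative sketch, not the author's exact script: the calibration dataset, sample count, and sequence length are assumptions, and the actual run requires substantial GPU memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3.5-27B"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize all Linear layers to NVFP4 (FP4 weights with FP8 block scales);
# the lm_head is typically kept in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# NVFP4 needs a short calibration pass to fit the global scales.
# Dataset and sample counts below are assumed, not the author's settings.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("Qwen3.5-27B-NVFP4")
tokenizer.save_pretrained("Qwen3.5-27B-NVFP4")
```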

## Instructions

```shell
uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
uv pip install git+https://github.com/huggingface/transformers.git
vllm serve [this model ID] --max-model-len 262144 --reasoning-parser qwen3
```
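Once the server is running, it exposes an OpenAI-compatible API (on port 8000 by default). A minimal query sketch using only the Python standard library; the base URL is an assumption, and `[this model ID]` is a placeholder to replace with the actual repo ID:

```python
import json
import urllib.request

MODEL_ID = "[this model ID]"  # placeholder: replace with the actual repo ID


def build_request(prompt: str, base_url: str = "http://localhost:8000/v1"):
    """Build a POST request for vLLM's OpenAI-compatible chat endpoint."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With the server from `vllm serve` running:
# resp = urllib.request.urlopen(build_request("Hello!"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```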

## Acknowledgments

Thank you to Verda for providing the compute for this work; I used their H200s. Verda is a European, AI-focused cloud and GPU infrastructure provider with sovereignty, sustainability, data privacy, and performance at its core. Check them out if you're interested.