QuantFunc

Logo

🤗 Hugging Face | 🤖 ModelScope | 💬 WeChat (微信) | 🎮 Discord

Introduction

We are excited to share our latest model series: nunchaku-quantized versions of qwen-image-2512. These models are carefully optimized to maintain high-quality output while significantly improving inference speed and efficiency. All models are fully compatible with the nunchaku ComfyUI plugin, including its LoRA support, and can be used directly in ComfyUI.

Gallery

(Gallery images: Results 1–6)

Model Checkpoints

| Name | low_rank | Notes |
|------|----------|-------|
| nunchaku_qwen_image_2512_best_quality_fp4 | 256 | Best-quality model, for scenarios with extremely high quality requirements |
| nunchaku_qwen_image_2512_best_quality_int4 | 256 | Best-quality model, for scenarios with extremely high quality requirements |
| nunchaku_qwen_image_2512_ultimate_speed_int4 | 32 | Ultimate-speed model, prioritizing inference speed |
| nunchaku_qwen_image_2512_ultimate_speed_fp4 | 32 | Ultimate-speed model, prioritizing inference speed |
| nunchaku_qwen_image_2512_balance_int4 | 128 | Balanced model, trading off quality and speed |
| nunchaku_qwen_image_2512_balance_fp4 | 128 | Balanced model, trading off quality and speed |
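The checkpoint names above follow a fixed pattern: a shared prefix, a quality/speed profile, and a quantization format. As a small illustration, here is a hypothetical helper (not part of the nunchaku API) that assembles a checkpoint name from the table; the note that fp4 targets Blackwell-class GPUs while int4 targets earlier architectures is an assumption based on nunchaku's usual convention, not something stated in this card.

```python
# low_rank values per profile, taken from the checkpoint table above.
LOW_RANK = {"best_quality": 256, "balance": 128, "ultimate_speed": 32}

def pick_checkpoint(profile: str, fmt: str) -> str:
    """Build a checkpoint name from the table's naming pattern.

    profile: 'best_quality', 'balance', or 'ultimate_speed'.
    fmt: 'fp4' (assumed to target NVFP4 on Blackwell-class GPUs)
         or 'int4' (assumed for earlier architectures).
    """
    if profile not in LOW_RANK:
        raise ValueError(f"unknown profile: {profile}")
    if fmt not in ("fp4", "int4"):
        raise ValueError(f"unknown format: {fmt}")
    return f"nunchaku_qwen_image_2512_{profile}_{fmt}"

print(pick_checkpoint("balance", "int4"))  # nunchaku_qwen_image_2512_balance_int4
print(LOW_RANK["balance"])                 # 128
```

This is only a naming sketch; in practice you would download the corresponding `.safetensors` file from this repository and load it through the nunchaku ComfyUI plugin.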

4-Step Workflow

Here's an example workflow that integrates the 4-step LoRA in ComfyUI. If you don't need the 4-step LoRA, simply remove the LoRA node.

Coming Soon

We will add support for built-in LoRA and qwen-image-edit-2511 in approximately one month.

Community

If you encounter any issues during use, feel free to join our community for feedback:

  • Join our Discord server
  • Scan the QR code below to join our WeChat group

WeChat Group