bitsandbytes 4-bit quantization of the Qwen-Image-Layered transformer model. The original model can be found here.
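A minimal loading sketch (not taken from this repo's files): it assumes a recent `diffusers` with `bitsandbytes` support installed, that this checkpoint loads through the `QwenImageTransformer2DModel` class, and the repo id below is a placeholder for this repository's actual id on the Hub.

```python
import torch
from diffusers import QwenImageTransformer2DModel

# The checkpoint is already quantized with bitsandbytes 4-bit, so the
# quantization settings stored in the checkpoint are applied at load time.
transformer = QwenImageTransformer2DModel.from_pretrained(
    "<repo-id>",  # placeholder: replace with this repository's id
    torch_dtype=torch.bfloat16,
)
```

The loaded transformer can then be passed to a Qwen-Image pipeline via its `transformer=` argument in place of the full-precision weights.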