
Note: for the MIX version, diffusion runs in FP8, but the VAE and text encoder remain in FP16.

This is a quantized version of WAI-illustrious-SDXL, with the VAE and CLIP built in.

The default FP16 model can take 6-7 GB of VRAM just to load and run image diffusion. With a separate GGUF, the UNet alone needs only about 2.8 GB of VRAM during KSampler image processing.
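The VRAM figures above line up with simple parameter-count arithmetic. A minimal sketch, assuming the commonly cited ~2.6B parameters for the SDXL UNet (a figure not stated in this card), and ignoring activation and overhead memory:

```python
# Rough weight-memory estimate for the SDXL UNet at different precisions.
# UNET_PARAMS is an assumed approximate parameter count, not from this card.
UNET_PARAMS = 2.6e9

def weight_size_gb(params: float, bytes_per_param: int) -> float:
    """Raw weight size in gigabytes (1 GB = 1024**3 bytes)."""
    return params * bytes_per_param / 1024**3

fp16_gb = weight_size_gb(UNET_PARAMS, 2)  # FP16: 2 bytes per parameter
fp8_gb = weight_size_gb(UNET_PARAMS, 1)   # FP8: 1 byte per parameter

print(f"FP16 UNet weights: ~{fp16_gb:.1f} GB")
print(f"FP8  UNet weights: ~{fp8_gb:.1f} GB")
```

Halving the bytes per parameter roughly halves the UNet's weight footprint, which is consistent with the ~2.8 GB figure above once quantization overhead and working buffers are added.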

ComfyUI is recommended.

