Use this model with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# use device_map="mps" on Apple Silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "T5B/Qwen-Image-Layered-FP8", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
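If you want the same snippet to run on NVIDIA GPUs, Apple Silicon, or CPU without editing the device string by hand, a minimal runtime check works; this is just a convenience sketch around the standard `torch.cuda` / `torch.backends.mps` availability checks, not something required by the model:

```python
import torch

# choose the best available backend: CUDA GPU, Apple Silicon (MPS), else CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# the resulting string can then be passed as device_map=device
print("using device:", device)
```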

Qwen-Image-Layered (FP8 E5M2 & E4M3FN)

This is a quantization of Qwen/Qwen-Image-Layered to FP8 E5M2 and FP8 E4M3FN.

Sensitive layers (norms, embeddings, biases) were kept in BF16.
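To illustrate the trade-off between the two FP8 variants (E5M2 spends more bits on exponent range, E4M3FN on mantissa precision), the maximum finite value of each format can be derived from its bit layout. `fp8_max` below is a hypothetical helper written for this sketch, not part of any library:

```python
def fp8_max(exp_bits, man_bits, finite_only):
    """Largest finite value of a sign + exponent + mantissa float format."""
    bias = 2 ** (exp_bits - 1) - 1
    if finite_only:
        # "FN" variants (e.g. E4M3FN): the top exponent code still encodes
        # finite numbers; only the all-ones mantissa pattern is NaN, so the
        # largest mantissa is all-ones minus one ulp.
        max_exp = (2 ** exp_bits - 1) - bias
        mantissa = 1 + (2 ** man_bits - 2) / 2 ** man_bits
    else:
        # IEEE-style (e.g. E5M2): the top exponent code is reserved for
        # inf/NaN, so the largest exponent is one below it.
        max_exp = (2 ** exp_bits - 2) - bias
        mantissa = 1 + (2 ** man_bits - 1) / 2 ** man_bits
    return mantissa * 2 ** max_exp

print("E5M2 max:  ", fp8_max(5, 2, False))  # 57344.0 — wide range, coarse steps
print("E4M3FN max:", fp8_max(4, 3, True))   # 448.0 — narrow range, finer steps
```

This is why E5M2 is often preferred where dynamic range matters and E4M3FN where precision matters; keeping norms, embeddings, and biases in BF16 sidesteps the question for the layers most sensitive to either limitation.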

License & Usage: This model strictly follows the original licensing terms and usage restrictions. Please refer to the original model card for details.

Model tree for T5B/Qwen-Image-Layered-FP8

Base model: Qwen/Qwen-Image (this model is one of its quantized variants)