## Use with the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Ccre/Z-Image-Turbo-MXFP8", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```

Tested on NVIDIA Blackwell (RTX 5060 Ti). This MXFP8 version offers a ~1.9x speedup over BF16 with very little visual degradation.
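If you want to verify the speedup on your own hardware, a simple wall-clock comparison is enough. Below is a minimal sketch of a generic timing helper; `time_call` is a hypothetical function (not part of Diffusers), and the stand-in workload should be replaced with a real pipeline call such as `lambda: pipe(prompt)` against the MXFP8 and BF16 checkpoints in turn.

```python
import time

def time_call(fn, warmup=1, iters=3):
    """Time a callable: run `warmup` untimed iterations, then
    return the mean wall-clock seconds over `iters` timed runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Stand-in workload for illustration; swap in a real pipeline call,
# e.g. `lambda: pipe(prompt)`, to measure MXFP8 vs. BF16.
elapsed = time_call(lambda: sum(range(100_000)))
print(f"{elapsed:.4f} s per call")
```

Dividing the BF16 time by the MXFP8 time gives the speedup factor; on CUDA, call `torch.cuda.synchronize()` inside the timed callable so asynchronous kernel launches don't skew the numbers.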

See the full forensic benchmarks and 'Difference Maps' at: blackwell-mxfp8-nvfp4

Read the step-by-step installation guide for MXFP8 and NVFP4.

Images created at 12 steps.


