Instructions to use T5B/Z-Image-Turbo-FP8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use T5B/Z-Image-Turbo-FP8 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "T5B/Z-Image-Turbo-FP8",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
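The snippet above hard-codes `device_map="cuda"` and notes that Apple devices should use `"mps"` instead. A small sketch of how that choice could be made automatically (the `pick_device` helper is hypothetical, not part of Diffusers; availability flags are passed in so the logic is easy to test):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer a CUDA GPU, then Apple's Metal (mps) backend, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

In practice you would call it with the flags PyTorch reports, e.g. `pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())`, and pass the result as `device_map`.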
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Text encoder
Is it possible to use a GGUF version of the Qwen 3 model as the text encoder?
I tried https://huggingface.co/unsloth/Qwen3-4B-GGUF/resolve/main/Qwen3-4B-Q8_0.gguf and got this error: Unexpected text model architecture type in GGUF file: 'qwen3'
Make sure your ComfyUI-GGUF node is updated, and use the CLIPLoader (GGUF) node. In the "type" field, select lumina2.
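The "Unexpected text model architecture type" error is raised while reading the GGUF file's metadata, which is why updating the loader node matters. For reference, a minimal sketch of parsing the fixed GGUF header, based on the published GGUF format layout (`read_gguf_header` is a hypothetical helper, not part of ComfyUI-GGUF or llama.cpp):

```python
import struct

def read_gguf_header(data: bytes):
    """Parse the fixed-size GGUF header (little-endian):
    4-byte magic 'GGUF', uint32 version, uint64 tensor count,
    uint64 metadata key/value count. The architecture string that
    loaders check lives in the metadata KV section that follows."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version, n_tensors, n_kv
```

A quick sanity check like this can confirm a downloaded file is valid GGUF before blaming the loader.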
Got it working after a few more tries. I had already repeated the exact same steps twice before posting here, but for some reason it finally went through this time. Your advice encouraged me to try again. Thanks a lot!