Instructions for using RunDiffusion/Juggernaut-Z-Image with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use RunDiffusion/Juggernaut-Z-Image with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-Z-Image",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
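
The Diffusers snippet above hardcodes `device_map="cuda"` and only notes "mps" in a comment. A minimal sketch (plain PyTorch, nothing model-specific) for picking the device at runtime instead:

```python
import torch

# Prefer CUDA, fall back to Apple's Metal backend (MPS), then CPU.
if torch.cuda.is_available():
    device = "cuda"
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can be passed as `device_map=device` when constructing the pipeline.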
Fix dtype label: main safetensors is BF16, not FP32 (per @prookyon, discussion #1)
README.md CHANGED

```diff
@@ -122,7 +122,7 @@ Cleaner structural lines and more coherent material rendering.
 
 | File | Format | Notes |
 | --- | --- | --- |
-| `Juggernaut_Z_V1_by_RunDiffusion.safetensors` | safetensors (
+| `Juggernaut_Z_V1_by_RunDiffusion.safetensors` | safetensors (bf16) | Original release weights |
 | `Juggernaut_Z_V1_by_RunDiffusion_fp16.safetensors` | safetensors (fp16) | Half-precision |
 | `Juggernaut_Z_V1_FP8_e4m3fn.safetensors` | safetensors (fp8 e4m3fn) | Lower VRAM footprint |
 | `Juggernaut_Z_V1_by_RunDiffusion_q8_0.gguf` | GGUF · q8_0 | Highest-quality quant |
```
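
As a rough guide to how the formats in that table trade size for precision, here is a back-of-the-envelope checkpoint-size estimate. The 6B parameter count below is a made-up illustration, not this model's actual size, and the byte-per-weight figures are nominal (bf16/fp16 store 2 bytes per weight, fp8 stores 1; quantized GGUF formats add small per-block scale overheads on top of their bit width):

```python
def est_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough checkpoint size in GiB, ignoring headers and metadata."""
    return n_params * bytes_per_param / 1024**3

# Hypothetical 6e9-parameter model, purely for illustration.
for label, bpp in [("bf16/fp16", 2.0), ("fp8", 1.0)]:
    print(f"{label}: ~{est_size_gb(6e9, bpp):.2f} GiB")
```

This is only an ordering argument (bf16 ≈ fp16 > fp8), not a prediction of the exact file sizes in the repository.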