---
base_model:
- Tongyi-MAI/Z-Image
base_model_relation: finetune
frameworks: PyTorch
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-to-image
tasks:
- text-to-image-synthesis
tags:
- Z
---
|
|
## Z-Image-Distilled V3 🟥 Distilled LoRA Adapter 2026/2/19
|
|
|
|
|
Additionally, I have exported Redcraft DX3 ZIB Distilled as a Rank-256 LoRA. The LoRA weight can be adjusted to adapt it to various ZIB fine-tuned models, and it is fully compatible with the Z-Image (non-Turbo) base model.
|
|
|
|
|
[Distilled LoRA FP16 (1.06 GB)](https://civitai.com/api/download/models/2680424?type=Model&format=SafeTensor&size=full&fp=fp16) <- download the LoRA version directly here
|
|
|
|
|
**Redcraft DX3** ZIB Distilled on [CivitAI](https://civitai.com/models/958009?modelVersionId=2680424) |
|
|
|
|
|
Above is Redcraft DX3 ZIB Distilled exported as a Rank-256 LoRA; its weight strength can be adjusted for use with various fine-tuned ZIT versions, and it is compatible with the Z-Image (non-Turbo) base model.
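Adjusting the LoRA strength amounts to scaling the low-rank update before folding it into a base weight: W' = W + strength · (B · A). A minimal NumPy sketch with toy shapes (the released adapter is rank 256 and the real Z-Image layers are far larger; `merge_lora` is a hypothetical helper, not part of any released tooling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a single linear layer's weights.
d_out, d_in, rank = 8, 8, 4          # released adapter uses rank 256
W = rng.normal(size=(d_out, d_in))   # base-model weight
A = rng.normal(size=(rank, d_in))    # LoRA "down" matrix
B = rng.normal(size=(d_out, rank))   # LoRA "up" matrix

def merge_lora(W, A, B, strength=1.0):
    """Fold a LoRA update into a base weight: W' = W + strength * (B @ A)."""
    return W + strength * (B @ A)

W_full = merge_lora(W, A, B, strength=1.0)   # full distillation effect
W_half = merge_lora(W, A, B, strength=0.5)   # softer effect for fine-tunes

# Halving the strength halves the applied update.
assert np.allclose(W_half - W, 0.5 * (W_full - W))
```

The same scaling is what a UI's "LoRA weight" slider applies at load time instead of baking it into the checkpoint.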
|
|
|
|
|
--- |
|
|
|
|
|
## Z-Image-Distilled V3 2026/2/15 |
|
|
|
|
|
A DF11 losslessly compressed build of RedZDX V3 is out; learn more: [Dynamic-length Float (DFloat11)](https://huggingface.co/DFloat11)
|
|
|
|
|
Thanks to [mingyi456/Z-Image-Distilled-DF11-ComfyUI](https://huggingface.co/mingyi456/Z-Image-Distilled-DF11-ComfyUI) |
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
## Z-Image-Distilled V3 2026/2/11 |
|
|
|
|
|
Thanks to [Bubbliiiing](https://github.com/bubbliiiing), [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun) & [Alibaba-PAI](https://help.aliyun.com/zh/pai/) for providing us with a more efficient distillation solution:
|
|
|
|
|
https://huggingface.co/alibaba-pai/Z-Image-Fun-Lora-Distill |
|
|
|
|
|
Speed of Light, Power of Flow: the new ZID V3 "Lucis" is powered by the latest ZIB acceleration. Building on the ZID V2 training sets, we have distilled a more efficient Z-Image-based RedDX3. Now you get solid results in just 5 steps.
|
|
|
|
|
- **Rapid Prototyping:** test LoRA training hypotheses instantly with near-zero latency.

- **Stochastic Pre-sampling:** serves as a high-speed, high-entropy source for ZiTurbo pipelines.

- **Hybrid Workflows:** pairs seamlessly with Klein 9B for cascaded refinement or ensemble generation.
|
|
|
|
|
<p align="center"> |
|
|
<img src="8.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
- inference cfg: 1.0–1.5 (1.0 recommended)

- inference steps: 5 (5–15 steps)

- sampler / scheduler: Euler / simple
|
|
|
|
|
Preview images were generated with a Z-Image Distilled V3 + Moody MIX V7 (ZIT fine-tune) hybrid workflow, just to show the style difference between ZID (RedZDX3) and ZIT (fine-tuned).

No ranking intended =) (L = ZID V3, R = ZIT fine-tune)
|
|
|
|
|
<p align="center"> |
|
|
<img src="15.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="12.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="10.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="14.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="13.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="9.png" width="1200"/> |
|
|
</p> |
|
|
For more ZID V3 generated examples, please refer to

RedCraft | 红潮 | RedZDX ⚡️ Distilled [[Civitai](https://civitai.com/models/958009)]
|
|
|
|
|
Welcome to the era of instant creativity. Welcome to 'Lucis'. |
|
|
|
|
|
|
|
|
## Z-Image-Distilled V2 2026/2/05 |
|
|
|
|
|
The Z-Image color-deviation problem has been reduced to some extent, but it is still recommended to adjust colors as appropriate for the art style.
|
|
|
|
|
<p align="center"> |
|
|
<img src="6.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
- inference cfg: 1.0 (1.0 recommended)

- inference steps: 10 (10–15 steps)

- sampler / scheduler: Euler / simple
|
|
|
|
|
Thanks 🙏 to this author for completing the FP8 mixed-precision quantization scheme for Z-Image:
|
|
|
|
|
https://huggingface.co/pachiiahri |
|
|
|
|
|
The FP8 mixed-precision version has been uploaded; please give this author a like 👍
|
|
|
|
|
Also available in NVFP4 quantized format, optimized for acceleration on Blackwell-architecture GPUs (e.g., RTX 50xx, PRO 6000, B200): double the speed, half the resources.
|
|
|
|
|
Non-50-series GPUs are also supported (automatic fallback to 16-bit operation).
|
|
|
|
|
<p align="center"> |
|
|
<img src="7.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
Above is the FP8 scaled & mixed direct-output workflow (the workflows for all example images are open on [Civitai](https://civitai.com/models/958009?modelVersionId=2661885)).
|
|
|
|
|
The mixed-precision scheme comes from https://civitai.com/models/2172944/z-image-fp8
|
|
|
|
|
|
|
|
The art style leans toward realism. It retains ZIB's creative ability and reduces human-anatomy collapse.
|
|
|
|
|
Thanks to @anyMODE([Civitai](https://civitai.com/models/2359857?modelVersionId=2663070)) for exporting ZID LoRAs |
|
|
|
|
|
<p align="center"> |
|
|
<img src="3.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
<p align="center"> |
|
|
<img src="4.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
## Z-Image-Distilled V1 2026/1/30 |
|
|
|
|
|
This model is a **direct distillation-accelerated version** based on the original **Z-Image** (non-Turbo) source. Its purpose is to test LoRA training effects on the Z-Image (non-turbo) version while significantly improving inference/test speed. The model **does not incorporate any weights or style from Z-Image-Turbo** at all — it is a **pure-blood version** based purely on Z-Image, effectively retaining the original Z-Image's adaptability, random diversity in outputs, and overall image style. |
|
|
|
|
|
Compared to the official Z-Image, inference is much faster (good results achievable in just 10–20 steps); compared to the official Z-Image-Turbo, this model preserves stronger diversity, better LoRA compatibility, and greater fine-tuning potential, though it is slightly slower than Turbo (still far faster than the original Z-Image's 28–50 steps). |
|
|
|
|
|
The model is mainly suitable for: |
|
|
- Users who want to train/test LoRAs on the Z-Image non-Turbo base |
|
|
- Scenarios needing faster generation than the original without sacrificing too much diversity and stylistic freedom |
|
|
- Artistic, illustration, concept design, and other generation tasks that require a certain level of randomness and style variety |
|
|
- Compatible with ComfyUI inference (layer prefix == `model.diffusion_model`)
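The ComfyUI note above means every tensor key in the checkpoint starts with `model.diffusion_model.`. A minimal sketch of checking or stripping that prefix, using a plain dict as a stand-in for a real safetensors state dict (`strip_prefix` is a hypothetical helper, not part of ComfyUI):

```python
PREFIX = "model.diffusion_model."

def strip_prefix(state_dict, prefix=PREFIX):
    """Return a copy with the ComfyUI layer prefix removed from every key."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Toy state dict standing in for the real checkpoint tensors.
sd = {"model.diffusion_model.blocks.0.attn.qkv.weight": 1,
      "model.diffusion_model.blocks.0.attn.proj.weight": 2}

stripped = strip_prefix(sd)
assert set(stripped) == {"blocks.0.attn.qkv.weight",
                         "blocks.0.attn.proj.weight"}
```

Loaders that expect bare transformer keys typically apply exactly this kind of rename; ComfyUI itself expects the prefix present.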
|
|
|
|
|
<p align="center"> |
|
|
<img src="0.png" width="1200"/> |
|
|
</p> |
|
|
|
|
|
### Usage Instructions: |
|
|
|
|
|
Basic workflow: please refer to the Z-Image-Turbo official workflow (fully compatible with the official Z-Image-Turbo workflow) |
|
|
|
|
|
Recommended inference parameters: |
|
|
- inference **cfg**: 1.0–2.5 (recommended range: 1.0~1.8; higher values enhance prompt adherence) |
|
|
- inference **steps**: 10–20 (10 steps for quick previews, 15–20 steps for more stable quality) |
|
|
- sampler / scheduler: **Euler / simple**, or **res_m**, or any other compatible sampler |
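Schematically, the cfg value blends unconditional and conditional model predictions, and the Euler sampler integrates that blend over the chosen number of steps. A toy 1-D sketch with stand-in functions (not the actual Z-Image transformer; the two inner lambdas are fabricated for illustration):

```python
import numpy as np

def guided_velocity(x, cfg=1.5):
    """Classifier-free guidance: v = v_uncond + cfg * (v_cond - v_uncond).
    cfg = 1.0 reproduces the conditional prediction; larger values push
    the trajectory harder toward the prompt."""
    v_uncond = -x            # toy stand-in for the unconditional output
    v_cond = -x + 0.1        # toy stand-in for the conditional output
    return v_uncond + cfg * (v_cond - v_uncond)

def euler_sample(x0, steps=15, cfg=1.5):
    """Plain Euler integration over `steps` uniform steps."""
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x = x + dt * guided_velocity(x, cfg)
    return x

x = euler_sample(np.array([1.0]), steps=15, cfg=1.0)
```

More steps refine the same trajectory (hence 15–20 for stable quality), while cfg trades diversity for prompt adherence.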
|
|
|
|
|
LoRA compatibility is good; recommended weight: 0.6~1.0, adjust as needed. |
|
|
|
|
|
Also on: [Civitai](https://civitai.com/models/958009/redcraft-or-redzimage-or-updated-jan30-or-latest-redzib-dx1) | [Modelscope AIGC](https://modelscope.cn/models/AiMETATRON/Z-Image-Distilled) |
|
|
#### RedCraft | 红潮造相 ⚡️ REDZimage | Updated-JAN30 | Latest - RedZiB ⚡️ DX1 Distilled Acceleration |
|
|
|
|
|
### Current Limitations & Future Directions |
|
|
|
|
|
**Current main limitations:** |
|
|
- The distillation process causes some damage to **text (especially very small-sized text)**, with rendering clarity and completeness inferior to the original Z-Image |
|
|
- Overall color tone remains consistent with the original ZI, but **certain samplers** can produce color cast issues (particularly noticeable excessive blue tint) |
|
|
|
|
|
**Next optimization directions:** |
|
|
- Further stabilize generation quality under **CFG=1** within **10 steps or fewer**, striving to achieve more usable results that are closer to the original style even at very low step counts |
|
|
- Optimize negative prompt adherence when **CFG > 1**, improving control over negative descriptions and reducing interference from unwanted elements |
|
|
- Continue improving clarity and readability in small text areas while maintaining the speed advantages brought by distillation |
|
|
|
|
|
We welcome feedback and generated examples from all users — let's collaborate to advance this pure-blood acceleration direction! |
|
|
|
|
|
### Model License: |
|
|
|
|
|
Please follow the **Apache-2.0** license of the Z-Image model. |
|
|
|
|
|