---
base_model:
- Tongyi-MAI/Z-Image
base_model_relation: finetune
frameworks: PyTorch
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-to-image
tasks:
- text-to-image-synthesis
tags:
- Z
---
## ZImage DPO “AGILE” Now Uploading 08/03/2026 | International Women's Day Wishes 🌸

On this Day, may every woman feel the strength, grace, and boundless potential that lives within her. Thank you for your courage, your kindness, your resilience, and the countless ways you make the world brighter and better. Here's to equality, empowerment, joy, and endless possibilities—today and every day. Happy International Women's Day🌸

**ZIDistilled FUN “AGILE” Now Released**

Special thanks to the VideoXFUN team for releasing the groundbreaking **Zimage Distilled Adapter 2603**. By incorporating this latest update, **AGILE** achieves a refined balance between speed, diversity, and visual richness — unlocking more creative freedom while maintaining exceptional efficiency.

![image](https://cdn-uploads.huggingface.co/production/uploads/65cd1967c284b4c6ad1fa5e2/KNUUgp1losI56z2rSkVBf.png)

---
**Agility in Motion, Diversity in Depth**

The brand-new ZImage FUN “AGILE” is built on the cutting-edge **ZIB acceleration framework**. We deliberately reduced the DPO & distilled weights to preserve greater stochastic freedom and combined them with the most recent training datasets, dramatically increasing output variety, content detail, and compositional imagination without sacrificing core stability.

For the first time, **AGILE** genuinely challenges the flagship **“ZImage TURBO”** on overall image fidelity and sharpness, even in complex scenes, while delivering faster iteration and superior responsiveness.

**Key highlights:**

-   **True ZIT-level unlocked:** Photorealistic lighting, textures, and material rendering that now rivals or approaches ZImage TURBO quality, even at standard step counts.
-   **Enhanced diversity & content richness:** **DPO + Newest datasets** = more varied poses, styles, atmospheres, intricate details, and unexpected creative sparks in every generation.
-   **ZIB ecosystem ignition:** Exceptional native compatibility with **ZIB-series LoRAs**; your existing and future LoRAs now align faster, reproduce more faithfully, and shine brighter than ever before — officially kicking off the full **ZIB LoRA era**.
-   **Agile workflows:** Seamless hybrid use with **Klein 9B** for refinement, ensemble boosting, or rapid prototyping; near-instant LoRA response with preserved high-entropy creativity.

Every generation is a step toward freer, bolder imagination.

---
## Z-Image-Distilled DPO “Veris” 02/26/2026

**ZImage DPO “Veris”** Now Released

Special thanks to [@Fok](https://huggingface.co/F16/z-image-turbo-flow-dpo) for providing the **Flow-DPO technical adaptation**. By skillfully integrating the training philosophy of Direct Preference Optimization (DPO) into the distillation weights, the Zimage distilled model achieves a major leap in lighting, color fidelity, and material authenticity — more natural light & shadow, more believable colors, and details that hold up under scrutiny.



![0](https://cdn-uploads.huggingface.co/production/uploads/65cd1967c284b4c6ad1fa5e2/NPoSLkWWRY-hqReOHpb24.png)

**More examples** in: RedCraft | 红潮 | RedZDX⚡️Distilled [[Civitai](https://civitai.com/models/958009)]

---

The following examples compare ZIT and Flow DPO; they are intended to illustrate the effect of DPO rather than to directly demonstrate ZIB Distilled.

![image](https://cdn-uploads.huggingface.co/production/uploads/65cd1967c284b4c6ad1fa5e2/DlQJt28KCbgZUaMKr5RbP.png)


![image](https://cdn-uploads.huggingface.co/production/uploads/65cd1967c284b4c6ad1fa5e2/dQxLF0nVggSKK_qpt339H.png)

---

**Speed of Truth, Fidelity of Flow**

The all-new **ZI DPO “Veris”** is powered by the latest-generation ZIB acceleration engine. Building on the **RedZDX** training data, we further distilled a more efficient, more refined Zimage-based model.

Now: solid, highly realistic generations in just 8 steps (**better LoRA alignment**).

---

**Key highlights**:

- **Realism-first prototyping:** near-zero latency for LoRAs, with lighting and color already very close to final training targets
- **High-entropy stochastic pre-sampling:** delivers fast, high-quality realistic initial noise for ZImage pipelines
- **Hybrid realism workflows:** seamless integration with Klein 9B for cascaded refinement or ensemble boosting, pushing visual fidelity and consistency even higher (a rough two-stage sketch follows below)
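As an illustration of such a hybrid workflow, here is a minimal sketch that chains an 8-step Veris draft into a second img2img refinement pass. It assumes both checkpoints load as diffusers `AutoPipeline`s; the repo IDs, `strength` value, and step counts are illustrative assumptions, not the documented API of these models.

```python
# Hypothetical two-stage hybrid workflow: fast Veris draft -> cascaded refinement.
# Repo IDs and parameters are illustrative placeholders, not official names.
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

prompt = "studio portrait, soft window light, realistic skin texture"

# Stage 1: 8-step draft with the distilled DPO model (assumed diffusers-compatible).
draft_pipe = AutoPipelineForText2Image.from_pretrained(
    "your-namespace/zimage-dpo-veris",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
draft = draft_pipe(prompt, num_inference_steps=8, guidance_scale=1.0).images[0]

# Stage 2: refinement with a second model (e.g. a Klein 9B checkpoint).
refine_pipe = AutoPipelineForImage2Image.from_pretrained(
    "your-namespace/klein-9b",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
final = refine_pipe(
    prompt,
    image=draft,
    strength=0.35,  # low strength keeps the draft's composition, refines detail
    num_inference_steps=12,
).images[0]
final.save("refined.png")
```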

**Every step toward truth deserves full commitment.**

---

Welcome to **ZImage DPO “Veris”**, where your LoRA generations are no longer merely “**similar**” but **truly reproduced**.

You can also load the **DPO LoRA Adapter** directly on the **ZImage Turbo** model:

Hugging Face: https://huggingface.co/F16/z-image-turbo-flow-dpo

ModelScope (China mainland): https://modelscope.cn/models/FFFFFFoo/z-image-turbo-flow-dpo

---

## Z-Image-Distilled V3 🟥 Distilled LoRA Adapter 02/19/2026

Additionally, I've exported the Redcraft DX3 ZIB Distilled LoRA in Rank-256 format. Its weight can be adjusted to adapt it to various ZIB fine-tuned models, and it is fully compatible with the Z-Image (non-Turbo) base model.

[Distilled LoRA FP16 (1.06 GB)](https://civitai.com/api/download/models/2680424?type=Model&format=SafeTensor&size=full&fp=fp16) <- download the LoRA version directly here

**Redcraft DX3** ZIB Distilled on [CivitAI](https://civitai.com/models/958009?modelVersionId=2680424)

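If your inference stack is diffusers-based, loading the exported Rank-256 LoRA at an adjustable weight might look like the sketch below. This is a sketch under the assumption that the base model loads via `DiffusionPipeline` and supports the standard LoRA loader mixin; the local filename and adapter name are hypothetical. In ComfyUI, a regular LoRA loader node with an adjustable strength achieves the same thing.

```python
# Sketch: load the Rank-256 distilled LoRA onto the Z-Image (non-Turbo) base
# at an adjustable weight. Filename and adapter name are hypothetical.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image",  # non-Turbo base model from this card's metadata
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load the exported distilled LoRA and scale its influence; lower weights
# preserve more of the base model's behavior.
pipe.load_lora_weights(
    "redcraft_dx3_zib_distilled_rank256_fp16.safetensors",  # hypothetical filename
    adapter_name="zib_distill",
)
pipe.set_adapters(["zib_distill"], adapter_weights=[0.8])

image = pipe(
    "a lighthouse on a cliff at dusk, oil painting",
    num_inference_steps=10,  # the distilled adapter enables low step counts
    guidance_scale=1.5,
).images[0]
image.save("out.png")
```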

---

## Z-Image-Distilled V3 2026/2/15

A DF11 lossless-compression build of RedZDX V3 is out; learn more: [Dynamic-length Float (DFloat11)](https://huggingface.co/DFloat11)

Thanks to [mingyi456/Z-Image-Distilled-DF11-ComfyUI](https://huggingface.co/mingyi456/Z-Image-Distilled-DF11-ComfyUI)

---


## Z-Image-Distilled V3 2026/2/11

Thanks to [Bubbliiiing](https://github.com/bubbliiiing), [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun) & [Alibaba-PAI](https://help.aliyun.com/zh/pai/) for providing a more efficient distillation solution:

https://huggingface.co/alibaba-pai/Z-Image-Fun-Lora-Distill

Speed of Light, Power of Flow: the new ZID v3 "Lucis" is powered by the latest ZIB acceleration. Building on the ZID v2 training sets, we've distilled a more efficient Zimage-based RedDX3. Now you get solid results in just 5 steps.

- **Rapid prototyping:** test LoRA training hypotheses instantly with near-zero latency.
- **Stochastic pre-sampling:** serves as a high-speed, high-entropy source for ZiTurbo pipelines.
- **Hybrid workflows:** pairs seamlessly with Klein 9B for cascaded refinement or ensemble generation.

<p align="center">
    <img src="8.png" width="1200"/>
</p>

- inference cfg: 1.0-1.5 (1.0 recommended)
- inference steps: 5 (range 5-15)
- sampler / scheduler: Euler / simple

Preview images were generated with a Z-Image Distilled V3 + Moody MIX V7 (ZIT fine-tune) hybrid workflow, purely to show the style difference between ZID (RedZDX3) and ZIT (fine-tuned); no ranking intended =) (L = 'ZID v3', R = 'ZIT ft')

<p align="center">
    <img src="15.png" width="1200"/>
</p>

<p align="center">
    <img src="12.png" width="1200"/>
</p>

<p align="center">
    <img src="10.png" width="1200"/>
</p>

<p align="center">
    <img src="14.png" width="1200"/>
</p>

<p align="center">
    <img src="13.png" width="1200"/>
</p>

<p align="center">
    <img src="9.png" width="1200"/>
</p>
For more ZID v3 generated examples, please refer to:

RedCraft | 红潮 | RedZDX⚡️Distilled [[Civitai](https://civitai.com/models/958009)]

Welcome to the era of instant creativity. Welcome to 'Lucis'.


## Z-Image-Distilled V2 2026/2/05

The ZImage color-deviation problem has been reduced to a certain extent, but it is still recommended to adjust colors to suit the art style.

<p align="center">
    <img src="6.png" width="1200"/>
</p>

- inference cfg: 1.0 (1.0 recommended)
- inference steps: 10 (range 10-15)
- sampler / scheduler: Euler / simple

Thanks 🙏 to this author for completing the FP8 mixed-precision quantization scheme for Z-Image:

https://huggingface.co/pachiiahri

The FP8 mixed-precision version has been uploaded; please give this author a like 👍

Also available in NVFP4 quantized format, optimized for acceleration on Blackwell-architecture GPUs (e.g., RTX 50xx, PRO 6000, B200): double the speed, half the resources.

Non-50-series GPUs are also supported (automatic fallback to 16-bit operation); a selection sketch follows below.
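As a rough sketch of how such a fallback could be selected at load time (the checkpoint filenames and the capability threshold are illustrative assumptions; many UIs handle this automatically):

```python
# Sketch: pick the NVFP4 checkpoint on Blackwell GPUs, otherwise fall back
# to a 16-bit checkpoint. Filenames are hypothetical placeholders.
import torch

def pick_checkpoint() -> str:
    if not torch.cuda.is_available():
        return "z_image_distilled_bf16.safetensors"
    major, _minor = torch.cuda.get_device_capability()
    # Blackwell parts (B200: sm_100, RTX 50xx: sm_120) report major >= 10.
    if major >= 10:
        return "z_image_distilled_nvfp4.safetensors"
    return "z_image_distilled_bf16.safetensors"

print(pick_checkpoint())
```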

<p align="center">
    <img src="7.png" width="1200"/>
</p>

Above: the FP8 scale & mixed direct-output workflow (all example-image workflows are open on [Civitai](https://civitai.com/models/958009?modelVersionId=2661885))

The mixed-precision scheme comes from https://civitai.com/models/2172944/z-image-fp8


The art style leans towards realism; it retains ZIB's creative ability and reduces human-anatomy collapse.

Thanks to @anyMODE ([Civitai](https://civitai.com/models/2359857?modelVersionId=2663070)) for exporting ZID LoRAs

<p align="center">
    <img src="3.png" width="1200"/>
</p>

<p align="center">
    <img src="4.png" width="1200"/>
</p>

## Z-Image-Distilled V1 2026/1/30

This model is a **direct distillation-accelerated version** based on the original **Z-Image** (non-Turbo) source. Its purpose is to test LoRA training effects on the Z-Image (non-turbo) version while significantly improving inference/test speed. The model **does not incorporate any weights or style from Z-Image-Turbo** at all — it is a **pure-blood version** based purely on Z-Image, effectively retaining the original Z-Image's adaptability, random diversity in outputs, and overall image style.

Compared to the official Z-Image, inference is much faster (good results achievable in just 10–20 steps); compared to the official Z-Image-Turbo, this model preserves stronger diversity, better LoRA compatibility, and greater fine-tuning potential, though it is slightly slower than Turbo (still far faster than the original Z-Image's 28–50 steps).

The model is mainly suitable for:
- Users who want to train/test LoRAs on the Z-Image non-Turbo base
- Scenarios needing faster generation than the original without sacrificing too much diversity and stylistic freedom
- Artistic, illustration, concept design, and other generation tasks that require a certain level of randomness and style variety
- Compatible with ComfyUI inference (layer prefix == `model.diffusion_model`); see the key-inspection sketch below
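To check whether a checkpoint already carries the ComfyUI-style layer prefix, and to add it when re-keying a bare state dict, a small helper might look like this (the filenames are hypothetical placeholders):

```python
# Sketch: inspect a safetensors checkpoint for the ComfyUI layer prefix
# "model.diffusion_model." and re-key bare entries if it is missing.
from safetensors.torch import load_file, save_file

PREFIX = "model.diffusion_model."

state = load_file("z_image_distilled.safetensors")  # hypothetical filename
has_prefix = all(key.startswith(PREFIX) for key in state)
print(f"ComfyUI prefix present: {has_prefix}")

if not has_prefix:
    state = {PREFIX + key: tensor for key, tensor in state.items()}
    save_file(state, "z_image_distilled_comfyui.safetensors")
```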

<p align="center">
    <img src="0.png" width="1200"/>
</p>

### Usage Instructions:

Basic workflow: please refer to the official Z-Image-Turbo workflow (this model is fully compatible with it)

Recommended inference parameters:
- inference **cfg**: 1.0-2.5 (recommended range: 1.0-1.8; higher values enhance prompt adherence)
- inference **steps**: 10-20 (10 steps for quick previews, 15-20 steps for more stable quality)
- sampler / scheduler: **Euler / simple**, or **res_m**, or any other compatible sampler

LoRA compatibility is good; recommended weight: 0.6-1.0, adjust as needed (a minimal scripted-inference sketch follows below).
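For scripted inference outside ComfyUI, here is a minimal sketch using the recommended parameters, under the assumption that the checkpoint loads as a diffusers pipeline (the repo ID is a placeholder, and sampler names differ between ComfyUI and diffusers schedulers):

```python
# Sketch: text-to-image with the recommended parameters above.
# Assumes a diffusers-compatible checkpoint; the repo ID is hypothetical.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "your-namespace/Z-Image-Distilled",  # placeholder repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="concept art of a floating market at dawn, rich colors",
    num_inference_steps=15,  # 10 for quick previews, 15-20 for stable quality
    guidance_scale=1.5,      # recommended cfg range 1.0-1.8
).images[0]
image.save("preview.png")
```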

Also on: [Civitai](https://civitai.com/models/958009/redcraft-or-redzimage-or-updated-jan30-or-latest-redzib-dx1) | [Modelscope AIGC](https://modelscope.cn/models/AiMETATRON/Z-Image-Distilled)
#### RedCraft | 红潮造相 ⚡️ REDZimage | Updated-JAN30 | Latest - RedZiB ⚡️ DX1 Distilled Acceleration

### Current Limitations & Future Directions

**Current main limitations:**
- The distillation process causes some damage to **text (especially very small-sized text)**, with rendering clarity and completeness inferior to the original Z-Image
- Overall color tone remains consistent with the original ZI, but **certain samplers** can produce color cast issues (particularly noticeable excessive blue tint)

**Next optimization directions:**
- Further stabilize generation quality under **CFG=1** within **10 steps or fewer**, striving to achieve more usable results that are closer to the original style even at very low step counts
- Optimize negative prompt adherence when **CFG > 1**, improving control over negative descriptions and reducing interference from unwanted elements
- Continue improving clarity and readability in small text areas while maintaining the speed advantages brought by distillation

We welcome feedback and generated examples from all users — let's collaborate to advance this pure-blood acceleration direction!

### Model License:

Please follow the **Apache-2.0** open-source license of the Z-Image model.
Please follow the **Apache-2.0** open source license for the Z-Image model.