bubbliiiing committed
Commit: a3e5080
Parent(s): 1fc23a9
Update 2601

- README.md +101 -6
- Z-Image-Turbo-Fun-Controlnet-Tile-2.1-2601-8steps.safetensors +3 -0
- Z-Image-Turbo-Fun-Controlnet-Tile-2.1-lite-2601-8steps.safetensors +3 -0
- Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps.safetensors +3 -0
- Z-Image-Turbo-Fun-Controlnet-Union-2.1-lite-2601-8steps.safetensors +3 -0
- results/canny.png +2 -2
- results/canny_lite.png +3 -0
- results/depth.png +2 -2
- results/depth_lite.png +3 -0
- results/hed.png +2 -2
- results/hed_2_1.png +3 -0
- results/hed_2_1_2601.png +3 -0
- results/high_res.png +2 -2
- results/inpaint.png +3 -0
- results/mask_2_1.png +3 -0
- results/mask_2_1_2601.png +3 -0
- results/pose.png +2 -2
- results/pose2.png +2 -2
- results/pose2_lite.png +3 -0
- results/pose3.png +2 -2
- results/pose_inpaint.png +2 -2
- results/pose_lite.png +3 -0
README.md
CHANGED
@@ -8,10 +8,21 @@ library_name: videox_fun
@@ -20,7 +31,7 @@ library_name: videox_fun
@@ -32,11 +43,36 @@ library_name: videox_fun
-## TODO
-- [ ] Train on better data.
-### Difference between 2.1 and 2.1-8steps.
@@ -50,7 +86,66 @@ library_name: videox_fun
-### Generation Results

[](https://github.com/aigc-apps/VideoX-Fun)

## Update

- A new lite model has been added, with Control Latents applied on only 5 layers (1.9 GB). The previous Control model had two issues: insufficient mask randomness, which caused the model to learn mask patterns and auto-fill during inpainting; and overfitting between control and tile distillation, which caused artifacts at large `control_context_scale` values. Both the Control and Tile models have been retrained with a richer variety of masks and improved training schedules. In addition, the dataset has been restructured with multi-resolution control images (512~1536) instead of a single resolution (512) for better robustness. [2026.01.12]
- During testing, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and become blurry. We performed 8-step distillation on the version 2.1 model, and the distilled model performs better with 8-step prediction. We have also uploaded a Tile model that can be used for super-resolution generation. [2025.12.22]
- Due to a typo in version 2.0, `control_layers` was used instead of `control_noise_refiner` to process refiner latents during training. Although the model converged normally, inference was slow because the `control_layers` forward pass ran twice. Version 2.1 fixes this, and speed is back to normal. [2025.12.17]
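For intuition on how a Tile model is typically used for super-resolution, the image is cut into overlapping tiles, each tile is upscaled under tile guidance, and the seams are blended. The sketch below only illustrates that tiling step; the helper name, tile size, and overlap are illustrative assumptions, not the repo's actual implementation.

```python
def split_into_tiles(width, height, tile=1024, overlap=128):
    """Illustrative helper: compute overlapping tile boxes (left, top, right,
    bottom) covering a width x height image, as tile-based super-resolution
    pipelines commonly do before upscaling each tile and blending seams."""
    stride = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), stride):
        for left in range(0, max(width - overlap, 1), stride):
            # Clamp each box to the image bounds.
            boxes.append((left, top, min(left + tile, width),
                          min(top + tile, height)))
    return boxes
```

Each neighboring pair of boxes shares an `overlap`-pixel band, which is what gets cross-faded to hide tile borders.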

## Model Card

### a. 2601 Models

| Name | Description |
|--|--|
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps.safetensors | Compared to the old model, a more diverse variety of masks and a more reasonable training schedule were adopted, which reduces bright spots/artifacts and mask-information leakage. In addition, the dataset has been restructured with multi-resolution control images (512~1536) instead of a single resolution (512) for better robustness. |
| Z-Image-Turbo-Fun-Controlnet-Tile-2.1-2601-8steps.safetensors | Compared to the old model, a higher training resolution and a more reasonable schedule were used during distillation, which reduces bright spots/artifacts. |
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-lite-2601-8steps.safetensors | Uses the same training scheme as the 2601 version, but adds control on fewer layers than the large model, resulting in weaker control conditions. This makes it suitable for larger `control_context_scale` values, produces more natural-looking results, and fits lower-spec machines. |
| Z-Image-Turbo-Fun-Controlnet-Tile-2.1-lite-2601-8steps.safetensors | Uses the same training scheme as the 2601 version, but adds control on fewer layers than the large model, resulting in weaker control conditions. This makes it suitable for larger `control_context_scale` values, produces more natural-looking results, and fits lower-spec machines. |

### b. Models Before 2601

| Name | Description |
|--|--|
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors | Based on version 2.1, distilled with an 8-step distillation algorithm; 8-step prediction is recommended. Compared to version 2.1 at 8 steps, the images are clearer and the composition is more reasonable. |
| Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors | ControlNet weights for Z-Image-Turbo. Compared to version 1.0, it modifies more layers and was trained for longer. However, due to a typo in the code, the layer blocks were forwarded twice, resulting in slower speed. The model supports multiple control conditions such as Canny, Depth, Pose, and MLSD. It also lost some of its acceleration capability after training and therefore requires more steps. |
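For intuition on the "8-step" naming: step distillation trains the student to take a few large jumps along the noise schedule instead of many small ones. A common illustrative choice is 8 evenly spaced timesteps out of a 1000-step training schedule; the actual schedule for these checkpoints is defined by the VideoX-Fun code, not by this sketch.

```python
def coarse_timesteps(num_train_steps=1000, num_inference_steps=8):
    """Illustrative only: pick evenly spaced, descending timesteps for
    few-step sampling with a distilled model."""
    stride = num_train_steps // num_inference_steps
    return [num_train_steps - i * stride for i in range(num_inference_steps)]
```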

## Model Features

- This ControlNet adds control on 15 layer blocks and 2 refiner layer blocks (the Lite models add control on 3 layer blocks and 2 refiner blocks). It supports multiple control conditions, including Canny, HED, Depth, Pose, and MLSD, and can be used like a standard ControlNet.
- Inpainting mode is also supported.
- Training Process:
  - 2.0: The model was trained from scratch for 70,000 steps on a dataset of 1 million high-quality images covering both general and human-centric content. Training was performed at 1328 resolution using BFloat16 precision, with a batch size of 64, a learning rate of 2e-5, and a text dropout ratio of 0.10.
- You can adjust `control_context_scale` for stronger control and better detail preservation. For better stability, we highly recommend using a detailed prompt. The optimal range for `control_context_scale` is 0.65 to 0.90.
- During testing of versions 2.0 and 2.1, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and produce blurry images. For strength and step-count testing, please refer to [Scale Test Results](#scale-test-results); those results were generated with version 2.0.
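One way to picture `control_context_scale` (a hypothetical sketch, not the actual VideoX-Fun code): it acts as the weight on the ControlNet features that are added back into the backbone's hidden states at each controlled block, so larger values mean stronger conditioning.

```python
def inject_control(hidden, control, control_context_scale=0.8):
    """Hypothetical sketch: blend ControlNet block outputs into backbone
    hidden states. The README recommends values in 0.65-0.90; larger values
    give stronger control but risked artifacts in pre-2601 models."""
    return [h + control_context_scale * c for h, c in zip(hidden, control)]
```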

## Results

### a. Difference between 2.1-8steps and 2.1-2601-8steps

The old 8-steps model produced bright spots/artifacts when `control_context_scale` was too large, while the new version does not.

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps</td>
<td>Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps</td>
</tr>
<tr>
<td><img src="results/hed_2_1.png" width="100%" /></td>
<td><img src="results/hed_2_1_2601.png" width="100%" /></td>
</tr>
</table>

The old 8-steps model sometimes learned the mask information and tended to completely fill the mask during removal, while the new version does not.

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps</td>
<td>Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps</td>
</tr>
<tr>
<td><img src="results/mask_2_1.png" width="100%" /></td>
<td><img src="results/mask_2_1_2601.png" width="100%" /></td>
</tr>
</table>

### b. Difference between 2.1 and 2.1-8steps

8-step results:

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
</tr>
</table>

### c. Generation Results With 2.1-lite-2601-8steps

The lite models use the same training scheme as the 2601 version but add control on fewer layers than the large models, resulting in weaker control conditions. This makes them suitable for larger `control_context_scale` values, produces more natural-looking results, and fits lower-spec machines.

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Pose</td>
<td>Output</td>
</tr>
<tr>
<td><img src="asset/pose.jpg" width="100%" /></td>
<td><img src="results/pose_lite.png" width="100%" /></td>
</tr>
</table>

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Pose</td>
<td>Output</td>
</tr>
<tr>
<td><img src="asset/pose2.jpg" width="100%" /></td>
<td><img src="results/pose2_lite.png" width="100%" /></td>
</tr>
</table>

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Canny</td>
<td>Output</td>
</tr>
<tr>
<td><img src="asset/canny.jpg" width="100%" /></td>
<td><img src="results/canny_lite.png" width="100%" /></td>
</tr>
</table>

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Depth</td>
<td>Output</td>
</tr>
<tr>
<td><img src="asset/depth.jpg" width="100%" /></td>
<td><img src="results/depth_lite.png" width="100%" /></td>
</tr>
</table>

### d. Generation Results With 2.1-2601-8steps

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Pose + Inpaint</td>
<td>Output</td>
</tr>
<tr>
<td><img src="asset/inpaint.jpg" width="100%" /><img src="asset/mask.jpg" width="100%" /></td>
<td><img src="results/inpaint.png" width="100%" /></td>
</tr>
</table>

<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
<tr>
<td>Pose + Inpaint</td>
</tr>
</table>
Z-Image-Turbo-Fun-Controlnet-Tile-2.1-2601-8steps.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ca1f20f684be3f0c53b204b2e61a83f1ac28821c8c9a48ea7d8196ce395eb71
+size 6712485600

Z-Image-Turbo-Fun-Controlnet-Tile-2.1-lite-2601-8steps.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:880bf452b060abfcbccedee56f8d3dbf8aed2cb0311b599210361b414fc8f2fd
+size 2016627488

Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53bac221dfae4279f14a3b1e6e311eac86ab39d57bf3d9a226e5aaf067a049bb
+size 6712485600

Z-Image-Turbo-Fun-Controlnet-Union-2.1-lite-2601-8steps.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa428bc857b0095cddb52cd1acd7c0c6ada4c57658ec0ed39cd64280355b39cf
+size 2016627488
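The four `.safetensors` entries above are Git LFS pointer files (the `version`/`oid`/`size` lines), so a clone without LFS fetches only these small stubs; the real weights come down via `git lfs pull`. A minimal parser for that pointer format, following the spec URL shown in the files:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (one "key value" pair per line) into a
    dict with the hash algorithm, digest, and payload size in bytes."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}
```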
results/canny.png
CHANGED
results/canny_lite.png
ADDED
results/depth.png
CHANGED
results/depth_lite.png
ADDED
results/hed.png
CHANGED
results/hed_2_1.png
ADDED
results/hed_2_1_2601.png
ADDED
results/high_res.png
CHANGED
results/inpaint.png
ADDED
results/mask_2_1.png
ADDED
results/mask_2_1_2601.png
ADDED
results/pose.png
CHANGED
results/pose2.png
CHANGED
results/pose2_lite.png
ADDED
results/pose3.png
CHANGED
results/pose_inpaint.png
CHANGED
results/pose_lite.png
ADDED