---
license: apache-2.0
library_name: videox_fun
---

# Z-Image-Turbo-Fun-Controlnet-Union-2.1

[![Github](https://img.shields.io/badge/🎬%20Code-VideoX_Fun-blue)](https://github.com/aigc-apps/VideoX-Fun)

## Update

- A new lite model has been added, with Control Latents applied to only 5 layers (1.9 GB). The previous Control model had two issues: insufficient mask randomness, which caused the model to learn mask patterns and auto-fill during inpainting, and overfitting between control and tile distillation, which caused artifacts at large `control_context_scale` values. Both the Control and Tile models have been retrained with a richer variety of masks and improved training schedules. Additionally, the dataset has been restructured with multi-resolution control images (512~1536) instead of a single resolution (512) for better robustness. [2026.01.12]
- During testing, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and become blurry. We performed 8-step distillation on the version 2.1 model, and the distilled model performs better with 8-step prediction. We have also uploaded a Tile model that can be used for super-resolution generation. [2025.12.22]
- Due to a typo in version 2.0, `control_layers` was used instead of `control_noise_refiner` to process refiner latents during training. Although the model converged normally, inference was slow because the `control_layers` forward pass ran twice. Version 2.1 is an urgent fix, and speed has returned to normal. [2025.12.17]

## Model Card

### a. 2601 Models

| Name | Description |
|--|--|
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps.safetensors | Compared to the old model, a more diverse variety of masks and a more reasonable training schedule were adopted, reducing bright spots/artifacts and mask-information leakage. The dataset was also restructured with multi-resolution control images (512~1536) instead of a single resolution (512) for better robustness. |
| Z-Image-Turbo-Fun-Controlnet-Tile-2.1-2601-8steps.safetensors | Compared to the old model, a higher training resolution and a more reasonable distillation schedule were used, reducing bright spots/artifacts. |
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-lite-2601-8steps.safetensors | Uses the same training scheme as the 2601 version, but adds control to fewer layers than the large model, yielding weaker control conditions. This makes it suitable for larger `control_context_scale` values, produces more natural results, and fits lower-spec machines. |
| Z-Image-Turbo-Fun-Controlnet-Tile-2.1-lite-2601-8steps.safetensors | Uses the same training scheme as the 2601 version, but adds control to fewer layers than the large model, yielding weaker control conditions. This makes it suitable for larger `control_context_scale` values, produces more natural results, and fits lower-spec machines. |

### b. Models Before 2601

| Name | Description |
|--|--|
| Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors | Version 2.1 distilled with an 8-step distillation algorithm; 8-step prediction is recommended. Compared to version 2.1 at 8 steps, images are clearer and composition is more reasonable. |
| Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors | A Tile model trained on high-definition datasets that can be used for super-resolution, with a maximum training resolution of 2048x2048. Distilled with an 8-step distillation algorithm; 8-step prediction is recommended. |
| Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors | A retrained model after fixing the typo in version 2.0, with faster single-step speed. Like version 2.0, the model lost some of its acceleration capability after training and thus requires more steps. |
| Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors | ControlNet weights for Z-Image-Turbo. Compared to version 1.0, it modifies more layers and was trained for longer. However, due to a typo in the code, the layer blocks were forwarded twice, slowing inference. Supports multiple control conditions such as Canny, Depth, Pose, and MLSD. The model lost some of its acceleration capability after training and thus requires more steps. |

## Model Features

- This ControlNet is added on 15 layer blocks and 2 refiner layer blocks (lite models add it on 3 layer blocks and 2 refiner blocks). It supports multiple control conditions, including Canny, HED, Depth, Pose, and MLSD, and can be used like a standard ControlNet.
- Inpainting mode is also supported.
- Training Process:
  - 2.0: Trained from scratch for 70,000 steps on a dataset of 1 million high-quality images covering both general and human-centric content. Training was performed at 1328 resolution using BFloat16 precision, with a batch size of 64, a learning rate of 2e-5, and a text dropout ratio of 0.10.
  - 2.1: Based on the version 2.0 weights, with training continued for an additional 11,000 steps after the typo fix, using the same parameters and dataset.
  - 2.1-8-steps: Obtained by training for 5,500 steps with an 8-step distillation algorithm on top of version 2.1.
- Note on Steps:
  - 2.0 and 2.1: As you increase the control strength (higher `control_context_scale` values), it is recommended to increase the number of inference steps accordingly to maintain generation quality. This is likely because the control model has not been distilled.
  - 2.1-8-steps: Just use 8 steps at inference.
- You can adjust `control_context_scale` for stronger control and better detail preservation. For better stability, we highly recommend using a detailed prompt. The optimal range for `control_context_scale` is 0.65 to 0.90.
- During testing of versions 2.0 and 2.1, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and produce blurry images. For strength and step testing, please refer to [Scale Test Results](#scale-test-results), which were generated with version 2.0.

## Results

### a. Difference between 2.1-8steps and 2.1-2601-8steps

The old 8-steps model showed bright spots/artifacts when `control_context_scale` was too large, while the new version does not.
*(comparison images: Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps vs. Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps)*
The old 8-steps model sometimes learned the mask pattern and tended to completely fill the masked region during object removal, while the new version does not.
*(comparison images: Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps vs. Z-Image-Turbo-Fun-Controlnet-Union-2.1-2601-8steps)*
### b. Difference between 2.1 and 2.1-8steps

8-step results:
*(comparison images: Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps vs. Z-Image-Turbo-Fun-Controlnet-Union-2.1)*
### c. Generation Results With 2.1-lite-2601-8steps

Uses the same training scheme as the 2601 version, but adds control to fewer layers than the large model, yielding weaker control conditions. This makes it suitable for larger `control_context_scale` values, produces more natural results, and fits lower-spec machines.
*(images: Pose and Canny control inputs with their outputs)*
### d. Generation Results With 2.1-2601-8steps
*(images: Depth, Pose + Inpaint, Pose, Canny, and HED control inputs with their outputs, plus a low-resolution vs. high-resolution Tile comparison)*
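For the Canny-style condition shown above, the control image is simply an edge map of a reference image. A minimal, dependency-free sketch using a plain gradient threshold in place of a true Canny detector (the function name, threshold, and array shapes are illustrative assumptions, not part of VideoX-Fun):

```python
import numpy as np

def edge_control_image(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Build a binary edge map (0/255) from a grayscale image in [0, 1].

    A real pipeline would typically use cv2.Canny; a finite-difference
    gradient magnitude is used here only to keep the sketch self-contained.
    """
    gy, gx = np.gradient(gray.astype(np.float32))  # gradients along rows, cols
    magnitude = np.hypot(gx, gy)
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)

# Toy example: a bright square on a dark background yields edges at its border.
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 1.0
control = edge_control_image(img)
```

The resulting single-channel map can then be stacked to three channels and saved as the control image for inference.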
## Inference

Go to the VideoX-Fun repository for more details. Please clone the VideoX-Fun repository and create the required directories:

```sh
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```

Then download the weights into `models/Diffusion_Transformer` and `models/Personalized_Model`:

```
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 Z-Image-Turbo/
├── 📂 Personalized_Model/
│   ├── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors
│   ├── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors
│   └── 📦 Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors
```

Then run `examples/z_image_fun/predict_t2i_control_2.1.py` or `examples/z_image_fun/predict_i2i_inpaint_2.1.py`.
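The same directory layout can be prepared from Python instead of the shell (a minimal sketch; only the directory names come from the listing above):

```python
from pathlib import Path

# Recreate the model directory layout expected by VideoX-Fun
root = Path("models")
for sub in ("Diffusion_Transformer/Z-Image-Turbo", "Personalized_Model"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# The .safetensors weights are then placed under Personalized_Model, e.g.
# models/Personalized_Model/Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors
```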
## Scale Test Results (Obsolete)

The table below shows the generation results under different combinations of diffusion steps and control scale strength:

| Diffusion Steps | Scale 0.65 | Scale 0.70 | Scale 0.75 | Scale 0.8 | Scale 0.9 | Scale 1.0 |
|:---------------:|:----------:|:----------:|:----------:|:---------:|:---------:|:---------:|
| **9** | ![](results/scale_test/9_scale_0.65.png) | ![](results/scale_test/9_scale_0.70.png) | ![](results/scale_test/9_scale_0.75.png) | ![](results/scale_test/9_scale_0.8.png) | ![](results/scale_test/9_scale_0.9.png) | ![](results/scale_test/9_scale_1.0.png) |
| **10** | ![](results/scale_test/10_scale_0.65.png) | ![](results/scale_test/10_scale_0.70.png) | ![](results/scale_test/10_scale_0.75.png) | ![](results/scale_test/10_scale_0.8.png) | ![](results/scale_test/10_scale_0.9.png) | ![](results/scale_test/10_scale_1.0.png) |
| **20** | ![](results/scale_test/20_scale_0.65.png) | ![](results/scale_test/20_scale_0.70.png) | ![](results/scale_test/20_scale_0.75.png) | ![](results/scale_test/20_scale_0.8.png) | ![](results/scale_test/20_scale_0.9.png) | ![](results/scale_test/20_scale_1.0.png) |
| **30** | ![](results/scale_test/30_scale_0.65.png) | ![](results/scale_test/30_scale_0.70.png) | ![](results/scale_test/30_scale_0.75.png) | ![](results/scale_test/30_scale_0.8.png) | ![](results/scale_test/30_scale_0.9.png) | ![](results/scale_test/30_scale_1.0.png) |
| **40** | ![](results/scale_test/40_scale_0.65.png) | ![](results/scale_test/40_scale_0.70.png) | ![](results/scale_test/40_scale_0.75.png) | ![](results/scale_test/40_scale_0.8.png) | ![](results/scale_test/40_scale_0.9.png) | ![](results/scale_test/40_scale_1.0.png) |

Parameter Description:

- Diffusion Steps: number of iteration steps for the diffusion model (9, 10, 20, 30, 40)
- Control Scale: control strength coefficient (0.65 - 1.0)
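As a companion to the table, the step-count guidance from Model Features can be sketched as a tiny helper. Only the 0.65-0.90 recommended range and the fixed 8-step rule for distilled checkpoints come from this card; the specific step thresholds for non-distilled models are illustrative assumptions:

```python
def recommend_steps(control_context_scale: float, distilled: bool) -> int:
    """Pick an inference step count following the model card's guidance.

    - 8-step distilled checkpoints (2.1-8steps, 2601) always use 8 steps.
    - For 2.0/2.1, higher control strength needs more steps; the exact
      mapping below is an assumed schedule, not a tested one.
    """
    if not 0.65 <= control_context_scale <= 1.0:
        raise ValueError("control_context_scale should be in [0.65, 1.0]")
    if distilled:
        return 8
    if control_context_scale <= 0.75:
        return 9
    if control_context_scale <= 0.9:
        return 20
    return 40

print(recommend_steps(0.8, distilled=True))   # -> 8
print(recommend_steps(0.8, distilled=False))  # -> 20
```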