A Distill LoRA for Z-Image that distills both steps and CFG. It requires only 2 steps instead of 8. Thanks to the random timesteps training strategy, it is better adapted to sigmas below 0.500. The recommended sigma for the second step is between 0.800 and 0.500, and a larger LoRA strength is recommended.
ComfyUI version of Z-Image-Fun-Lora-Distill-2-Steps-2603.safetensors
Z-Image-Fun-Lora-Distill-4-Steps-2603.safetensors
A Distill LoRA for Z-Image that distills both steps and CFG. It requires only 4 steps instead of 8. Due to the addition of a random timesteps strategy, it is better adapted to sigmas below 0.500.
ComfyUI version of Z-Image-Fun-Lora-Distill-4-Steps-2603.safetensors
Z-Image-Fun-Lora-Distill-8-Steps-2603.safetensors
A Distill LoRA for Z-Image that distills both steps and CFG. Compared to Z-Image-Fun-Lora-Distill-8-Steps-2602.safetensors, the addition of a random timesteps strategy makes it better adapted to sigmas below 0.500.
ComfyUI version of Z-Image-Fun-Lora-Distill-8-Steps-2603.safetensors
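The 2-step sigma recommendation above can be sketched as a custom schedule (a minimal illustration only; the 0.65 default is an arbitrary pick inside the recommended 0.800 to 0.500 window):

```python
def two_step_sigmas(mid: float = 0.65) -> list[float]:
    """Sigma schedule for the 2-step distill LoRA: noise -> mid -> clean.

    `mid` is the sigma of the second step; per the notes above it
    should lie between 0.500 and 0.800.
    """
    if not 0.500 <= mid <= 0.800:
        raise ValueError("second-step sigma should be in [0.500, 0.800]")
    return [1.0, mid, 0.0]

print(two_step_sigmas())  # [1.0, 0.65, 0.0]
```

In ComfyUI this corresponds to feeding a custom sigma list to the sampler instead of a stock scheduler.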
b. 2602 Models and Earlier Models
| Name | Description |
|------|-------------|
| Z-Image-Fun-Lora-Distill-4-Steps-2602.safetensors | A Distill LoRA for Z-Image that distills both steps and CFG. Compared to Z-Image-Fun-Lora-Distill-8-Steps.safetensors, it requires only 4 steps instead of 8, its colors are more consistent with the original model, and its skin texture is better. A ComfyUI version of this file is also available. |
| Z-Image-Fun-Lora-Distill-8-Steps-2602.safetensors | A Distill LoRA for Z-Image that distills both steps and CFG. Compared to Z-Image-Fun-Lora-Distill-8-Steps.safetensors, its colors are more consistent with the original model and its skin texture is better. A ComfyUI version of this file is also available. |
| Z-Image-Fun-Lora-Distill-8-Steps.safetensors | A Distill LoRA for Z-Image that distills both steps and CFG. It does not require CFG and uses 8 steps for inference. |
Model Features
This is a Distill LoRA for Z-Image that distills both steps and CFG. It does not use any Z-Image-Turbo related weights and is trained from scratch. It is compatible with other Z-Image LoRAs and Controls.
This model slightly reduces output quality and changes the output composition. For detailed comparisons, please refer to the Results section.
The purpose of this model is to provide fast generation compatibility for Z-Image derivative models, not to replace Z-Image-Turbo.
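For intuition on the CFG half of the distillation: standard classifier-free guidance needs two forward passes per denoising step, while a CFG-distilled model is trained to reproduce the guided prediction in a single pass. A toy sketch (the `model` function is a made-up stand-in, not the real Z-Image transformer):

```python
import numpy as np

def model(x, cond: bool):
    # Hypothetical stand-in denoiser; NOT the real Z-Image DiT.
    return x * (1.5 if cond else 1.0)

x = np.ones(4)
w = 3.0  # guidance scale

# Standard CFG: two forward passes per step (conditional + unconditional).
uncond = model(x, cond=False)
cond_out = model(x, cond=True)
guided = uncond + w * (cond_out - uncond)

# A CFG-distilled LoRA is trained so a single conditional pass
# approximates `guided`, which is why inference runs with CFG
# disabled (guidance scale 1) at half the compute per step.
print(guided)  # [2.5 2.5 2.5 2.5]
```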
Results
Differences between the 2603 and 2602 versions
The 2602 model tends to produce blurry images with sigmas below 0.500, as the distillation model was not trained on certain steps. The 2603 model introduces a random timesteps strategy, making it better adapted to sigmas below 0.500.
As shown below, when using kl_optimal, many sigmas fall below 0.500. The 2603 model handles these cases correctly, while the 2602 model does not. Note that although kl_optimal is used in the figure, we still recommend using the simple scheduler for inference.
Z-Image-Fun-Lora-Distill-8-Steps-2602
Z-Image-Fun-Lora-Distill-8-Steps-2603
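The sigma ranges involved can be made concrete with a rough sketch. Assuming a linear flow-matching schedule (an approximation; real schedulers such as kl_optimal space sigmas differently), several of the 8 steps already fall below 0.500, which is exactly where the 2603 random-timesteps training helps:

```python
def linear_sigmas(steps: int) -> list[float]:
    # Sigma decreases linearly from 1.0 (pure noise) to 0.0 (clean image).
    return [1.0 - i / steps for i in range(steps + 1)]

sig = linear_sigmas(8)
low = [s for s in sig if 0 < s < 0.5]
print(sig)  # [1.0, 0.875, 0.75, 0.625, 0.5, 0.375, 0.25, 0.125, 0.0]
print(low)  # [0.375, 0.25, 0.125]
```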
Differences between the 2602 version and the previous model
Z-Image-Fun-Lora-Distill-8-Steps-2602
Z-Image-Fun-Lora-Distill-4-Steps-2602
Z-Image-Fun-Lora-Distill-8-Steps
Work by itself
(Image comparisons, four examples: each shows the output at 25 steps, the 8-Steps-2602 output, and the 4-Steps-2602 output.)
Work with ControlNet
(Image comparisons by control type: Pose + Inpaint (two examples), Pose, Canny, and Depth. Each shows the reference output, the 8-Steps-2602 output, and the 4-Steps-2602 output.)
Inference
Go to the VideoX-Fun repository for more details.
Please clone the VideoX-Fun repository and create the required directories:
```shell
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```
Then download the weights into `models/Diffusion_Transformer` and `models/Personalized_Model`.
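The directory layout from the commands above can also be prepared from Python (a small convenience sketch; the LoRA filename is one example from the tables above, and weights must still be downloaded separately):

```python
from pathlib import Path

root = Path("VideoX-Fun")
dt_dir = root / "models" / "Diffusion_Transformer"
pm_dir = root / "models" / "Personalized_Model"
for d in (dt_dir, pm_dir):
    d.mkdir(parents=True, exist_ok=True)  # equivalent to `mkdir -p`

# Distill LoRA weights belong under Personalized_Model, e.g.:
lora_path = pm_dir / "Z-Image-Fun-Lora-Distill-8-Steps-2603.safetensors"
print(lora_path.as_posix())
# VideoX-Fun/models/Personalized_Model/Z-Image-Fun-Lora-Distill-8-Steps-2603.safetensors
```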