Z-Image-Fun
Collection · 4 items
| Name | Description |
|---|---|
| Z-Image-Fun-Lora-Distill-8-Steps.safetensors | A distillation LoRA for Z-Image that distills both the sampling steps and CFG. With this LoRA, inference requires no CFG and only 8 steps. |
*Comparison gallery (images omitted):*

- Text-to-image: output at 25 steps vs. output at 8 steps (4 examples)
- Pose + Inpaint: control inputs, output at 25 steps, output at 8 steps (2 examples)
- Pose: control input, output at 25 steps, output at 8 steps (2 examples)
- Canny: control input, output, output at 8 steps
- Depth: control input, output, output at 8 steps
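The "25 steps vs. 8 steps" comparisons above come down to sampler settings. A hedged sketch of the resulting speedup: the base guidance scale of 6.0 is an assumption, and the dictionary keys mirror common diffusion-API names rather than VideoX-Fun's actual arguments.

```python
# Illustrative sampler settings; key names mirror common diffusion APIs,
# not necessarily VideoX-Fun's actual arguments.
BASE = {"num_inference_steps": 25, "guidance_scale": 6.0}  # assumed non-distilled defaults

# The distill LoRA distills both steps and CFG: guidance_scale 1.0 disables
# CFG (no extra unconditional forward pass) and 8 steps suffice.
DISTILL = {"num_inference_steps": 8, "guidance_scale": 1.0}

def model_calls(settings: dict) -> int:
    """Denoising forward passes per image: 2 per step with CFG, 1 without."""
    per_step = 2 if settings["guidance_scale"] > 1.0 else 1
    return settings["num_inference_steps"] * per_step

print(model_calls(BASE), model_calls(DISTILL))  # 50 vs 8
```

Under these assumptions, each image costs roughly 6x fewer model calls (50 vs. 8), which is where the distill LoRA's speedup comes from.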
Go to the VideoX-Fun repository for more details.
Please clone the VideoX-Fun repository and create the required directories:
```shell
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```
Then download the weights into `models/Diffusion_Transformer` and `models/Personalized_Model`, so the layout looks like this:
```
models/
├── Diffusion_Transformer/
│   └── Z-Image/
└── Personalized_Model/
    ├── Z-Image-Fun-Lora-Distill-8-Steps.safetensors
    ├── Z-Image-Fun-Controlnet-Union-2.1.safetensors
    └── Z-Image-Fun-Controlnet-Union-2.1-lite.safetensors
```
To run the model, first set `lora_path` in `examples/z_image/predict_t2i.py` to `Personalized_Model/Z-Image-Fun-Lora-Distill-8-Steps.safetensors`, then run the script:

```shell
python examples/z_image/predict_t2i.py
```
The following scripts are also supported:
Recommended Settings: