---
base_model:
- lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill
base_model_relation: quantized
library_name: gguf
tags:
- text-to-video
- image-to-video
- video-to-video
- quantized
language:
- en
license: apache-2.0
---

This is a GGUF conversion of [lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill](https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill) with the VACE scopes from [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) merged in.

The VACE scopes were extracted and injected into the target model using scripts provided by [wsbagnsv1](https://huggingface.co/wsbagnsv1).

All quantized versions were created from the FP16 model using the conversion scripts provided by city96, available in the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.

## Usage

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | -------------------------------------------------- | ------------------------------ | ---------------- |
| Main Model | Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |

[**ComfyUI example workflow**](https://docs.comfy.org/tutorials/video/wan/vace)
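
As a quick-start sketch, the support models from the table above can be fetched with `huggingface-cli`. The ComfyUI install path and the Q8_0 text-encoder quant are assumptions; pick whichever quant from the linked repositories fits your hardware.

```shell
#!/bin/sh
# Sketch: place the support models into a ComfyUI install.
# COMFY path and the Q8_0 quant choice are assumptions -- adjust as needed.
COMFY="${COMFY:-$HOME/ComfyUI}"
mkdir -p "$COMFY/models/unet" "$COMFY/models/text_encoders" "$COMFY/models/vae"

# Text encoder (GGUF variant from city96's repo; filename assumed, check the file list)
huggingface-cli download city96/umt5-xxl-encoder-gguf \
  umt5-xxl-encoder-Q8_0.gguf --local-dir "$COMFY/models/text_encoders"

# VAE (filename as listed in the table above)
huggingface-cli download Kijai/WanVideo_comfy \
  Wan2_1_VAE_bf16.safetensors --local-dir "$COMFY/models/vae"
```

The main model GGUF from this repo goes into `$COMFY/models/unet` the same way.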

### Notes

*All original licenses and restrictions from the base models still apply.*

## Reference

- For an overview of quantization types, see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types) documentation.