Instructions for using Wan-AI/Wan2.1-VACE-14B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use Wan-AI/Wan2.1-VACE-14B with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Switch device_map to "mps" for Apple devices.
# For GPUs with less VRAM, see the offload note after the notebook links below.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-14B", dtype=torch.bfloat16, device_map="cuda"
)
# Device placement is handled by device_map, so no pipe.to("cuda") call is needed.

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
- Google Colab
- Kaggle
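At bfloat16, the 14B weights alone are on the order of 28 GB, so the snippet above assumes a large GPU. A minimal sketch of a lower-VRAM alternative, assuming a recent Diffusers release where `enable_model_cpu_offload` is supported for this pipeline:
```python
import torch
from diffusers import DiffusionPipeline

# Load without an explicit device_map, then let Diffusers shuttle submodules
# between CPU and GPU as each one is needed (slower, but far less VRAM).
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-14B", dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
```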
Running the full model?
Hello,
I have noticed that the 14B version is split into 7 separate parts. How do I combine them into one file for use in ComfyUI?
You generally use a quantized or fine-tuned model. ComfyUI has them, as does Kijai. There are also GGUFs available. Check the model card for where to find them; most ComfyUI workflows also include a download link to the models.
Yes, I know about quantized and fine-tuned models, but I wanted to test the full capabilities of the model, since I built a high-end machine that can run the full version.
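For completeness, the seven parts are standard Hugging Face safetensors shards, and merging them offline is mechanical. A minimal sketch, assuming the usual `*-00001-of-00007.safetensors` layout with an accompanying `.index.json` (the filenames below are illustrative, and whether a given ComfyUI loader node accepts the merged file is a separate question):
```python
import json
from pathlib import Path

from safetensors import safe_open
from safetensors.torch import save_file

# Local snapshot of the repo; adjust paths to your download location.
shard_dir = Path("Wan2.1-VACE-14B")
index_path = shard_dir / "diffusion_pytorch_model.safetensors.index.json"
weight_map = json.loads(index_path.read_text())["weight_map"]

# Read every tensor from its shard. Note: this materializes all ~28 GB of
# bf16 weights in system RAM before writing the single output file.
tensors = {}
for shard in sorted(set(weight_map.values())):
    with safe_open(str(shard_dir / shard), framework="pt") as f:
        for name in f.keys():
            tensors[name] = f.get_tensor(name)

save_file(tensors, str(shard_dir / "wan2.1_vace_14b_merged.safetensors"))
```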
After running the model locally for a little while, it started asking me for a DASHSCOPE_API_KEY. Alibaba doesn't seem to release these? Can I roll back to a previous version of the interface to avoid using their storage space? I'm not completely clear on why their service now requires an API key.
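A note on the key itself: DashScope is Alibaba Cloud's hosted model API (now part of Model Studio), and keys are issued through its console rather than shipped with the model. In the Wan2.1 reference scripts, the DashScope call is typically part of the optional prompt-extension step, so running with prompt extension disabled, or pointed at a local model, avoids the key entirely; check the repo's generation script for the exact flags. If you do want the hosted path, a minimal sketch of supplying the key via the dashscope SDK's default environment variable:
```python
import os

# The dashscope Python SDK reads its key from this variable by default.
# The value below is a placeholder; real keys come from the Alibaba Cloud
# Model Studio (DashScope) console.
os.environ["DASHSCOPE_API_KEY"] = "sk-your-key-here"
```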