Instructions to use Wan-AI/Wan2.2-Animate-14B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Wan-AI/Wan2.2-Animate-14B with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-Animate-14B",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
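The snippet above hard-codes `device_map="cuda"`; to run the same code unchanged on Apple silicon or CPU, a small helper can pick the device at runtime. A minimal sketch — the helper name is our own, not part of Diffusers:

```python
def pick_device() -> str:
    """Return the best available torch device string: "cuda", "mps", or "cpu"."""
    try:
        import torch
    except ImportError:
        # torch not installed: nothing to accelerate
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```

You would then pass the result to `from_pretrained`, e.g. `device_map=pick_device()`.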
How to train Lora?
#3
by Jeynix - opened
Will the model be released in a Diffusers format like Wan2.2-Animate-A14B-Diffusers? Or which other model can be used as a base for properly training a LoRA that works with Wan2.2-Animate-A14B? Thanks
Animate comes with a Relight LoRA specific to the new model. Per the white paper, it was trained on two frames with variant lighting. It's unclear to me whether that means the model can only be trained on image pairs. The documentation also indicates Animate shares similar inference logic with I2V; I just don't know to what extent that impacts dataset structure.
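Going by the white paper's "two frames with variant lighting" description, a paired training sample might be organized like the sketch below. The field names are assumptions for illustration, not an official dataset schema:

```python
from dataclasses import dataclass


@dataclass
class RelightPair:
    """Hypothetical record for one paired-lighting sample (assumed layout)."""
    frame_a_path: str  # frame under the first lighting condition
    frame_b_path: str  # same content under a second lighting condition
    caption: str       # text prompt shared by both frames


# Example record with made-up file names:
pair = RelightPair(
    frame_a_path="shot01_warm.png",
    frame_b_path="shot01_cool.png",
    caption="portrait, studio light",
)
```

If pair training is indeed required, a dataset would then be a list of such records rather than independent single frames.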
It would be really fucking cool if you could train it on a dataset of [input image, input video, output video].