How to use styly-agents/Wan2-2-pixel-animate with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-14B-480P",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("styly-agents/Wan2-2-pixel-animate")

prompt = "A man with short gray hair plays a red electric guitar."
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generate the video frames and write them out as an MP4
output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```