import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Wan-AI/Wan2.2-T2V-A14B", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("joyfox/Wan2.2-T2V-EVA")

# "mrx" is the trigger phrase; the rest describes the scene
# ("a woman in a red combat suit sits at a table eating")
prompt = "mrx, 一个女人穿着红色战斗服坐在桌子前吃饭"

# text-to-video: the pipeline takes only a prompt, no input image
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")

This LoRA is trained on the Wan2.2-T2V-A14B model.
The key trigger phrase is: mrx
For best results, begin the prompt with the trigger phrase, followed by a description of the scene, as in the example above.
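The structure used in the example, the trigger phrase followed by a comma and the scene description, can be sketched as a small helper (the function name `build_prompt` is illustrative, not part of the LoRA's tooling):

```python
def build_prompt(description: str, trigger: str = "mrx") -> str:
    """Prepend the LoRA trigger phrase to a scene description."""
    return f"{trigger}, {description}"

# build_prompt("a woman in a red combat suit sits at a table eating")
# -> "mrx, a woman in a red combat suit sits at a table eating"
```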
Base model
Wan-AI/Wan2.2-T2V-A14B