```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("Remade-AI/Selfie-With-Younger-Self")

prompt = "The video starts with a man with a beard smiling at the camera, then s31lf13 taking a selfie with their younger self, and the younger self appears next to him with similar facial features and eye color. The younger self wears a white t-shirt and has a cream white jacket. The younger self is smiling slightly."
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png")

output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

This LoRA is trained on the Wan2.1 14B I2V 480p model and allows you to take a selfie with your younger self!
The key trigger phrase is: s31lf13 taking a selfie with their younger self
For prompting, check out the example prompt above; this style of prompting seems to work very well.
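As a minimal sketch of how a prompt can be assembled around the trigger phrase (the `build_prompt` helper is my own illustration, not part of the model card):

```python
# The trigger phrase required by this LoRA, taken from the model card.
TRIGGER = "s31lf13 taking a selfie with their younger self"

def build_prompt(opening: str, younger_self_details: str) -> str:
    # Hypothetical helper: mirrors the structure of the example prompt --
    # an opening shot, the trigger phrase, then a description of the
    # younger self's appearance.
    return f"{opening}, then {TRIGGER}, and {younger_self_details}"

prompt = build_prompt(
    "The video starts with a man with a beard smiling at the camera",
    "the younger self appears next to him wearing a white t-shirt",
)
print(TRIGGER in prompt)  # True: the trigger phrase is always embedded
```

The key point is that the trigger phrase appears verbatim somewhere in the middle of the scene description, as in the example prompt above.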
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using Diffusion Pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!
Base model
Wan-AI/Wan2.1-I2V-14B-480P