Wan2.1 14B T2V LoRAs
A collection of Remade's Wan2.1 14B T2V LoRAs
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("Remade-AI/Abandoned-Places")

prompt = (
    "abandoned places A steady zoom-out tall abandoned building covered in "
    "vines and trees stands in the middle of an abandoned city. The sky is "
    "overcast and the air is thick with fog. The city is mostly obscured by "
    "the fog, but you can see some other buildings in the distance. There is "
    "a sense of decay and abandonment in the image."
)

output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos of abandoned places!
The key trigger phrase is: `abandoned places`
For prompting, start your prompt with the trigger phrase and check out the example prompts; this style of prompting works very well with this LoRA.
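Since the trigger phrase needs to lead the prompt, a small helper can guard against forgetting it. This is a hypothetical convenience function, not part of the Remade-AI release:

```python
# Hypothetical helper (not part of the official release): make sure a prompt
# begins with the LoRA's trigger phrase before passing it to the pipeline.
TRIGGER = "abandoned places"

def with_trigger(prompt: str, trigger: str = TRIGGER) -> str:
    """Prepend the trigger phrase unless the prompt already starts with it."""
    if prompt.lower().startswith(trigger.lower()):
        return prompt
    return f"{trigger} {prompt}"

print(with_trigger("A ruined factory at dusk, fog rolling through broken windows."))
```

The returned string can then be passed as the `prompt` argument in the snippet above.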
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using diffusion-pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!
Base model
Wan-AI/Wan2.1-T2V-14B