import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video
# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-14B", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("Remade-AI/POV-Driving")
prompt = "p0v_dr1v1n6, video shows a person driving a car through a burning hellscape. The driver is holding the steering wheel with both hands. Rivers of lava flow on both sides of the cracked road, and firestorms rage in the distance. The driver is looking straight ahead. The car has a digital dashboard and a touchscreen display flickering with demonic symbols."
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4", fps=16)  # Wan2.1 generates video at 16 fps

This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate POV driving videos in any scene or landscape you desire!
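The snippet above hard-codes `device_map="cuda"` with a comment about switching to "mps" on Apple devices. As a small sketch using standard PyTorch APIs, the device can instead be selected at runtime:

```python
import torch

# Pick an available device instead of hard-coding "cuda"
# ("mps" covers Apple-silicon Macs).
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The chosen string can then be passed as the `device_map` argument when loading the pipeline.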
The key trigger phrase is: p0v_dr1v1n6
For prompting, check out the example prompts; this style of prompting seems to work very well.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using the diffusion-pipe training scripts.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!
Base model
Wan-AI/Wan2.1-T2V-14B