How to use with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch device_map to "mps" on Apple devices
# (older diffusers releases expect torch_dtype= instead of dtype=)
pipe = DiffusionPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("AlekseyCalvin/Wan1.3B_CausVid_LoRA_MisubiDiffusers_Conversion")

prompt = "A man with short gray hair plays a red electric guitar."

output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
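Conceptually, `load_lora_weights` attaches a low-rank update to each targeted weight matrix rather than replacing it. A minimal NumPy sketch of the idea (shapes, names, and scaling here are illustrative, not Wan-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank, alpha = 8, 6, 2, 4.0

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((rank, d_in))   # LoRA down-projection
B = np.zeros((d_out, rank))             # LoRA up-projection (zero-initialized)

# Effective weight after applying the adapter, scaled by alpha / rank.
W_eff = W + (alpha / rank) * (B @ A)

# With B zero-initialized, the adapter starts out as a no-op.
assert np.allclose(W_eff, W)
```

Because the adapter only stores the two small matrices `A` and `B` (plus scaling metadata), a LoRA checkpoint is tiny compared with the base model, which is why distributing an accelerator like CausVid as a LoRA is practical.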

CausVid Accelerator Misubi-format Conversion

Text/Image to Video Low Rank Adapter (LoRA)

For Wan2.1 1.3B family models (Base, SkyReels V2, Fun-InP 1.3B, etc.)

Based on Kijai's original ComfyUI-oriented LoRA extraction from the CausVid-accelerated variant of Wan2.1 1.3B.
This conversion may work better than Kijai's source version for those using CausVid outside of a ComfyUI environment.
Try this version if Kijai's original LoRA does not work for you at all and crashes or fails to infer.
For example, this version works in Draw Things, whereas Kijai's original (as of this writing) simply crashes.
This may or may not extend to other non-Comfy Wan inference environments/backends/frameworks.
Please share any other confirmed use cases in the "Community" tab!
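For those curious what such a conversion involves: ComfyUI-style and Diffusers-style LoRA checkpoints mostly differ in state-dict key naming, so converting is largely a key-renaming pass over the checkpoint. A minimal sketch with made-up key patterns (the real Wan mapping is more involved; these prefixes and suffixes are assumptions for illustration):

```python
def convert_keys(state_dict, src_prefix="diffusion_model.", dst_prefix="transformer."):
    """Rename LoRA state-dict keys from a ComfyUI-style layout to a
    Diffusers-style layout. Hypothetical patterns, for illustration only."""
    converted = {}
    for key, tensor in state_dict.items():
        if key.startswith(src_prefix):
            key = dst_prefix + key[len(src_prefix):]
        # ComfyUI-style checkpoints often use ".lora_down."/".lora_up.";
        # Diffusers-style loaders generally expect ".lora_A."/".lora_B.".
        key = key.replace(".lora_down.weight", ".lora_A.weight")
        key = key.replace(".lora_up.weight", ".lora_B.weight")
        converted[key] = tensor
    return converted

sd = {
    "diffusion_model.blocks.0.attn.q.lora_down.weight": "A",
    "diffusion_model.blocks.0.attn.q.lora_up.weight": "B",
}
print(convert_keys(sd))
```

Renaming alone is not always sufficient (alpha/scaling metadata and fused layers may also need handling), which is why a given conversion can load cleanly in one backend and crash in another.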

