---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
tags:
- text-to-video
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    [origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background.
  output:
    url: videos/1742855529510.mp4
- text: >-
    [origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background.
  output:
    url: videos/1742861776754.mp4
- text: >-
    [origami] a monkey swinging on a branch of a tree, huge monkeys around them.
  output:
    url: videos/1742862552292.mp4

---
# Origami LoRA for WanVideo2.1
<Gallery />

## Trigger words

You should use `origami` to trigger the video generation.
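The gallery prompts above prefix the trigger as `[origami] …`, while the Diffusers example below writes `origami style …`. A tiny, purely illustrative helper (the function name and the `origami style` phrasing are assumptions, not part of this repo) that prepends the trigger when a prompt is missing it:

```python
def with_trigger(prompt: str, trigger: str = "origami") -> str:
    """Prepend the LoRA trigger word unless the prompt already contains it."""
    if trigger in prompt.lower():
        return prompt
    return f"{trigger} style {prompt}"

print(with_trigger("a crafted grasshopper moving on the jungle floor"))
# origami style a crafted grasshopper moving on the jungle floor
```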

## Using with Diffusers
```py
pip install git+https://github.com/huggingface/diffusers.git
```

```py
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P (this example renders at 480P)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

pipe.load_lora_weights("shauray/Origami_WanLora")

# For low-VRAM environments, use CPU offload instead of pipe.to("cuda"):
# pipe.enable_model_cpu_offload()

prompt = "origami style bull charging towards a man"

output = pipe(
    prompt=prompt,
    height=480,
    width=720,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```

## Download model

Weights for this model are available in Safetensors format.

[Download](/shauray/Origami_WanLora/tree/main) them in the Files & versions tab.
_Note: this LoRA is not perfect; it leaves a slight artifact toward the bottom of every generation, because the training dataset contained the same artifact and was not fully cleaned._