shauray8 committed on
Commit 5bb8e15 · 1 Parent(s): b5e3f6e

model and readme
README.md CHANGED
@@ -1,3 +1,76 @@
  ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - Wan-AI/Wan2.1-T2V-14B
+ tags:
+ - text-to-video
+ - lora
+ - diffusers
+ - template:diffusion-lora
+ widget:
+ - text: >-
+     [origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background.
+   output:
+     url: videos/1742855529510.mp4
+ - text: >-
+     [origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background.
+   output:
+     url: videos/1742861776754.mp4
+ - text: >-
+     [origami] a monkey swinging on a branch of a tree, huge monkeys around them.
+   output:
+     url: videos/1742862552292.mp4
+
+ ---
+ # Origami LoRA for WanVideo2.1
+ <Gallery />
+
+ ## Trigger words
+
+ You should use `origami` in your prompt to trigger the origami style.
+
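The widget prompts above wrap the trigger word in brackets; a minimal sketch of that convention (the helper name is ours, purely illustrative):

```python
TRIGGER = "origami"

def with_trigger(prompt: str) -> str:
    """Prefix a prompt with the LoRA trigger word, as in the widget examples."""
    return f"[{TRIGGER}] {prompt}"

print(with_trigger("a crafted grasshopper moving on the jungle floor"))
```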
+ ## Using with Diffusers
+
+ Install `diffusers` from source:
+
+ ```shell
+ pip install git+https://github.com/huggingface/diffusers.git
+ ```
+
+ ```py
+ import torch
+ from diffusers.utils import export_to_video
+ from diffusers import AutoencoderKLWan, WanPipeline
+ from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
+
+ # Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
+ model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
+ vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
+ pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
+ flow_shift = 5.0  # 5.0 for 720P, 3.0 for 480P
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
+ pipe.to("cuda")
+
+ pipe.load_lora_weights("shauray/Origami_WanLora")
+
+ pipe.enable_model_cpu_offload()  # optional, for low-VRAM environments (can replace pipe.to("cuda") above)
+
+ prompt = "origami style bull charging towards a man"
+
+ output = pipe(
+     prompt=prompt,
+     height=480,
+     width=720,
+     num_frames=81,
+     guidance_scale=5.0,
+ ).frames[0]
+ export_to_video(output, "output.mp4", fps=16)
+ ```
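The `flow_shift` comment above encodes a resolution rule of thumb (5.0 for 720P, 3.0 for 480P); a tiny sketch of picking it automatically — the helper name is ours, not part of `diffusers`:

```python
def flow_shift_for(height: int, width: int) -> float:
    """Return 5.0 for 720P-class outputs, 3.0 for 480P, per the comment above."""
    return 5.0 if min(height, width) >= 720 else 3.0

print(flow_shift_for(480, 720))
```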
+
+ ## Download model
+
+ Weights for this model are available in Safetensors format.
+
+ [Download](/shauray/Origami_WanLora/tree/main) them in the Files & versions tab.
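For scripted downloads, `huggingface_hub.hf_hub_download` is the usual route; as an alternative, a minimal sketch that builds a direct-download URL assuming the Hub's standard `resolve` pattern (the helper name is ours):

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct-download URL using the Hub's resolve pattern."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url("shauray/Origami_WanLora", "origami_000000500.safetensors")
print(url)
```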
+ ---
  license: mit
  ---
origami_000000500.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e4f299f5e68b61c25253a71cc4107b44d1b447cfeb050f3779f294d159a2140
+ size 306807608
videos/1742855529510.mp4 ADDED
Binary file (343 kB)
 
videos/1742861776754.mp4 ADDED
Binary file (372 kB)
 
videos/1742862552292.mp4 ADDED
Binary file (634 kB)