How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Wan-AI/Wan2.1-I2V-14B-480P", torch_dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("lvfengchun/wan21_i2v_boiling_point")

# trigger words: "p3n91" and "鼻孔和耳朵冒出白烟" (white smoke rising from the
# nostrils and ears); "人物皱眉" means "the person frowns"
prompt = "p3n91,鼻孔和耳朵冒出白烟,人物皱眉"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png")

output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")

wan21_i2v_boiling_point

Prompt
p3n91,鼻孔和耳朵冒出白烟,人物皱眉
(p3n91, white smoke rising from the nostrils and ears, the person frowns)
Negative Prompt
色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走
(garish colors, overexposed, static, blurry details, subtitles, style, artwork, painting, frame, still, overall gray tone, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, crowded background, walking backwards)
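The negative prompt above is passed alongside the prompt in the pipeline call. A minimal sketch of the call arguments follows; the resolution and frame count are illustrative assumptions for the 480P checkpoint, not values stated on this card:

```python
# Illustrative call arguments for the Wan2.1 I2V pipeline.
prompt = "p3n91,鼻孔和耳朵冒出白烟,人物皱眉"
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

call_kwargs = {
    "prompt": prompt,
    "negative_prompt": negative_prompt,
    "height": 480,     # assumed for the 480P variant
    "width": 832,      # assumed widescreen default
    "num_frames": 81,  # assumed frame count; adjust to taste
}
# The dict would be unpacked into the pipeline call, e.g.:
# output = pipe(image=input_image, **call_kwargs).frames[0]
```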

Trigger words

You should use p3n91 and 鼻孔和耳朵冒出白烟 (white smoke rising from the nostrils and ears) to trigger the video generation.
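If you script many generations, the trigger tokens can be prepended automatically. `ensure_triggers` below is a hypothetical helper written for this card, not part of the Diffusers API:

```python
def ensure_triggers(prompt, triggers=("p3n91", "鼻孔和耳朵冒出白烟")):
    """Prepend any trigger token missing from the prompt, comma-separated."""
    missing = [t for t in triggers if t not in prompt]
    return ",".join(missing + [prompt]) if missing else prompt

print(ensure_triggers("人物皱眉"))  # → p3n91,鼻孔和耳朵冒出白烟,人物皱眉
```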

Download model

Download the LoRA weights in the Files & versions tab.


Model tree for lvfengchun/wan21_i2v_boiling_point

This model is a LoRA adapter for Wan-AI/Wan2.1-I2V-14B-480P.