How to use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```py
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("bytedance-research/HuMo", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("drozbay/HuMoveLora")

prompt = "-"
image = pipe(prompt).images[0]
```

HuMoveLora

Model description

A LoRA that aims to combine the track-controlled motion of Wan-Move with the human motion and speech-sync features of HuMo.

Usage

  • Base model: Wan HuMo-17B Model

  • In ComfyUI: likely requires WanExperiments to enable I2V capabilities. Chain the HuMo node with the WanMove node. Recommended track strength: 1.5.

  • Using a HuMo reference or start image together with the WanMove start image is not recommended.

  • Example ComfyUI workflow

Version History

| Version | Notes |
| --- | --- |
| v0.1 | Proof-of-concept release. Further tuning should improve simultaneous speech and motion control capabilities. |

Download model

Download the weights from the Files & versions tab.
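The download can also be scripted with the `huggingface_hub` client. A short sketch using the standard `list_repo_files` / `hf_hub_download` API (network access required; the exact file names in the repository are not listed on this card):

```python
from huggingface_hub import hf_hub_download, list_repo_files

# List everything in the LoRA repository.
files = list_repo_files("drozbay/HuMoveLora")

# LoRA weights for Diffusers are typically stored as .safetensors.
weight_files = [f for f in files if f.endswith(".safetensors")]
print(weight_files)

# Each weight file can then be fetched locally, e.g.:
# local_path = hf_hub_download("drozbay/HuMoveLora", filename=weight_files[0])
```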
