---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/WanMoveLoraTesting_00003_.webp
text: '-'
base_model: 'bytedance-research/HuMo'
instance_prompt: null
license: apache-2.0
---
# HuMoveLora
<Gallery />
## Model description
<video controls width="640" src="https://huggingface.co/drozbay/HuMoveLora/resolve/main/images/WanMoveLoraTesting_00036-audio.mp4"></video>
A LoRA that aims to combine the track-controlled motion of [Wan-Move](https://github.com/ali-vilab/Wan-Move) with the human motion and speech-sync features of [HuMo](https://huggingface.co/bytedance-research/HuMo).
## Usage
- **Base model:** Wan HuMo-17B
- **In ComfyUI:** Likely requires [WanExperiments](https://github.com/drozbay/WanExperiments) to enable I2V capabilities. Chain the HuMo node with the WanMove node. **Recommended track strength: 1.5.**
- It is *not recommended* to use a HuMo reference or start image together with the WanMove start image.
- [**Example ComfyUI workflow**](./images/your_workflow_name.json)
## Version History
| Version | Notes |
|---------|-------|
| v0.1 | Proof of concept release. Further tuning should improve simultaneous speech and motion control capabilities. |
## Download model
[Download](/drozbay/HuMoveLora/tree/main) the weights from the Files & versions tab.
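
If you prefer to fetch the weights programmatically, the minimal sketch below uses `huggingface_hub.snapshot_download` to pull the weight files from this repo into a ComfyUI `models/loras` folder. The repo id comes from this page; the local ComfyUI path and the assumption that the LoRA ships as `.safetensors` files are illustrative and may need adjusting for your setup.

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Hypothetical path to a default ComfyUI install; adjust for your setup.
comfyui_loras_dir = Path("ComfyUI/models/loras")
comfyui_loras_dir.mkdir(parents=True, exist_ok=True)

# Download the LoRA weight files (assumed .safetensors) into the ComfyUI
# loras directory so they appear in the LoRA loader node.
snapshot_download(
    repo_id="drozbay/HuMoveLora",
    allow_patterns=["*.safetensors"],
    local_dir=comfyui_loras_dir,
)
```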