update README
- README.md +14 -3
- README_CN.md +14 -4
README.md
CHANGED
@@ -42,9 +42,11 @@ HunyuanVideo-1.5 is a video generation model that delivers top-tier quality with
 <a href=https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5 target="_blank"><img src= https://img.shields.io/badge/Page-bb8a2e.svg?logo=github height=22px></a>
 <a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/report/HunyuanVideo_1_5.pdf" target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
 <a href=https://x.com/TencentHunyuan target="_blank"><img src=https://img.shields.io/badge/Hunyuan-black.svg?logo=x height=22px></a>
-<a href="https://
+<a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/assets/HunyuanVideo_1_5_Prompt_Handbook_EN.md" target="_blank"><img src=https://img.shields.io/badge/📚-PromptHandBook-blue.svg?logo=book height=22px></a> <br/>
 <a href="./ComfyUI/README.md" target="_blank"><img src=https://img.shields.io/badge/ComfyUI-blue.svg?logo=book height=22px></a>
 <a href="https://github.com/ModelTC/LightX2V" target="_blank"><img src=https://img.shields.io/badge/LightX2V-yellow.svg?logo=book height=22px></a>
+<a href="https://tusi.cn/models/933574988890423836" target="_blank"><img src=https://img.shields.io/badge/吐司-purple.svg?logo=book height=22px></a>
+<a href="https://tensor.art/models/933574988890423836" target="_blank"><img src=https://img.shields.io/badge/TensorArt-cyan.svg?logo=book height=22px></a>
 
 </div>
 
@@ -56,6 +58,7 @@ HunyuanVideo-1.5 is a video generation model that delivers top-tier quality with
 
 ## 🔥🔥🔥 News
 👋 Nov 20, 2025: We release the inference code and model weights of HunyuanVideo-1.5.
+🚀 Latest: We now support cache inference, achieving approximately 2x speedup! Pull the latest code to experience it.
 
 
 ## 🎥 Demo
@@ -168,6 +171,7 @@ pip install -i https://mirrors.tencent.com/pypi/simple/ --upgrade tencentcloud-s
 ```bash
 git clone https://github.com/Tencent-Hunyuan/flex-block-attn.git
 cd flex-block-attn
+git submodule update --init --recursive
 python3 setup.py install
 ```
 
@@ -191,7 +195,7 @@ Download the pretrained models before generating videos. Detailed instructions a
 ### Prompt Writing Handbook
 Prompt enhancement plays a crucial role in enabling our model to generate high-quality videos. By writing longer and more detailed prompts, the generated video will be significantly improved. We encourage you to craft comprehensive and descriptive prompts to achieve the best possible video quality. we recommend community partners consulting our official guide on how to write effective prompts.
 
-**Reference:** **[HunyuanVideo-1.5 Prompt Handbook](https://
+**Reference:** **[HunyuanVideo-1.5 Prompt Handbook](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/assets/HunyuanVideo_1_5_Prompt_Handbook_EN.md)**
 
 ### System Prompts for Automatic Prompt Enhancement
 For users seeking to optimize prompts for other large models, it is recommended to consult the definition of `t2v_rewrite_system_prompt` in the file `hyvideo/utils/rewrite/t2v_prompt.py` to guide text-to-video rewriting. Similarly, for image-to-video rewriting, refer to the definition of `i2v_rewrite_system_prompt` in `hyvideo/utils/rewrite/i2v_prompt.py`.
@@ -229,9 +233,10 @@ OUTPUT_PATH=./outputs/output.mp4
 N_INFERENCE_GPU=8 # Parallel inference GPU count
 CFG_DISTILLED=true # Inference with CFG distilled model, 2x speedup
 SPARSE_ATTN=false # Inference with sparse attention (only 720p models are equipped with sparse attention). Please ensure flex-block-attn is installed
-SAGE_ATTN=
+SAGE_ATTN=true # Inference with SageAttention
 REWRITE=true # Enable prompt rewriting. Please ensure rewrite vLLM server is deployed and configured.
 OVERLAP_GROUP_OFFLOADING=true # Only valid when group offloading is enabled, significantly increases CPU memory usage but speeds up inference
+ENABLE_CACHE=true # Enable feature cache during inference. Significantly speeds up inference.
 MODEL_PATH=ckpts # Path to pretrained model
 
 torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
@@ -243,6 +248,7 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 --cfg_distilled $CFG_DISTILLED \
 --sparse_attn $SPARSE_ATTN \
 --use_sageattn $SAGE_ATTN \
+--enable_cache $ENABLE_CACHE \
 --rewrite $REWRITE \
 --output_path $OUTPUT_PATH \
 --overlap_group_offloading $OVERLAP_GROUP_OFFLOADING \
@@ -287,6 +293,11 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 | `--use_sageattn` | bool | No | `false` | Enable SageAttention (use `--use_sageattn` or `--use_sageattn true/1` to enable, `--use_sageattn false/0` to disable) |
 | `--sage_blocks_range` | str | No | `0-53` | SageAttention blocks range (e.g., `0-5` or `0,1,2,3,4,5`) |
 | `--enable_torch_compile` | bool | No | `false` | Enable torch compile for transformer (use `--enable_torch_compile` or `--enable_torch_compile true/1` to enable, `--enable_torch_compile false/0` to disable) |
+| `--enable_cache` | bool | No | `false` | Enable cache for transformer (use `--enable_cache` or `--enable_cache true/1` to enable, `--enable_cache false/0` to disable) |
+| `--cache_start_step` | int | No | `11` | Start step to skip when using cache |
+| `--cache_end_step` | int | No | `45` | End step to skip when using cache |
+| `--total_steps` | int | No | `50` | Total inference steps |
+| `--cache_step_interval` | int | No | `4` | Step interval to skip when using cache |
 
 **Note:** Use `--nproc_per_node` to specify the number of GPUs. For example, `--nproc_per_node=8` uses 8 GPUs.
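The new `--cache_*` flags define a step-skipping schedule during sampling. The actual policy lives in `generate.py` and is not shown in this diff; the sketch below illustrates one plausible reading of the defaults (`cache_start_step=11`, `cache_end_step=45`, `cache_step_interval=4`, `total_steps=50`), in which steps inside the cache window reuse cached features except every `cache_step_interval`-th step, which recomputes them:

```shell
#!/usr/bin/env bash
# Hypothetical illustration of the cache schedule implied by the defaults.
# Assumption (not confirmed by this diff): within [START, END) a step reuses
# cached features unless it falls on the INTERVAL grid, which recomputes.
START=11; END=45; INTERVAL=4; TOTAL=50
REUSED=0; RECOMPUTED=0
for ((step = 0; step < TOTAL; step++)); do
  if ((step >= START && step < END && (step - START) % INTERVAL != 0)); then
    REUSED=$((REUSED + 1))        # serve this step from the feature cache
  else
    RECOMPUTED=$((RECOMPUTED + 1))  # full transformer forward pass
  fi
done
echo "reused=$REUSED recomputed=$RECOMPUTED"
```

Under this reading, only 25 of the 50 steps run a full forward pass, which lines up with the roughly 2x speedup claimed in the News entry; treat the exact window semantics as an assumption until checked against `generate.py`.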
README_CN.md
CHANGED
@@ -26,10 +26,11 @@ HunyuanVideo-1.5作为一款轻量级视频生成模型,仅需83亿参数即
 <a href=https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5 target="_blank"><img src= https://img.shields.io/badge/Page-bb8a2e.svg?logo=github height=22px></a>
 <a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/report/HunyuanVideo_1_5.pdf" target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
 <a href=https://x.com/TencentHunyuan target="_blank"><img src=https://img.shields.io/badge/Hunyuan-black.svg?logo=x height=22px></a>
-<a href="https://
+<a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/assets/HunyuanVideo_1_5_Prompt_Handbook_EN.md" target="_blank"><img src=https://img.shields.io/badge/📚-PromptHandBook-blue.svg?logo=book height=22px></a> <br/>
 <a href="./ComfyUI/README.md" target="_blank"><img src=https://img.shields.io/badge/ComfyUI-blue.svg?logo=book height=22px></a>
 <a href="https://github.com/ModelTC/LightX2V" target="_blank"><img src=https://img.shields.io/badge/LightX2V-yellow.svg?logo=book height=22px></a>
-
+<a href="https://tusi.cn/models/933574988890423836" target="_blank"><img src=https://img.shields.io/badge/吐司-purple.svg?logo=book height=22px></a>
+<a href="https://tensor.art/models/933574988890423836" target="_blank"><img src=https://img.shields.io/badge/TensorArt-cyan.svg?logo=book height=22px></a>
 </div>
 
 
@@ -40,6 +41,7 @@ HunyuanVideo-1.5作为一款轻量级视频生成模型,仅需83亿参数即
 
 ## 🔥🔥🔥 最新动态
 👋 2025年11月20日: 我们开源了 HunyuanVideo-1.5的代码和推理权重
+🚀 最新: 我们现已支持 cache 推理,可实现约两倍加速!请 pull 最新代码体验。
 
 ## 🎥 演示视频
 <div align="center">
@@ -151,6 +153,7 @@ pip install -i https://mirrors.tencent.com/pypi/simple/ --upgrade tencentcloud-s
 ```bash
 git clone https://github.com/Tencent-Hunyuan/flex-block-attn.git
 cd flex-block-attn
+git submodule update --init --recursive
 python3 setup.py install
 ```
 
@@ -175,7 +178,7 @@ pip install -i https://mirrors.tencent.com/pypi/simple/ --upgrade tencentcloud-s
 提示词增强在我们的模型生成高质量视频方面起着至关重要的作用。通过撰写更长、更详细的提示词,生成的视频质量将得到显著改善。我们鼓励您编写全面且描述性的提示词,以获得最佳的视频质量。我们建议社区伙伴参考我们的官方指南,了解如何撰写有效的提示词。
 
 
-**参考:** **[HunyuanVideo-1.5 提示词手册](https://
+**参考:** **[HunyuanVideo-1.5 提示词手册](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/assets/HunyuanVideo_1_5_Prompt_Handbook_EN.md)**
 
 
 ### 自动提示词增强的系统提示词
@@ -216,9 +219,10 @@ OUTPUT_PATH=./outputs/output.mp4
 N_INFERENCE_GPU=8 # 并行推理 GPU 数量
 CFG_DISTILLED=true # 使用 CFG 蒸馏模型进行推理,2倍加速
 SPARSE_ATTN=false # 使用稀疏注意力进行推理(仅 720p 模型配备了稀疏注意力)。请确保 flex-block-attn 已安装
-SAGE_ATTN=
+SAGE_ATTN=true # 使用 SageAttention 进行推理
 REWRITE=true # 启用提示词重写。请确保 rewrite vLLM server 已部署和配置。
 OVERLAP_GROUP_OFFLOADING=true # 仅在组卸载启用时有效,会显著增加 CPU 内存占用,但能够提速
+ENABLE_CACHE=true # 启用特征缓存进行推理。显著提升推理速度
 MODEL_PATH=ckpts # 预训练模型路径
 
 torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
@@ -230,6 +234,7 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 --cfg_distilled $CFG_DISTILLED \
 --sparse_attn $SPARSE_ATTN \
 --use_sageattn $SAGE_ATTN \
+--enable_cache $ENABLE_CACHE \
 --rewrite $REWRITE \
 --output_path $OUTPUT_PATH \
 --overlap_group_offloading $OVERLAP_GROUP_OFFLOADING \
@@ -273,6 +278,11 @@ torchrun --nproc_per_node=$N_INFERENCE_GPU generate.py \
 | `--use_sageattn` | bool | 否 | `false` | 启用 SageAttention(使用 `--use_sageattn` 或 `--use_sageattn true/1` 来启用,`--use_sageattn false/0` 来禁用) |
 | `--sage_blocks_range` | str | 否 | `0-53` | SageAttention 块范围(例如:`0-5` 或 `0,1,2,3,4,5`) |
 | `--enable_torch_compile` | bool | 否 | `false` | 启用 torch compile 以优化 transformer(使用 `--enable_torch_compile` 或 `--enable_torch_compile true/1` 来启用,`--enable_torch_compile false/0` 来禁用) |
+| `--enable_cache` | bool | 否 | `false` | 启用 transformer 缓存(使用 `--enable_cache` 或 `--enable_cache true/1` 来启用,`--enable_cache false/0` 来禁用) |
+| `--cache_start_step` | int | 否 | `11` | 使用缓存时跳过的起始步数 |
+| `--cache_end_step` | int | 否 | `45` | 使用缓存时跳过的结束步数 |
+| `--total_steps` | int | 否 | `50` | 总推理步数 |
+| `--cache_step_interval` | int | 否 | `4` | 使用缓存时跳过的步数间隔 |
 
 **注意:** 使用 `--nproc_per_node` 指定使用的 GPU 数量。例如,`--nproc_per_node=8` 表示使用 8 个 GPU。
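Both argument tables describe the same boolean convention for flags such as `--use_sageattn` and `--enable_cache`: `true`/`1` to enable, `false`/`0` to disable. A minimal sketch of normalizing such values in shell, where `parse_bool` is a hypothetical helper for illustration only (not part of `generate.py`):

```shell
#!/usr/bin/env bash
# parse_bool is a hypothetical helper illustrating the true/1 and false/0
# convention from the argument tables; the actual parser in generate.py
# may accept a different set of spellings.
parse_bool() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    true|1)  echo 1 ;;
    false|0) echo 0 ;;
    *) echo "invalid boolean: $1" >&2; return 1 ;;
  esac
}

SAGE=$(parse_bool "True")   # case-insensitive: normalizes to 1
CACHE=$(parse_bool "0")     # normalizes to 0
echo "SAGE=$SAGE CACHE=$CACHE"
```

Normalizing once at the top of a launch script keeps the `torchrun ... --use_sageattn $SAGE_ATTN` style invocations shown in the diff unambiguous regardless of how the user spelled the value.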