---
library_name: diffusers
pipeline_tag: image-to-video
---

# Hy1.5-Quantized-Models

<img src="https://raw.githubusercontent.com/ModelTC/LightX2V/main/assets/img_lightx2v.png" width="75%" />

---

🤗 [HuggingFace](https://huggingface.co/lightx2v/Hy1.5-Quantized-Models) | [GitHub](https://github.com/ModelTC/LightX2V) | [License](https://opensource.org/licenses/Apache-2.0)

---

This repository contains quantized models for HunyuanVideo-1.5, optimized for use with LightX2V. These quantized models significantly reduce memory usage while maintaining high-quality video generation performance.

## 📋 Model List

### DIT (Diffusion Transformer) Models

* **`hy15_720p_i2v_fp8_e4m3_lightx2v.safetensors`** - 720p Image-to-Video quantized DIT model
* **`hy15_720p_t2v_fp8_e4m3_lightx2v.safetensors`** - 720p Text-to-Video quantized DIT model

### Encoder Models

* **`hy15_qwen25vl_llm_encoder_fp8_e4m3_lightx2v.safetensors`** - Quantized text encoder (Qwen2.5-VL LLM Encoder)
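Each checkpoint above is a standard `.safetensors` file, so its tensor inventory (names, dtypes, shapes) can be inspected from the JSON header alone, without loading any weights. A minimal stdlib sketch, using a tiny in-memory file with a made-up tensor name for demonstration:

```python
import io
import json
import struct

def read_safetensors_header(path_or_file):
    """Return the JSON header of a .safetensors file without loading tensors."""
    f = path_or_file if hasattr(path_or_file, "read") else open(path_or_file, "rb")
    header_len = struct.unpack("<Q", f.read(8))[0]  # little-endian u64 length prefix
    return json.loads(f.read(header_len))

# Build a tiny in-memory checkpoint; the tensor name below is made up.
header = {
    "blocks.0.attn.qkv.weight": {
        "dtype": "F8_E4M3",          # safetensors dtype tag for FP8-E4M3
        "shape": [4, 4],
        "data_offsets": [0, 16],     # 16 one-byte FP8 values
    }
}
blob = json.dumps(header).encode("utf-8")
buf = io.BytesIO(struct.pack("<Q", len(blob)) + blob + b"\x00" * 16)

meta = read_safetensors_header(buf)
for name, info in meta.items():
    print(name, info["dtype"], info["shape"])
```

Passing one of the real checkpoint paths to `read_safetensors_header` would list the actual quantized tensors and their FP8 dtypes.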

## 🚀 Quick Start

…
These models use **FP8-E4M3** quantization with the **SGL (SGLang) kernel** scheme (`fp8-sgl`). This quantization format provides:

* **Significant memory reduction**: Up to 50% reduction in VRAM usage
* **Maintained quality**: Minimal quality degradation compared to full-precision models
* **Faster inference**: Optimized kernels for accelerated computation
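To give a feel for how FP8-E4M3 storage works, here is a pure-Python sketch of per-tensor scaling: E4M3's largest finite value is 448, so weights are divided by a scale that maps their absolute maximum onto that range. This shows only the general idea; the actual `fp8-sgl` kernels and their scaling granularity may differ.

```python
# Pure-Python sketch of per-tensor scaling for FP8-E4M3 storage.
# E4M3 = 4 exponent bits, 3 mantissa bits; largest finite value is 448.
FP8_E4M3_MAX = 448.0

def compute_scale(weights):
    """Map a tensor's absolute maximum onto the FP8-E4M3 range."""
    amax = max(abs(w) for w in weights)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    scaled = [w / scale for w in weights]  # all values now lie in [-448, 448]
    return scaled, scale

scaled, scale = compute_scale([0.02, -1.5, 0.75, 896.0])
print(scale)                        # 2.0
print(max(abs(s) for s in scaled))  # 448.0
```

At inference time the kernel multiplies results by the stored scale to recover the original magnitude.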

### Requirements

…

Using quantized models provides:

* **Lower VRAM Requirements**: Enables running on GPUs with less memory (e.g., RTX 4090 24GB)
* **Faster Inference**: Optimized quantized kernels accelerate computation
* **Quality Preservation**: FP8 quantization maintains high visual quality
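The VRAM saving follows directly from bytes per weight: FP8 stores one byte per parameter versus two for BF16/FP16. A back-of-envelope estimate (the parameter count below is a placeholder for illustration, not an official HunyuanVideo-1.5 figure):

```python
# Back-of-envelope weight-memory estimate. The 8e9 parameter count is a
# placeholder for illustration, not an official HunyuanVideo-1.5 figure.
def weight_gib(n_params, bytes_per_param):
    return n_params * bytes_per_param / 2**30

n_params = 8e9
bf16 = weight_gib(n_params, 2)  # BF16/FP16: 2 bytes per weight
fp8 = weight_gib(n_params, 1)   # FP8-E4M3: 1 byte per weight
print(f"BF16 ~ {bf16:.1f} GiB, FP8 ~ {fp8:.1f} GiB")  # FP8 halves weight memory
```

Activations, the VAE, and the text encoder add further overhead, so total VRAM use is higher than the weight footprint alone.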

## 🔗 Related Resources

* [LightX2V GitHub Repository](https://github.com/ModelTC/LightX2V)
* [LightX2V Documentation](https://lightx2v-en.readthedocs.io/en/latest/)
* [HunyuanVideo-1.5 Original Model](https://huggingface.co/tencent/HunyuanVideo-1.5)
* [LightX2V Examples](https://github.com/ModelTC/LightX2V/tree/main/examples)

## 📝 Notes

* **Important**: All advanced configurations (including `enable_quantize()`) must be called **before** `create_generator()`, otherwise they will not take effect.
* The original HunyuanVideo-1.5 model weights are still required; these quantized models are used in conjunction with the original model structure.
* For best performance, we recommend using SageAttention 2 (`sage_attn2`) as the attention mode.
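The configuration-ordering rule above can be illustrated with a small mock; this stand-in class is invented for demonstration and is **not** the LightX2V API:

```python
# Mock illustrating "configure before create_generator()"; NOT the LightX2V API.
class MockPipeline:
    def __init__(self):
        self._quantize = False
        self._frozen = False

    def enable_quantize(self):
        if self._frozen:
            # Mirrors the documented pitfall: late configuration has no effect.
            raise RuntimeError("enable_quantize() must precede create_generator()")
        self._quantize = True

    def create_generator(self):
        self._frozen = True  # configuration is baked in at this point
        return {"quantized": self._quantize}

pipe = MockPipeline()
pipe.enable_quantize()         # correct order: configure first
gen = pipe.create_generator()
print(gen["quantized"])        # True
```

Calling `enable_quantize()` after `create_generator()` in this mock raises an error, mirroring the silent no-op you would get in the real pipeline.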

## 🤗 Citation

If you use these quantized models in your research, please cite:

…

## 📄 License

This model is released under the Apache 2.0 License, the same license as the original HunyuanVideo-1.5 model.