lightx2v committed
Commit 6b227b8 · verified · 1 Parent(s): 976c1ab

Update README.md

Files changed (1):
  1. README.md +27 -18
README.md CHANGED

@@ -12,20 +12,29 @@ pipeline_tags:
 library_name: diffusers
 pipeline_tag: image-to-video
 ---
+
 # Hy1.5-Quantized-Models
 
-This repository contains quantized models for [HunyuanVideo-1.5](https://huggingface.co/tencent/HunyuanVideo-1.5) optimized for use with [LightX2V](https://github.com/ModelTC/LightX2V). These quantized models significantly reduce memory usage while maintaining high-quality video generation performance.
+<img src="https://raw.githubusercontent.com/ModelTC/LightX2V/main/assets/img_lightx2v.png" width="75%" />
+
+---
+
+🤗 [HuggingFace](https://huggingface.co/lightx2v/Hy1.5-Quantized-Models) | [GitHub](https://github.com/ModelTC/LightX2V) | [License](https://opensource.org/licenses/Apache-2.0)
+
+---
+
+This repository contains quantized models for HunyuanVideo-1.5 optimized for use with LightX2V. These quantized models significantly reduce memory usage while maintaining high-quality video generation performance.
 
 ## 📋 Model List
 
 ### DIT (Diffusion Transformer) Models
 
-- **`hy15_720p_i2v_fp8_e4m3_lightx2v.safetensors`** - 720p Image-to-Video quantized DIT model
-- **`hy15_720p_t2v_fp8_e4m3_lightx2v.safetensors`** - 720p Text-to-Video quantized DIT model
+* **`hy15_720p_i2v_fp8_e4m3_lightx2v.safetensors`** - 720p Image-to-Video quantized DIT model
+* **`hy15_720p_t2v_fp8_e4m3_lightx2v.safetensors`** - 720p Text-to-Video quantized DIT model
 
 ### Encoder Models
 
-- **`hy15_qwen25vl_llm_encoder_fp8_e4m3_lightx2v.safetensors`** - Quantized text encoder (Qwen2.5-VL LLM Encoder)
+* **`hy15_qwen25vl_llm_encoder_fp8_e4m3_lightx2v.safetensors`** - Quantized text encoder (Qwen2.5-VL LLM Encoder)
 
 ## 🚀 Quick Start
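The checkpoints listed in the hunk above can be sanity-checked after download with the `safetensors` library. The sketch below assumes a local copy of the I2V file and simply tallies the stored dtypes: an FP8-E4M3 checkpoint should report mostly `torch.float8_e4m3fn` tensors, with any quantization scale tensors kept in higher precision.

```python
# Sketch: tally the tensor dtypes stored in a downloaded quantized checkpoint.
from collections import Counter

from safetensors import safe_open

path = "hy15_720p_i2v_fp8_e4m3_lightx2v.safetensors"  # assumes a local download

dtype_counts = Counter()
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        dtype_counts[str(f.get_tensor(key).dtype)] += 1

# An FP8-E4M3 checkpoint should be dominated by torch.float8_e4m3fn entries.
for dtype, n in sorted(dtype_counts.items()):
    print(f"{dtype}: {n} tensors")
```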
 
@@ -195,9 +204,9 @@ pipe.generate(
 
 These models use **FP8-E4M3** quantization with the **SGL (SGLang) kernel** scheme (`fp8-sgl`). This quantization format provides:
 
-- **Significant memory reduction**: Up to 50% reduction in VRAM usage
-- **Maintained quality**: Minimal quality degradation compared to full precision models
-- **Faster inference**: Optimized kernels for accelerated computation
+* **Significant memory reduction**: Up to 50% reduction in VRAM usage
+* **Maintained quality**: Minimal quality degradation compared to full precision models
+* **Faster inference**: Optimized kernels for accelerated computation
 
 ### Requirements
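The "up to 50%" figure in this hunk follows from storage width: FP8 holds one byte per weight where BF16/FP16 hold two, so the weight tensors themselves halve in size. A back-of-the-envelope sketch (the parameter count is illustrative, not the actual size of the HunyuanVideo-1.5 DIT):

```python
# Worked example for the VRAM claim: halving bytes-per-weight halves weight memory.
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Raw weight storage in gigabytes."""
    return n_params * bytes_per_param / 1e9

n = 8e9  # illustrative parameter count, not the real DIT size
bf16, fp8 = weight_gb(n, 2.0), weight_gb(n, 1.0)
print(f"BF16 weights: {bf16:.0f} GB, FP8 weights: {fp8:.0f} GB, "
      f"saving {100 * (1 - fp8 / bf16):.0f}%")
# Activations, attention buffers, and quantization scales are not shrunk,
# which is why the end-to-end savings are "up to" rather than exactly 50%.
```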
 
@@ -220,22 +229,22 @@ For more details on quantization schemes, please refer to the [LightX2V Quantiza
 
 Using quantized models provides:
 
-- **Lower VRAM Requirements**: Enables running on GPUs with less memory (e.g., RTX 4090 24GB)
-- **Faster Inference**: Optimized quantized kernels accelerate computation
-- **Quality Preservation**: FP8 quantization maintains high visual quality
+* **Lower VRAM Requirements**: Enables running on GPUs with less memory (e.g., RTX 4090 24GB)
+* **Faster Inference**: Optimized quantized kernels accelerate computation
+* **Quality Preservation**: FP8 quantization maintains high visual quality
 
 ## 🔗 Related Resources
 
-- [LightX2V GitHub Repository](https://github.com/ModelTC/LightX2V)
-- [LightX2V Documentation](https://lightx2v-en.readthedocs.io/en/latest/)
-- [HunyuanVideo-1.5 Original Model](https://huggingface.co/tencent/HunyuanVideo-1.5)
-- [LightX2V Examples](https://github.com/ModelTC/LightX2V/tree/main/examples)
+* [LightX2V GitHub Repository](https://github.com/ModelTC/LightX2V)
+* [LightX2V Documentation](https://lightx2v-en.readthedocs.io/en/latest/)
+* [HunyuanVideo-1.5 Original Model](https://huggingface.co/tencent/HunyuanVideo-1.5)
+* [LightX2V Examples](https://github.com/ModelTC/LightX2V/tree/main/examples)
 
 ## 📝 Notes
 
-- **Important**: All advanced configurations (including `enable_quantize()`) must be called **before** `create_generator()`, otherwise they will not take effect.
-- The original HunyuanVideo-1.5 model weights are still required. These quantized models are used in conjunction with the original model structure.
-- For best performance, we recommend using SageAttention 2 (`sage_attn2`) as the attention mode.
+* **Important**: All advanced configurations (including `enable_quantize()`) must be called **before** `create_generator()`, otherwise they will not take effect.
+* The original HunyuanVideo-1.5 model weights are still required. These quantized models are used in conjunction with the original model structure.
+* For best performance, we recommend using SageAttention 2 (`sage_attn2`) as the attention mode.
 
 ## 🤝 Citation
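The ordering constraint in the first note of this hunk is easy to trip over. The stub below mimics the documented behavior; only the method names `enable_quantize()`, `create_generator()`, and `generate()` come from this README, and the class body is illustrative rather than LightX2V's actual implementation.

```python
import warnings

class PipelineStub:
    """Toy stand-in that mimics the documented ordering constraint."""

    def __init__(self) -> None:
        self.quantized = False
        self.generator_built = False

    def enable_quantize(self) -> None:
        if self.generator_built:
            # Mirrors the README: configuration after create_generator()
            # silently fails to take effect.
            warnings.warn("enable_quantize() called after create_generator(); ignored")
            return
        self.quantized = True

    def create_generator(self) -> None:
        self.generator_built = True

    def generate(self) -> str:
        return "quantized" if self.quantized else "full-precision"

pipe = PipelineStub()
pipe.enable_quantize()    # correct: configure before building the generator
pipe.create_generator()
print(pipe.generate())    # -> "quantized"
```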
 
@@ -254,4 +263,4 @@ If you use these quantized models in your research, please cite:
 
 ## 📄 License
 
-This model is released under the Apache 2.0 License, same as the original HunyuanVideo-1.5 model.
+This model is released under the Apache 2.0 License, same as the original HunyuanVideo-1.5 model.
 