drbaph committed · Commit c819954 · verified · 1 Parent(s): 67f1cbf

Update README.md

Files changed (1)
  1. README.md +17 -15
README.md CHANGED
@@ -1,24 +1,26 @@
- ---
- license: apache-2.0
- language:
- - en
- pipeline_tag: text-to-image
- library_name: diffusers
- tags:
- - comfyui
- - comfy
- - z-img
- - fp8
- - quantized
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-to-image
+ library_name: diffusers
+ tags:
+ - comfyui
+ - comfy
+ - z-img
+ - fp8
+ - quantized
+ ---

  # 🔢 FP8 Quantized Version - ComfyUI Compatible

  This is the **fp8_e4m3fn** and **fp8_e5m2** quantized version of the Z-Image model, optimized for ComfyUI workflows. These quantized formats significantly reduce VRAM requirements while maintaining high image quality, making the model more accessible for consumer-grade GPUs.

  **Quantization Formats:**
- - `fp8_e4m3fn`: 4-bit exponent, 3-bit mantissa format
- - `fp8_e5m2`: 5-bit exponent, 2-bit mantissa format
+ - `fp8-e4m3fn-scaled`
+ - `fp8-e4m3fn`
+ - `fp8_e5m2-scaled`
+ - `fp8_e5m2`

  **Benefits:**
  - Reduced memory footprint (~50% VRAM savings)
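
For background on the two base formats named above: `fp8_e4m3fn` uses 4 exponent bits and 3 mantissa bits (finer precision, smaller range), while `fp8_e5m2` uses 5 exponent bits and 2 mantissa bits (wider range, coarser precision). The snippet below is a minimal illustration with stock PyTorch (fp8 dtypes require PyTorch ≥ 2.1), not code from this repository:

```python
import torch

# Both fp8 formats keep 1 sign bit and split the remaining 7 bits differently:
#   float8_e4m3fn: 4 exponent bits, 3 mantissa bits -> max ~448, finer steps
#   float8_e5m2:   5 exponent bits, 2 mantissa bits -> max ~57344, coarser steps
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    info = torch.finfo(dtype)
    print(dtype, "max:", info.max, "smallest normal:", info.tiny)

# The coarser mantissa of e5m2 shows up as larger rounding error:
x = torch.tensor([3.1415926])
print(x.to(torch.float8_e4m3fn).float())  # tensor([3.2500])
print(x.to(torch.float8_e5m2).float())    # tensor([3.])
```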
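
As a rough, hypothetical sketch of how such a checkpoint can be produced (the filenames are placeholders, not files in this repo, and the `-scaled` variants additionally store per-tensor scale factors that a plain cast does not generate):

```python
# Hedged sketch, not the script used to build this repository.
import torch
from safetensors.torch import load_file, save_file

def cast_checkpoint_to_fp8(src: str, dst: str, dtype=torch.float8_e4m3fn):
    state = load_file(src)  # dict[str, torch.Tensor]
    out = {}
    for name, tensor in state.items():
        # Cast only floating-point weights; leave integer buffers untouched.
        out[name] = tensor.to(dtype) if tensor.is_floating_point() else tensor
    save_file(out, dst)

# Placeholder filenames for illustration only.
cast_checkpoint_to_fp8("z_image_fp16.safetensors", "z_image_fp8_e4m3fn.safetensors")
```

In ComfyUI the resulting `.safetensors` file is typically placed under `models/diffusion_models/` (or `models/unet/`) and loaded with the diffusion-model loader's `weight_dtype` set to the matching fp8 option. The ~50% VRAM figure in the README follows from storing 1 byte per weight instead of the 2 bytes used by fp16/bf16.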