---

license: apache-2.0
language:
- en
- zh
base_model: Qwen/Qwen-Image-Layered
base_model_relation: quantized
pipeline_tag: image-text-to-image
library_name: diffusers
---

# Qwen-Image-Layered (FP8 E5M2 & E4M3FN)

This is a quantization of [Qwen/Qwen-Image-Layered](https://huggingface.co/Qwen/Qwen-Image-Layered) to **FP8 E5M2** and **FP8 E4M3FN**.

Sensitive layers (norms, embeddings, biases) were kept in BF16.

**License & Usage:**
This model strictly follows the original licensing terms and usage restrictions. Please refer to the [original model card](https://huggingface.co/Qwen/Qwen-Image-Layered) for details.