---
license: other
license_name: quantfunc-model-license
tags:
  - image-editing
  - image-to-image
  - diffusion
  - quantized
  - quantfunc
language:
  - en
---

# QuantFunc

<div align="center" style="margin-top: 50px;">
  <img src="assets/logo.webp" width="300" alt="Logo">
</div>

# Qwen-Image-Edit-Series

Pre-quantized **Qwen-Image-Edit-2511** image-editing model series by [QuantFunc](https://github.com/user/quantfunc), with inference support for both the Lighting and SVDQ backends.

## Overview

Qwen-Image-Edit-2511 is an image-editing diffusion model distilled from the Alibaba Qwen team's original image-editing model. It edits input images according to text instructions and supports multiple reference-image inputs.

With the latest QuantFunc ComfyUI plugin, inference achieves a **2x–11x speedup** over mainstream frameworks; for example, 1K-resolution Qwen-Image-Edit inference in ComfyUI drops from 9.6s to 1.6s (tested on an RTX 4090).

## Hardware Requirements

- Supports NVIDIA RTX 30 series and above
- RTX 20 series GPUs lack BF16 support, which causes significant precision loss when quantizing Qwen-series models; the 20 series therefore currently supports only Z-Image models.

## Compatibility

- The base models in this repository are compatible with **any version** of Qwen-Image-Edit transformer weights
- The QuantFunc code plugin and ComfyUI plugin are **100% compatible** with previous versions of Qwen-Image-Edit models

## Directory Structure

```
Qwen-Image-Edit-Series/
├── qwen-image-edit-series-50x-above-base-model/    # Base model, optimized for RTX 50 series and above
│   ├── text_encoder/          # Qwen2.5-VL text encoder (pre-quantized)
│   ├── vision_encoder/        # Qwen2.5-VL vision encoder (pre-quantized)
│   ├── vae/                   # VAE encoder + decoder (~242MB)
│   ├── tokenizer/             # Tokenizer
│   ├── processor/             # Image preprocessor
│   ├── scheduler/             # Scheduler config
│   ├── model_index.json
│   └── quantfunc_config.json
├── qwen-image-edit-series-50x-below-base-model/    # Base model, optimized for RTX 50 series and below
│   └── (same structure as above)
├── transformer/
│   ├── config.json
│   ├── qwen-image-2511-50x-above-lighting-4steps.safetensors           # RTX 50+ Lighting 4-step
│   ├── qwen-image-2511-50x-above-lighting-4steps-prequant.safetensors  # RTX 50+ Lighting pre-quantized
│   ├── qwen-image-2511-50x-above-svdq-4steps.safetensors               # RTX 50+ SVDQ 4-step
│   ├── qwen-image-2511-50x-above-svdq.safetensors                      # RTX 50+ SVDQ full-step
│   ├── qwen-image-2511-50x-below-lighting-4steps.safetensors           # RTX 30/40 Lighting 4-step
│   └── qwen-image-2511-50x-below-lighting-4steps-prequant.safetensors  # RTX 30/40 Lighting pre-quantized
├── prequant/                                                # Pre-quantized modulation weights
│   ├── qwen-image-edit-2511-50x-above.safetensors           # RTX 50+ mod weights
│   ├── qwen-image-edit-2511-50x-below.safetensors           # RTX 30/40 mod weights
│   └── qwen-image-edit-2509-50x-above.safetensors           # Legacy 2509 mod weights
└── precision-config/                                        # Lighting precision config samples
    ├── 50x-above-fp4-sample.json                            # FP4 config for RTX 50+
    └── 50x-below-int4-sample.json                           # INT4 config for RTX 30/40
```

## Model Variants

### By GPU Generation

| Variant | Target GPU | Description |
|---------|-----------|-------------|
| **50x-above** | RTX 50 series and above | Optimized for Blackwell architecture |
| **50x-below** | RTX 30/40 series | Broadly compatible |

### By Inference Backend

| Backend | File Suffix | Features |
|---------|------------|----------|
| **Lighting 4-step** | `*-lighting-4steps.safetensors` | Fastest inference with fused operators |
| **SVDQ 4-step** | `*-svdq-4steps.safetensors` | 4-step distilled + SVDQ quantization, runtime LoRA support |
| **SVDQ full-step** | `*-svdq.safetensors` | Default step count inference, runtime LoRA support |

> The base-model and transformer must use the **same variant** (both above or both below).
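Because the variant name appears in both the base-model directory and the transformer filename, a mismatch can be caught before launching a run. The helper below is an illustrative sketch based on the naming scheme above, not part of the QuantFunc CLI; the `variants_match` name and logic are assumptions.

```python
def variants_match(model_dir: str, transformer_path: str) -> bool:
    """Return True if both paths agree on the 50x-above / 50x-below variant."""
    for variant in ("50x-above", "50x-below"):
        if variant in model_dir:
            return variant in transformer_path
    return False  # no recognized variant in the base-model path

# Example: a 50x-above base model must pair with a 50x-above transformer.
print(variants_match(
    "Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model",
    "Qwen-Image-Edit-Series/transformer/qwen-image-2511-50x-above-svdq.safetensors",
))  # True
```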

## Quick Start

### Download

```bash
pip install modelscope
```

```python
from modelscope import snapshot_download
model_dir = snapshot_download('QuantFunc/Qwen-Image-Edit-Series')
```
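The downloaded checkout follows the layout shown in the Directory Structure section above, so the paths passed to `quantfunc` can be built directly from the returned directory. A minimal sketch, where the placeholder string stands in for `snapshot_download`'s return value:

```python
import os

# Stand-in for the directory returned by snapshot_download().
model_dir = "Qwen-Image-Edit-Series"

# Paths used by the inference commands below.
base_model = os.path.join(model_dir, "qwen-image-edit-series-50x-above-base-model")
transformer = os.path.join(
    model_dir, "transformer",
    "qwen-image-2511-50x-above-lighting-4steps.safetensors",
)

# Warn early if the download is incomplete.
for path in (base_model, transformer):
    if not os.path.exists(path):
        print(f"missing: {path}")
```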

### Lighting Backend Inference

```bash
quantfunc \
  --model-dir Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model \
  --transformer Qwen-Image-Edit-Series/transformer/qwen-image-2511-50x-above-lighting-4steps.safetensors \
  --auto-optimize --model-backend lighting \
  --ref-image input.png \
  --prompt "make the sky more purple and add stars" \
  --output output.png --steps 4
```

### SVDQ Backend Inference

```bash
quantfunc \
  --model-dir Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model \
  --transformer Qwen-Image-Edit-Series/transformer/qwen-image-2511-50x-above-svdq-4steps.safetensors \
  --auto-optimize --model-backend svdq \
  --ref-image input.png \
  --prompt "change the background to a beach scene" \
  --output output.png --steps 4
```

### SVDQ + LoRA

```bash
quantfunc \
  --model-dir Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model \
  --transformer Qwen-Image-Edit-Series/transformer/qwen-image-2511-50x-above-svdq-4steps.safetensors \
  --auto-optimize --model-backend svdq \
  --lora /path/to/style_lora.safetensors:0.8 \
  --ref-image input.png \
  --prompt "apply anime style to the image" \
  --output output.png --steps 4
```

## SVDQ & Lighting Backends

This repository provides both **Lighting** and **SVDQ** backend pre-quantized models:

| Feature | Lighting | SVDQ |
|---------|----------|------|
| **Quantization** | Per-layer mixed precision (FP4/INT4/FP8/INT8) | Nunchaku-based holistic pre-quantization + Rotation quantization |
| **LoRA Integration** | Real-time quantization: build a custom model in 5 minutes with zero speed loss, integrating any number of LoRAs | Runtime low-rank pathway |
| **Ecosystem** | QuantFunc native | Compatible with the widely-adopted Nunchaku ecosystem, enhanced with Rotation quantization and Auto Rank dynamic rank optimization |
| **Flexibility** | Per-layer precision control | Precision fixed at export time |
| **Use Cases** | Rapid personal model customization, batch LoRA integration | Leverage Nunchaku ecosystem, runtime dynamic LoRA |

## Pre-quantized Modulation Weights (prequant/)

The `prequant/` directory contains **pre-quantized modulation (mod) weights** extracted from SVDQ models. These are used with the Lighting backend to provide high-quality modulation without runtime quantization overhead.

**Usage with Lighting backend:**

```bash
quantfunc \
  --model-dir Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model \
  --model-backend lighting \
  --precision-config Qwen-Image-Edit-Series/precision-config/50x-above-fp4-sample.json \
  --mod-weights Qwen-Image-Edit-Series/prequant/qwen-image-edit-2511-50x-above.safetensors \
  --rotation-block-size 256 \
  --ref-image input.png --prompt "edit instruction" \
  --steps 4 --auto-optimize
```

Alternatively, use the **pre-quantized Lighting transformer** for instant loading (no runtime quantization):

```bash
quantfunc \
  --model-dir Qwen-Image-Edit-Series/qwen-image-edit-series-50x-above-base-model \
  --transformer Qwen-Image-Edit-Series/transformer/qwen-image-2511-50x-above-lighting-4steps-prequant.safetensors \
  --model-backend lighting \
  --ref-image input.png --prompt "edit instruction" \
  --steps 4 --auto-optimize
```

## Precision Config (precision-config/)

Sample per-layer precision configurations for the Lighting backend:

| File | Target GPU | Precision |
|------|-----------|-----------|
| `50x-above-fp4-sample.json` | RTX 50+ | FP4 attention + AF8WF4 MLP fc2 + INT8 modulation |
| `50x-below-int4-sample.json` | RTX 30/40 | INT4 all layers + INT8 modulation |

These configs control the quantization precision of each transformer sub-layer. Customize them for your speed/quality trade-off.
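Since a precision config is plain JSON, it can be loaded, inspected, and saved as an edited copy before being passed via `--precision-config`. The sketch below assumes nothing about the schema beyond what is stated above (entries controlling per-sub-layer precision); `load_precision_config` is an illustrative helper, not part of QuantFunc.

```python
import json

def load_precision_config(path: str) -> dict:
    """Load a Lighting precision-config JSON; return {} if the file is absent."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

config = load_precision_config(
    "Qwen-Image-Edit-Series/precision-config/50x-above-fp4-sample.json"
)
# Inspect the per-sub-layer precision entries before editing a copy of the
# sample (the exact key names are defined by the shipped sample files).
for key, value in config.items():
    print(key, "->", value)
```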

## Related Repositories

- [QuantFunc/Qwen-Image-Series](https://modelscope.cn/models/QuantFunc/Qwen-Image-Series) – Qwen-Image text-to-image (60 layers)
- [QuantFunc/Z-Image-Series](https://modelscope.cn/models/QuantFunc/Z-Image-Series) – Z-Image-Turbo text-to-image (lightweight, fast)

## License

The pre-quantized model weights in this repository are derived from the original models. Users must comply with the original model's license agreement. The QuantFunc inference engine and its plugins (including the ComfyUI plugin) are licensed separately; see official QuantFunc channels for details.

For models quantized from commercially licensed models, users are responsible for obtaining the necessary commercial licenses from the original model providers.