---
license: other
license_name: flux-1-dev-non-commercial-license
tags:
- image-to-image
- SVDQuant
- INT4
- FLUX.1
- Diffusion
- Quantization
- inpainting
- image-generation
- text-to-image
- ICLR2025
- FLUX.1-Fill-dev
language:
- en
base_model:
- black-forest-labs/FLUX.1-Fill-dev
base_model_relation: quantized
pipeline_tag: image-to-image
datasets:
- mit-han-lab/svdquant-datasets
library_name: diffusers
---

<p align="center" style="border-radius: 10px">
  <img src="https://github.com/mit-han-lab/nunchaku/raw/refs/heads/main/assets/logo.svg" width="50%" alt="logo"/>
</p>
<h4 style="display: flex; justify-content: center; align-items: center; text-align: center;">Quantization Library: <a href='https://github.com/mit-han-lab/deepcompressor'>DeepCompressor</a>  Inference Engine: <a href='https://github.com/mit-han-lab/nunchaku'>Nunchaku</a>
</h4>


<div style="display: flex; justify-content: center; align-items: center; text-align: center;">
  <a href="https://arxiv.org/abs/2411.05007">[Paper]</a>
  <a href='https://github.com/mit-han-lab/nunchaku'>[Code]</a>
  <a href='https://svdquant.mit.edu'>[Demo]</a>
  <a href='https://hanlab.mit.edu/projects/svdquant'>[Website]</a>
  <a href='https://hanlab.mit.edu/blog/svdquant'>[Blog]</a>
</div>

![]()

`svdq-int4-flux.1-fill-dev` is an INT4-quantized version of [`FLUX.1-Fill-dev`](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev), which can fill areas in existing images based on a text description. It offers approximately 4× memory savings while also running 2–3× faster than the original BF16 model.

## Method
#### Quantization Method -- SVDQuant

![]()
Overview of SVDQuant. Stage 1: Originally, both the activation ***X*** and the weights ***W*** contain outliers, making 4-bit quantization challenging. Stage 2: We migrate the outliers from the activations to the weights, resulting in updated activations and weights. While the activations become easier to quantize, the weights now become more difficult. Stage 3: SVDQuant further decomposes the weights into a low-rank component and a residual with SVD. Thus, the quantization difficulty is alleviated by the low-rank branch, which runs at 16-bit precision.

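To make Stage 3 concrete, the sketch below is a minimal, illustrative decomposition in plain PyTorch, not the DeepCompressor implementation: it splits a weight matrix into a rank-32 low-rank branch kept in 16-bit and a residual that is fake-quantized to 4-bit with a toy per-tensor scheme (the Stage 2 outlier migration is omitted).

```python
import torch

def svdquant_sketch(W: torch.Tensor, rank: int = 32, n_bits: int = 4):
    """Toy illustration of W ≈ L1 @ L2 (16-bit low-rank) + R (4-bit residual)."""
    # Stage 3: peel off a low-rank component with SVD
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    L1 = U[:, :rank] * S[:rank]   # (out_features, rank), stays in 16-bit
    L2 = Vh[:rank, :]             # (rank, in_features), stays in 16-bit
    R = W - L1 @ L2               # residual, now easier to quantize

    # Toy symmetric per-tensor 4-bit quantization of the residual
    qmax = 2 ** (n_bits - 1) - 1
    scale = R.abs().max() / qmax
    R_q = torch.clamp((R / scale).round(), -qmax - 1, qmax)
    return L1, L2, R_q, scale

def linear_sketch(x: torch.Tensor, L1, L2, R_q, scale) -> torch.Tensor:
    # y = x @ W.T ≈ 16-bit low-rank branch + dequantized 4-bit branch
    return x @ (L1 @ L2).T + x @ (R_q * scale).T
```

In the actual pipeline, the residual is quantized per group and executed with dedicated 4-bit kernels; the sketch only shows how the low-rank branch absorbs most of the quantization difficulty.
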
#### Nunchaku Engine Design

![]() (a) Naïvely running the low-rank branch with rank 32 introduces a 57% latency overhead, due to the extra read of 16-bit inputs in *Down Projection* and the extra write of 16-bit outputs in *Up Projection*. Nunchaku optimizes this overhead with kernel fusion. (b) The *Down Projection* and *Quantize* kernels use the same input, while the *Up Projection* and *4-Bit Compute* kernels share the same output. To reduce data movement overhead, we fuse the first two and the latter two kernels together.

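To see why the fusion in (b) matters, the sketch below writes the low-rank branch as four separate steps in plain PyTorch; the fake-quantizer helper is a toy stand-in, not a Nunchaku kernel. In this unfused form, the 16-bit input is read twice and the 16-bit output is written twice, which is exactly the memory traffic the fused kernels avoid.

```python
import torch

def fake_quantize_int4(t: torch.Tensor):
    # Toy symmetric per-tensor 4-bit fake quantization (illustrative only)
    scale = t.abs().max() / 7
    return torch.clamp((t / scale).round(), -8, 7), scale

def quantized_linear_unfused(x, L_down, L_up, W_q, w_scale):
    h = x @ L_down                           # Down Projection: reads 16-bit x
    x_q, x_scale = fake_quantize_int4(x)     # Quantize: reads the same 16-bit x again
    y = (x_q @ W_q.T) * (x_scale * w_scale)  # 4-Bit Compute: writes 16-bit y
    y = y + h @ L_up                         # Up Projection: reads and rewrites the same y
    return y
# Nunchaku fuses {Down Projection, Quantize} and {4-Bit Compute, Up Projection},
# so the activations make a single trip through memory per pair.
```
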
## Model Description

- **Developed by:** MIT, NVIDIA, CMU, Princeton, UC Berkeley, SJTU and Pika Labs
- **Model type:** INT W4A4 model
- **Model size:** 6.64GB
- **Model resolution:** The number of pixels needs to be a multiple of 65,536 (see the quick check after this list).
- **License:** FLUX.1-dev Non-Commercial License (`flux-1-dev-non-commercial-license`)

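Since the pixel count must be a multiple of 65,536 (for example, 1024 × 1024 = 16 × 65,536), a quick sanity check on the requested size can catch invalid resolutions before a run. A minimal sketch; the helper name is ours, not part of the library:

```python
def check_resolution(height: int, width: int) -> None:
    # The total number of pixels must be a multiple of 65,536.
    if (height * width) % 65536 != 0:
        raise ValueError(
            f"{height}x{width} = {height * width} pixels is not a multiple of 65,536"
        )

check_resolution(1024, 1024)  # OK: 1,048,576 pixels = 16 * 65,536
```
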
## Usage

### Diffusers

Please follow the instructions in [mit-han-lab/nunchaku](https://github.com/mit-han-lab/nunchaku) to set up the environment. Also, install some ControlNet dependencies:

```shell
pip install git+https://github.com/asomoza/image_gen_aux.git
pip install controlnet_aux mediapipe
```

Then you can run the model with

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

from nunchaku.models.transformer_flux import NunchakuFluxTransformer2dModel

image = load_image("https://huggingface.co/mit-han-lab/svdq-int4-flux.1-fill-dev/resolve/main/example.png")
mask = load_image("https://huggingface.co/mit-han-lab/svdq-int4-flux.1-fill-dev/resolve/main/mask.png")

transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-fill-dev")
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
    prompt="A wooden basket of several individual cartons of blueberries.",
    image=image,
    mask_image=mask,
    height=1024,
    width=1024,
    guidance_scale=30,
    num_inference_steps=50,
    max_sequence_length=512,
).images[0]
image.save("flux.1-fill-dev.png")
```

### ComfyUI

Work in progress. Stay tuned!

## Limitations

- The model is only runnable on NVIDIA GPUs with architectures sm_86 (Ampere: RTX 3090, A6000), sm_89 (Ada: RTX 4090), and sm_80 (A100). See this [issue](https://github.com/mit-han-lab/nunchaku/issues/1) for more details. A quick capability check is sketched after this list.
- You may observe some slight differences from the BF16 model in fine details.

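To confirm that your GPU matches one of the supported architectures before loading the model, you can query its compute capability with PyTorch. A minimal sketch; the supported set simply mirrors the list above:

```python
import torch

# Supported compute capabilities: sm_80 (A100), sm_86 (RTX 3090 / A6000), sm_89 (RTX 4090)
SUPPORTED = {(8, 0), (8, 6), (8, 9)}

major, minor = torch.cuda.get_device_capability()
if (major, minor) not in SUPPORTED:
    print(f"Warning: sm_{major}{minor} is not in the supported list for this INT4 model.")
```
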
### Citation

If you find this model useful or relevant to your research, please cite

```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```