---
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
---

The FLUX.1 Kontext [dev] model with the transformer and T5 text encoder quantized to **NF4** (4-bit) using bitsandbytes.


# Usage
```shell
pip install bitsandbytes
```

```python
from diffusers import FluxKontextPipeline
import torch

pipeline = FluxKontextPipeline.from_pretrained(
    "eramth/flux-kontext-4bit", torch_dtype=torch.float16
).to("cuda")

# Tiled VAE decoding lets you generate higher-resolution images
# without much extra VRAM usage.
pipeline.vae.enable_tiling()
```
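Once the pipeline is loaded, inference works like any other Kontext pipeline: pass an input image plus an edit prompt. A minimal sketch, assuming the pipeline from above; the image URL and prompt are placeholders, and `guidance_scale=2.5` is the value commonly suggested for FLUX.1 Kontext [dev]:

```python
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
import torch

pipeline = FluxKontextPipeline.from_pretrained(
    "eramth/flux-kontext-4bit", torch_dtype=torch.float16
).to("cuda")
pipeline.vae.enable_tiling()

# Placeholder input image; replace with your own image path or URL.
image = load_image("https://example.com/input.png")

result = pipeline(
    image=image,
    prompt="Make the sky look like a sunset",  # placeholder edit prompt
    guidance_scale=2.5,
).images[0]
result.save("output.png")
```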

# Creating this quantized model yourself
```python
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers import FluxKontextPipeline, FluxTransformer2DModel
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
from transformers import T5EncoderModel
import torch

token = ""    # your Hugging Face access token
repo_id = ""  # the repository to push the quantized pipeline to

# Quantize the T5 text encoder to NF4 (the transformers config applies here).
quant_config = TransformersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
)

text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
    token=token,
)

# Quantize the Flux transformer to NF4 (the diffusers config applies here).
quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
)

transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
    token=token,
)

# Assemble the full pipeline around the quantized components and upload it.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_2_4bit,
    torch_dtype=torch.float16,
    token=token,
)

pipe.push_to_hub(repo_id, token=token)
```