---
license: cc-by-nc-sa-4.0
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
quantized_by: Abhishek Dujari
base_model:
- Qwen/Qwen-Image-Edit-2511
base_model_relation: quantized
---

This is an NF4-quantized version of Qwen-Image-Edit-2511, so it can run on GPUs with less than 20 GB of VRAM; it also works on lower-VRAM cards such as 16 GB.
Other NF4 quantizations made the mistake of blindly quantizing every layer in the transformer.
This one does not: we retain selected layers at full precision to preserve output quality.

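The selective-quantization idea above can be sketched in a few lines of plain Python. This is an illustration only, not the actual bitsandbytes kernels: the 16 code values are the published NF4 levels, but the layer names and the keep-list are made up for the example, and real kernels pack codes into 4-bit storage rather than Python lists.

```python
# Illustrative sketch of selective NF4 quantization (not the bitsandbytes
# kernels). The 16 levels are the published NF4 code values; layer names
# and the keep-list below are hypothetical.
NF4_LEVELS = [
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
]

def nf4_roundtrip(weights):
    """Absmax-scale a block, snap each weight to the nearest NF4 level,
    and dequantize back -- the lossy step real NF4 storage performs."""
    absmax = max(abs(w) for w in weights) or 1.0
    codes = [min(range(16), key=lambda i: abs(w / absmax - NF4_LEVELS[i]))
             for w in weights]
    return [NF4_LEVELS[c] * absmax for c in codes]

def quantize_selectively(layers, keep_full_precision):
    """Quantize every layer except those named in the keep-list."""
    out = {}
    for name, weights in layers.items():
        if name in keep_full_precision:
            out[name] = weights  # retained at full precision
        else:
            out[name] = nf4_roundtrip(weights)
    return out

layers = {"attn.qkv": [0.12, -0.5, 0.33, 0.9], "norm.weight": [1.0, 0.98]}
result = quantize_selectively(layers, keep_full_precision={"norm.weight"})
```

Small, numerically sensitive layers (normalization weights are a common example) are the usual candidates for staying in full precision; that trade-off is what distinguishes this checkpoint from an all-layers quantization.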
You can use the original Qwen-Image-Edit inference parameters.

Model tested: working well even with only 10 steps.
Contact support@[JustLab.ai](https://justlab.ai) for commercial support, modifications, and licensing.

### Sample code

```python
import os

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

model_path = "ovedrive/Qwen-Image-Edit-2511-4bit"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
print("pipeline loaded")  # weights load lazily; do not move the pipeline to CUDA here

pipeline.set_progress_bar_config(disable=None)
pipeline.enable_model_cpu_offload()  # with ~20GB of VRAM you can use `pipeline.to("cuda")` instead

image = Image.open("./example.png").convert("RGB")
prompt = "Remove the lady head with white hair"
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 20,  # even 10 steps is enough in many cases
}

with torch.inference_mode():
    output = pipeline(**inputs)

output_image = output.images[0]
output_image.save("output_image_edit.png")
print("image saved at", os.path.abspath("output_image_edit.png"))
```
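For a rough sense of where the VRAM headroom comes from, here is back-of-the-envelope arithmetic on the transformer weights alone. The ~20B parameter count is the figure reported for the Qwen-Image family and is an assumption here; actual usage adds activations, the text encoder, and the VAE on top.

```python
# Back-of-the-envelope weight-memory estimate.
# Assumption: ~20e9 transformer parameters (reported Qwen-Image size).
params = 20e9
nf4_gb = params * 0.5 / 1e9   # NF4: 4 bits = 0.5 bytes per weight
bf16_gb = params * 2.0 / 1e9  # BF16: 16 bits = 2 bytes per weight
print(f"NF4 weights:  ~{nf4_gb:.0f} GB")   # ~10 GB
print(f"BF16 weights: ~{bf16_gb:.0f} GB")  # ~40 GB
```

The quantized weights fit in roughly a quarter of the full-precision footprint, which is why the pipeline fits under 20 GB and, with CPU offload, on 16 GB cards.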

The original license and attributions are reproduced below.

<p align="center">
    <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" width="400"/>
</p>