---
license: apache-2.0
base_model:
- Qwen/Qwen-Image-Edit
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
datasets:
- OPPOer/X2Edit-Dataset
---
<div align="center">
<h1>Qwen-Image-Edit-Pruning</h1>
<a href='https://arxiv.org/abs/2511.16156'><img src='https://img.shields.io/badge/arXiv-2511.16156-b31b1b.svg'></a> &nbsp;
<a href='https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning'><img src="https://img.shields.io/badge/GitHub-OPPOer-blue.svg?logo=github" alt="GitHub"></a>
</div>

## Update
- 2025/10/09: We release **[Qwen-Image-Edit-2509-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/29: We release **[Qwen-Image-Edit-2509-Pruning-14B](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/28: We release **[Qwen-Image-Edit-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-Pruning)**

## Introduction
This open-source project applies layer pruning to Qwen-Image-Edit: 20 transformer layers are removed and the weights of the remaining 40 layers are retained, reducing the model to 13.6B parameters. The pruned versions will continue to be iterated on; stay tuned.

<div align="center">
<img src="bench-2509.png">
</div>
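The arithmetic behind the pruning can be sketched with a toy stand-in: keep a subset of a model's blocks (their weights untouched) and the parameter count shrinks in proportion to the layers dropped. The 60-layer block list below is only an illustration of the 20-of-60 layer removal described above, not the actual Qwen-Image-Edit DiT, and which layers are dropped here is an arbitrary choice for the example.

```python
import torch
from torch import nn

# Toy stand-in for a DiT: 60 identical blocks (the real transformer's
# blocks are far larger; only the counting matters here).
hidden = 64
blocks = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(60)])

def params(mods):
    return sum(p.numel() for m in mods for p in m.parameters())

full = params(blocks)

# Drop 20 blocks (here, an arbitrary contiguous middle span) and keep
# the remaining 40 with their original weights.
pruned = nn.ModuleList(list(blocks[:20]) + list(blocks[40:]))

assert len(pruned) == 40
# Parameters shrink by exactly the share of removed layers: 40/60.
assert params(pruned) * 3 == full * 2
```

Because the surviving layers keep their pretrained weights, the pruned model can be fine-tuned (or distilled) rather than trained from scratch.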

## Quick Start

Install the latest versions of PyTorch and diffusers:
```shell
pip install torch
pip install git+https://github.com/huggingface/diffusers
```

### Qwen-Image-Edit-2509-14B Inference
```python
import os

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

# Load the pruned editing pipeline in bfloat16.
model_name = "OPPOer/Qwen-Image-Edit-2509-Pruning"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to("cuda")
pipeline.set_progress_bar_config(disable=None)

# Two reference images: the scene and the subject to place into it.
image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"
inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 40,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
```

### Qwen-Image-Edit-2509-13B-4steps Inference
```python
import os

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

# Load the 4-step distilled 13B variant in bfloat16.
model_name = "OPPOer/Qwen-Image-Edit-2509-Pruning/Qwen-Image-Edit-2509-13B-4steps"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to("cuda")
pipeline.set_progress_bar_config(disable=None)

image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"
inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    # The distilled model runs in 4 steps without classifier-free guidance.
    "true_cfg_scale": 1.0,
    "negative_prompt": " ",
    "num_inference_steps": 4,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
```
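The two recipes above differ sharply in compute. With `true_cfg_scale > 1`, each denoising step evaluates the transformer twice (conditional and unconditional branch); the 4-step model with `true_cfg_scale = 1.0` evaluates it once per step. A back-of-the-envelope count of transformer forward passes (an illustrative estimate, not a benchmark; whether the two CFG branches run batched or sequentially is an implementation detail):

```python
def transformer_forwards(num_inference_steps: int, true_cfg_scale: float) -> int:
    """Rough count of transformer forward passes for one generation.

    With true classifier-free guidance enabled (true_cfg_scale > 1),
    each step runs both the conditional and the unconditional branch.
    """
    passes_per_step = 2 if true_cfg_scale > 1.0 else 1
    return num_inference_steps * passes_per_step

base = transformer_forwards(40, 4.0)  # 14B recipe: 40 steps with CFG
fast = transformer_forwards(4, 1.0)   # 4-step recipe: no CFG
print(base, fast, base // fast)       # 80 4 20
```

So the 4-step variant needs roughly 20x fewer transformer evaluations per image, before accounting for its smaller layer count.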

## Citation

🌟 If you find our work helpful, please consider citing our paper and leaving a star.

```bibtex
@misc{ma2025pluggablepruningcontiguouslayer,
      title={Pluggable Pruning with Contiguous Layer Distillation for Diffusion Transformers},
      author={Jian Ma and Qirong Peng and Xujie Zhu and Peixing Xie and Chen Chen and Haonan Lu},
      year={2025},
      eprint={2511.16156},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.16156},
}
```