Improve model card for Lego-Edit: Add metadata, links, abstract, and usage

#3
by nielsr HF Staff - opened

Files changed (1)
  1. README.md +235 -3

README.md CHANGED
@@ -1,3 +1,235 @@
- ---
- license: apache-2.0
- ---
---
license: cc-by-nc-4.0
pipeline_tag: image-to-image
library_name: transformers
tags:
- vision-language
- image-editing
- multimodal
- qwen2_5
---

<p align="center">
  <img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/lego_pic.png" alt="Lego-Edit" width="240"/>
</p>

<p align="center">
  <a href="https://xiaomi-research.github.io/lego-edit/">
    <img
      src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"
      alt="Lego-Edit Website"
    />
  </a>
  <a href="https://huggingface.co/papers/2509.12883">
    <img
      src="https://img.shields.io/static/v1?label=Paper&message=HuggingFace&color=red"
      alt="Lego-Edit Paper on Hugging Face"
    />
  </a>
  <a href="https://github.com/xiaomi-research/lego-edit">
    <img
      src="https://img.shields.io/static/v1?label=Code&message=GitHub&color=blue"
      alt="Lego-Edit GitHub"
    />
  </a>
  <a href="https://editdemo.ai.xiaomi.net/">
    <img
      src="https://img.shields.io/badge/Demo-Live-orange"
      alt="Lego-Edit Demo"
    />
  </a>
</p>

# Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder

Instruction-based image editing has garnered significant attention due to its direct interaction with users. However, real-world user instructions are immensely diverse, and existing methods often fail to generalize to instructions outside their training domain, limiting their practical application. To address this, we propose Lego-Edit, which leverages the generalization capability of a Multimodal Large Language Model (MLLM) to organize a suite of model-level editing tools. Lego-Edit incorporates two key designs: (1) a model-level toolkit comprising diverse models efficiently trained on limited data, plus several image manipulation functions, enabling fine-grained composition of editing actions by the MLLM; and (2) a three-stage progressive reinforcement learning approach that uses feedback on unannotated, open-domain instructions to train the MLLM, equipping it with generalized reasoning capabilities for handling real-world instructions. Experiments demonstrate that Lego-Edit achieves state-of-the-art performance on GEdit-Bench and ImgBench. It exhibits robust reasoning on open-domain instructions and can use newly introduced editing tools without additional fine-tuning. The figure below showcases Lego-Edit's qualitative performance.

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/case_pic.png" width="95%"></p>

## ✨ Features

Lego-Edit supports local, global, and multi-step editing, with corresponding results shown above. Its feedback responsiveness and tool-extension capabilities are discussed in our paper.

Additionally, Lego-Edit accepts mask inputs for precise control of the editing region. Example applications are shown here:

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/maskcase1.png" width="95%"></p>

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/maskcase2.png" width="95%"></p>

Try the demo to discover more uses of this framework.

## πŸ“’ News

- **Sep 17, 2025:** We released the [demo](https://editdemo.ai.xiaomi.net/), [model](https://huggingface.co/xiaomi-research/lego-edit), and [report](https://arxiv.org/abs/2509.12883) for Lego-Edit.

## πŸ”₯ Quick Start

1️⃣ Set up the environment
```bash
conda create -n legoedit python=3.11
conda activate legoedit
pip install -r ./requirements.txt
```

Then install the flash-attention wheel matching your environment from the [flash-attention releases](https://github.com/Dao-AILab/flash-attention/releases).

Finally, in `~/yourconda/envs/legoedit/lib/python3.11/site-packages/transformers/modeling_utils.py`, line 5105, change `map_location="meta"` to `map_location="cpu"`.

2️⃣ Download the pretrained checkpoints and custom nodes

Custom Nodes:
```bash
cd custom_nodes
git clone https://github.com/chflame163/ComfyUI_LayerStyle.git
git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git
```

Base Model:

1. Download [FLUX.1-Fill-dev](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/flux1-fill-dev.safetensors) and [FLUX.1-Canny-dev](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev/blob/main/flux1-canny-dev.safetensors) and copy them to './models/unet/'.

2. Download the [vae](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/ae.safetensors) and copy it to './models/vae/'.

3. Download [clip_l](https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/clip_l.safetensors) and [t5xxl](https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp8_e4m3fn.safetensors) and copy them to './models/clip/'.

4. Download the [lama](https://drive.google.com/file/d/11RbsVSav3O-fReBsPHBE1nn8kcFIMnKp/view?usp=drive_link) archive, unzip it, and copy it to './lama'.

Our Model:

1. Download all the models (Builder, mimo_lora, CVSOS, CVRES, loras) from [lego-edit](https://huggingface.co/xiaomi-research/lego-edit/).

Your directory structure should match the following:
```bash
β”œβ”€β”€ README.md
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ legodemo.py
β”œβ”€β”€ Builder/
β”œβ”€β”€ mimo_lora/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ unet/
β”‚   β”œβ”€β”€ vae/
β”‚   β”œβ”€β”€ clip/
β”‚   └── loras/
β”œβ”€β”€ CVSOS/
β”œβ”€β”€ CVRES/
β”œβ”€β”€ lama/
β”‚   └── big-lama/
```
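Before launching the demo, it can help to verify that this layout is in place. Below is a minimal sketch assuming the directory names shown in the tree above; the `missing_dirs` helper is hypothetical, not part of the repository.

```python
from pathlib import Path

# Directories expected by the layout above (illustrative; adjust if your
# checkout differs).
REQUIRED_DIRS = [
    "Builder", "mimo_lora", "models/unet", "models/vae",
    "models/clip", "models/loras", "CVSOS", "CVRES", "lama/big-lama",
]

def missing_dirs(root: str) -> list[str]:
    """Return the required directories that are absent under root."""
    base = Path(root)
    return [d for d in REQUIRED_DIRS if not (base / d).is_dir()]
```

Once every checkpoint is downloaded, `missing_dirs(".")` run from the repo root should return an empty list.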

3️⃣ Use the Gradio WebUI to start playing with Lego-Edit!
```bash
python legodemo.py
```

## πŸ’Ό New Tools Integration

Lego-Edit supports the integration of new tools. Follow the steps below to add custom tools, and the Builder will be able to use them during image editing.

1️⃣ Add the custom tool in system_prompt.txt

In system_prompt.txt you will find many tools, such as FASTINPAINT, FLUX-FILL, and more. You can add new tools to perform your desired editing tasks; for example, you could define a new FLUX-SR tool to handle image super-resolution. Here we use FLUX-BRIGHT, a brightness-adjustment tool added after FLUX-POSE, as the worked example. In system_prompt.txt, you only need to add the tool's description, inputs, outputs, and constraints, as shown below:
```text
...
10.FLUX-POSE (Change the object's posture, expression, etc.)
Input: {Image[image], Str[prompt]}
Output: {Image[image]}
Constraint: The input prompt must provide a detailed description of the external characteristics of the modification target, such as gender, clothing, accessories, etc., and must not use any PREDICT model in advance.
11.FLUX-BRIGHT (Input image and ratio, adjust image brightness according to ratio)
Input: {Image[image], Float[ratio]}
Output: {Image[image]}
Constraint: The input ratio ranges from 0 to 1, where 0 is darkest, 0.5 leaves the image unchanged, and values above 0.5 brighten it.
**Actual example1:**
...
```

2️⃣ Add the tool function in legodemo.py

In the initialize_model_mapping function within legodemo.py, add the function name of your new tool.
```python
def initialize_model_mapping(self) -> Dict[str, Any]:
    return {
        "CMI-PRED": self.dummy_captionmask_pred,
        "RES": self.dummy_res,
        "MASK-SEG": self.dummy_mask_seg,
        "FASTINPAINT": self.dummy_fastinpaint,
        "FLUX-FILL": self.dummy_flux_fill,
        "FLUX-INPAINT": self.dummy_flux_inpaint,
        "INVERSE": self.dummy_inverse,
        "COMPOSE": self.dummy_compose,
        "RESIZE": self.dummy_resize,
        "BBOX": self.dummy_bbox,
        "SOS": self.dummy_sos,
        "FLUX-CBG": self.dummy_flux_cbg,
        "ADD-PRED": self.dummy_add_pred,
        "FLUX-STYLE": self.dummy_flux_style,
        "FLUX-RCM": self.dummy_flux_rcm,
        "FLUX-ENV": self.dummy_flux_env,
        "FLUX-POSE": self.dummy_flux_pose,
        "FLUX-BRIGHT": self.dummy_flux_bright
    }
```
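Once a tool is registered in this mapping, executing an edit amounts to looking up each step of the Builder's plan by tool name and threading outputs forward. A minimal dispatch sketch under assumed plan semantics; `run_plan` and the plan format are hypothetical, not the repo's actual executor.

```python
from typing import Any, Callable, Dict, List

def run_plan(
    plan: List[Dict[str, Any]],
    model_mapping: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]],
) -> Dict[str, Any]:
    """Execute tool calls in order, keeping a shared state dict.
    String inputs naming an earlier output are resolved from state;
    anything else is passed through as a literal. Illustrative only."""
    state: Dict[str, Any] = {}
    for step in plan:
        tool = model_mapping[step["tool"]]  # KeyError means an unregistered tool
        inputs = {
            k: state.get(v, v) if isinstance(v, str) else v
            for k, v in step["inputs"].items()
        }
        state.update(tool(inputs))
    return state
```

The point of this shape is that a newly registered tool needs no executor changes: the Builder only has to emit its name and inputs in a plan step.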

Complete your function (dummy_flux_bright) in legodemo.py.
```python
# Requires: import numpy as np; from PIL import Image, ImageEnhance
def dummy_flux_bright(self, inputs: Dict[str, DataObject]) -> Dict[str, DataObject]:
    image_ori = inputs['image'].copy()
    ratio = inputs['ratio']
    # Clamp the user-facing ratio to [0, 1], then map 0.5 -> factor 1.0 (unchanged)
    input_value = max(0.0, min(1.0, ratio))
    ratio = 2 * input_value
    image_pil = Image.fromarray(image_ori[:, :, ::-1])  # BGR -> RGB
    enhancer = ImageEnhance.Brightness(image_pil)
    image_new = enhancer.enhance(ratio)
    image_new = np.array(image_new)[:, :, ::-1]  # RGB -> BGR
    return {
        "image": image_new
    }
```
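The ratio convention (0 darkest, 0.5 unchanged) maps onto PIL's brightness factor by clamping to [0, 1] and doubling. A standalone sketch of the same logic, convenient for testing the mapping outside the class; `adjust_brightness` is a hypothetical free function, not repo code.

```python
import numpy as np
from PIL import Image, ImageEnhance

def adjust_brightness(image_bgr: np.ndarray, ratio: float) -> np.ndarray:
    """Standalone version of the brightness logic above:
    ratio in [0, 1], where 0.5 leaves the image unchanged."""
    factor = 2 * max(0.0, min(1.0, ratio))        # PIL factor: 1.0 = unchanged
    pil = Image.fromarray(image_bgr[:, :, ::-1])  # BGR -> RGB
    out = ImageEnhance.Brightness(pil).enhance(factor)
    return np.array(out)[:, :, ::-1]              # RGB -> BGR
```

For example, `ratio=0.5` yields factor 1.0 and returns the image unchanged, while `ratio=1.0` doubles every channel value (saturating at 255).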

3️⃣ Restart the Gradio WebUI

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/bright.png" width="95%"></p>

## πŸ“ More Usages

Some editing models are trained at a resolution of 768 via the ICEdit method, prioritizing higher output quality over the standard 512 resolution. We provide the corresponding trained [Single-Task LoRAs](https://huggingface.co/xiaomi-research/lego-edit/tree/main/loras). In our testing, these models deliver superior performance within their specific functional domains.

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/lora_effect.png" width="95%"></p>

You can refer to the usage instructions at [ICEdit](https://github.com/River-Zhang/ICEdit) to use these LoRAs independently.

## πŸ“„ Disclaimer

We open-source this project for academic research. The vast majority of images used in this project are either generated or licensed. If you have any concerns, please contact us, and we will promptly remove any inappropriate content. Our code is released under the Apache 2.0 License, while our models are under the CC BY-NC 4.0 License. Any models related to the <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev" target="_blank">FLUX.1-dev</a> base model must adhere to the original licensing terms.

This research aims to advance the field of generative AI. Users are free to create images using this tool, provided they comply with local laws and exercise responsible usage. The developers are not liable for any misuse of the tool by users.

## ✍️ Citation

If this repo is helpful, please help to ⭐ it.

If you find this project useful for your research, please consider citing our paper:

```bibtex
@article{jia2025legoedit,
  title   = {Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder},
  author  = {Qifei Jia and Yu Liu and Yajie Chai and Xintong Yao and Qiming Lu and Yasen Zhang and Runyu Shi and Ying Huang and Guoquan Zhang},
  journal = {arXiv preprint arXiv:2509.12883},
  year    = {2025},
  url     = {https://arxiv.org/abs/2509.12883}
}
```

## πŸ™ Acknowledgments

- Built on [MiMo-VL](https://github.com/XiaomiMiMo/MiMo-VL), [ComfyUI](https://github.com/comfyanonymous/ComfyUI), [FLUX](https://github.com/black-forest-labs/flux), [ICEdit](https://github.com/River-Zhang/ICEdit), [EVF-SAM](https://github.com/hustvl/EVF-SAM), and [LaMa](https://github.com/advimman/lama).