Improve model card for Lego-Edit: Add metadata, links, features, and quick start

#5 by nielsr (HF Staff)

Files changed (1): README.md (+233 -3). The previous card contained only `license: apache-2.0` frontmatter; the updated card follows.
---
license: cc-by-nc-4.0
pipeline_tag: image-to-image
library_name: transformers
tags:
- image-editing
- multimodal
- mllm
---

<p align="center">
  <img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/lego_pic.png" alt="Lego-Edit" width="240"/>
</p>

<p align="center">
  <a href="https://xiaomi-research.github.io/lego-edit/">
    <img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green" alt="Lego-Edit Website" />
  </a>
  <a href="https://huggingface.co/papers/2509.12883">
    <img src="https://img.shields.io/static/v1?label=Paper&message=HF%20Papers&color=red" alt="Lego-Edit Paper on HF Papers" />
  </a>
  <a href="https://github.com/xiaomi-research/lego-edit">
    <img src="https://img.shields.io/static/v1?label=Code&message=GitHub&color=blue" alt="Lego-Edit Code on GitHub" />
  </a>
  <a href="https://editdemo.ai.xiaomi.net/">
    <img src="https://img.shields.io/badge/Demo-Live-orange" alt="Lego-Edit Demo" />
  </a>
</p>

# Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder

This model was presented in the paper [Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder](https://huggingface.co/papers/2509.12883).

## Abstract

Instruction-based image editing has garnered significant attention due to its direct interaction with users. However, real-world user instructions are immensely diverse, and existing methods often fail to generalize effectively to instructions outside their training domain, limiting their practical application. To address this, we propose Lego-Edit, which leverages the generalization capability of Multi-modal Large Language Model (MLLM) to organize a suite of model-level editing tools to tackle this challenge. Lego-Edit incorporates two key designs: (1) a model-level toolkit comprising diverse models efficiently trained on limited data and several image manipulation functions, enabling fine-grained composition of editing actions by the MLLM; and (2) a three-stage progressive reinforcement learning approach that uses feedback on unannotated, open-domain instructions to train the MLLM, equipping it with generalized reasoning capabilities for handling real-world instructions. Experiments demonstrate that Lego-Edit achieves state-of-the-art performance on GEdit-Bench and ImgBench. It exhibits robust reasoning capabilities for open-domain instructions and can utilize newly introduced editing tools without additional fine-tuning.

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/case_pic.png" width="95%"></p>

## ✨ Features

Lego-Edit supports local, global, and multi-step editing, as demonstrated by the results shown above. Its feedback responsiveness and tool-extension capabilities are discussed in the paper.

Additionally, Lego-Edit accepts mask inputs for precise control over the editing region. Example applications:

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/maskcase1.png" width="95%"></p>

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/maskcase2.png" width="95%"></p>

Try it out to discover more uses of the framework.

## πŸ”₯ Quick Start

1️⃣ Set up environment
```bash
conda create -n legoedit python=3.11
conda activate legoedit
pip install -r ./requirements.txt
```

Install flash-attention (download the build matching your environment from https://github.com/Dao-AILab/flash-attention/releases).

Then modify `~/yourconda/envs/legoedit/lib/python3.11/site-packages/transformers/modeling_utils.py`, line 5105, changing `map_location="meta"` to `map_location="cpu"`.

2️⃣ Download pretrained checkpoints and custom nodes

Custom Nodes:
```bash
cd custom_nodes
git clone https://github.com/chflame163/ComfyUI_LayerStyle.git
git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git
```

Base Model:

1. Download the [FLUX.1-Fill-dev](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/flux1-fill-dev.safetensors) and [FLUX.1-Canny-dev](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev/blob/main/flux1-canny-dev.safetensors) checkpoints and copy them to `./models/unet/`.
2. Download the [vae](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/ae.safetensors) and copy it to `./models/vae/`.
3. Download the [clip_l](https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/clip_l.safetensors) and [t5xxl](https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp8_e4m3fn.safetensors) text encoders and copy them to `./models/clip/`.
4. Download [lama](https://drive.google.com/file/d/11RbsVSav3O-fReBsPHBE1nn8kcFIMnKp/view?usp=drive_link), unzip it, and copy it to `./lama`.

Our Model:

1. Download all the models (Builder, mimo_lora, CVSOS, CVRES, loras) from [lego-edit](https://huggingface.co/xiaomi-research/lego-edit/).

Your model structure should match the following:
```
β”œβ”€β”€ README.md
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ legodemo.py
β”œβ”€β”€ Builder/
β”œβ”€β”€ mimo_lora/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ unet/
β”‚   β”œβ”€β”€ vae/
β”‚   β”œβ”€β”€ clip/
β”‚   └── loras/
β”œβ”€β”€ CVSOS/
β”œβ”€β”€ CVRES/
└── lama/
    └── big-lama/
```
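
Before launching the demo, it can help to sanity-check that the layout above is actually in place. A minimal sketch (`missing_dirs` is a hypothetical helper, not part of the repo; the path list mirrors the tree above):

```python
from pathlib import Path

# Sub-directories the demo expects, mirroring the tree above.
EXPECTED_DIRS = [
    "Builder", "mimo_lora", "models/unet", "models/vae",
    "models/clip", "models/loras", "CVSOS", "CVRES", "lama/big-lama",
]

def missing_dirs(root: str = ".") -> list:
    """Return the expected sub-directories that are absent under root."""
    base = Path(root)
    return [d for d in EXPECTED_DIRS if not (base / d).is_dir()]

if __name__ == "__main__":
    gaps = missing_dirs()
    print("layout OK" if not gaps else f"missing: {gaps}")
```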

3️⃣ Use the Gradio WebUI to start playing with Lego-Edit!
```bash
python legodemo.py
```

## πŸ’Ό New Tools Integration

Lego-Edit supports the integration of new tools. Follow the steps below to add custom tools, and the Builder will be able to use them during image editing.

1️⃣ Add the custom tools in system_prompt.txt

system_prompt.txt defines many tools, such as FASTINPAINT, FLUX-FILL, and more. You can add new tools to perform your desired editing tasks; for example, after the FLUX-POSE tool you could define a new FLUX-SR tool for image super-resolution. For each tool, you only need to add its description, inputs, outputs, and constraints, as shown below:
```
...
10.FLUX-POSE (Change the object's posture, expression, etc.)
Input: {Image[image], Str[prompt]}
Output: {Image[image]}
Constraint: The input prompt must provide a detailed description of the external characteristics of the modification target (gender, clothing, accessories, etc.), and no PREDICT model may be used in advance.
11.FLUX-BRIGHT (Input image and ratio, adjust image brightness according to ratio)
Input: {Image[image], Float[ratio]}
Output: {Image[image]}
Constraint: The input ratio ranges over [0, 1], where 0 is darkest, 0.5 leaves brightness unchanged, and values above 0.5 brighten the image.
**Actual example1:**
...
```
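
Conceptually, each entry in system_prompt.txt is a small tool contract: a name, typed inputs and outputs, and a constraint the Builder must respect. A hypothetical in-memory rendering of the FLUX-BRIGHT entry (illustrative only; this is not the repo's actual parser or data model):

```python
from dataclasses import dataclass, field

@dataclass
class ToolSpec:
    """One tool entry as described in system_prompt.txt (hypothetical model)."""
    name: str
    description: str
    inputs: dict = field(default_factory=dict)   # argument name -> type tag
    outputs: dict = field(default_factory=dict)  # output name -> type tag
    constraint: str = ""

FLUX_BRIGHT = ToolSpec(
    name="FLUX-BRIGHT",
    description="Input image and ratio, adjust image brightness according to ratio",
    inputs={"image": "Image", "ratio": "Float"},
    outputs={"image": "Image"},
    constraint="ratio in [0, 1]; 0.5 leaves brightness unchanged",
)
```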


2️⃣ Add the tool function in legodemo.py

In the initialize_model_mapping function within legodemo.py, add the function name of your new tool.
```python
def initialize_model_mapping(self) -> Dict[str, Any]:
    return {
        "CMI-PRED": self.dummy_captionmask_pred,
        "RES": self.dummy_res,
        "MASK-SEG": self.dummy_mask_seg,
        "FASTINPAINT": self.dummy_fastinpaint,
        "FLUX-FILL": self.dummy_flux_fill,
        "FLUX-INPAINT": self.dummy_flux_inpaint,
        "INVERSE": self.dummy_inverse,
        "COMPOSE": self.dummy_compose,
        "RESIZE": self.dummy_resize,
        "BBOX": self.dummy_bbox,
        "SOS": self.dummy_sos,
        "FLUX-CBG": self.dummy_flux_cbg,
        "ADD-PRED": self.dummy_add_pred,
        "FLUX-STYLE": self.dummy_flux_style,
        "FLUX-RCM": self.dummy_flux_rcm,
        "FLUX-ENV": self.dummy_flux_env,
        "FLUX-POSE": self.dummy_flux_pose,
        "FLUX-BRIGHT": self.dummy_flux_bright
    }
```
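
Since the mapping is a plain dict from tool names to bound methods, the Builder can dispatch any tool by name, and registering a new one is a single extra entry. A standalone sketch of that dispatch pattern (stand-in lambdas replace the real methods; `run_tool` is our illustrative name, not a repo function):

```python
def run_tool(mapping, name, inputs):
    """Look up a tool by name and invoke it on its inputs dict."""
    if name not in mapping:
        raise KeyError(f"Unknown tool: {name}")
    return mapping[name](inputs)

# Stand-ins for the bound methods returned by initialize_model_mapping.
mapping = {
    "FLUX-FILL": lambda inputs: {"image": inputs["image"]},
    "FLUX-POSE": lambda inputs: {"image": inputs["image"]},
}
# Registering a new tool is one extra entry.
mapping["FLUX-BRIGHT"] = lambda inputs: {"image": inputs["image"]}
```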

Complete your function (dummy_flux_bright) in legodemo.py.
```python
def dummy_flux_bright(self, inputs: Dict[str, DataObject]) -> Dict[str, DataObject]:
    image_ori = inputs['image'].copy()
    ratio = inputs['ratio']
    # Clamp the user-supplied ratio to [0, 1], then map it to a PIL
    # Brightness factor in [0, 2] (0.5 -> 1.0, i.e. unchanged).
    input_value = max(0.0, min(1.0, ratio))
    ratio = 2 * input_value
    # The pipeline passes BGR arrays; flip channels for PIL and back.
    image_pil = Image.fromarray(image_ori[:, :, ::-1])
    enhancer = ImageEnhance.Brightness(image_pil)
    image_new = enhancer.enhance(ratio)
    image_new = np.array(image_new)[:, :, ::-1]
    return {
        "image": image_new
    }
```
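
The ratio handling above clamps the input to [0, 1] and doubles it, so 0.5 maps to a PIL brightness factor of 1.0 (no change). The mapping in isolation (extracted here for clarity; the function name is ours, not the repo's):

```python
def ratio_to_factor(ratio: float) -> float:
    """Map a user ratio in [0, 1] to a PIL ImageEnhance.Brightness factor.

    0.0 -> 0.0 (black), 0.5 -> 1.0 (unchanged), 1.0 -> 2.0 (twice as bright).
    Out-of-range inputs are clamped first, matching the tool's constraint.
    """
    return 2 * max(0.0, min(1.0, ratio))
```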

3️⃣ Restart the Gradio WebUI

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/bright.png" width="95%"></p>

## πŸ“ More Usages

Some editing models are trained at a resolution of 768 via the ICEdit method, prioritizing higher output quality over the standard 512 resolution. We provide the corresponding trained [Single-Task-LoRA](https://huggingface.co/xiaomi-research/lego-edit/tree/main/loras) weights. Based on our testing, these models deliver superior performance within their specific functional domains.

<p align="center"><img src="https://github.com/xiaomi-research/lego-edit/raw/main/resources/lora_effect.png" width="95%"></p>

You can refer to the usage instructions at [ICEdit](https://github.com/River-Zhang/ICEdit) to use these LoRAs independently.

## πŸ“„ Disclaimer

We open-source this project for academic research. The vast majority of images used in this project are either generated or licensed. If you have any concerns, please contact us, and we will promptly remove any inappropriate content. Our code is released under the Apache 2.0 License, while our models are under the CC BY-NC 4.0 License. Any models related to the <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev" target="_blank">FLUX.1-dev</a> base model must adhere to the original licensing terms.

This research aims to advance the field of generative AI. Users are free to create images using this tool, provided they comply with local laws and exercise responsible usage. The developers are not liable for any misuse of the tool by users.

## ✍️ Citation

If this repo is helpful, please help to ⭐ it.

If you find this project useful for your research, please consider citing our paper:

```bibtex
@article{jia2025legoedit,
  title   = {Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder},
  author  = {Qifei Jia and Yu Liu and Yajie Chai and Xintong Yao and Qiming Lu and Yasen Zhang and Runyu Shi and Ying Huang and Guoquan Zhang},
  journal = {arXiv preprint arXiv:2509.12883},
  year    = {2025},
  url     = {https://arxiv.org/abs/2509.12883}
}
```

## πŸ™ Acknowledgments

- Built on [MiMo-VL](https://github.com/XiaomiMiMo/MiMo-VL), [ComfyUI](https://github.com/comfyanonymous/ComfyUI), [FLUX](https://github.com/black-forest-labs/flux), [ICEdit](https://github.com/River-Zhang/ICEdit), [EVF-SAM](https://github.com/hustvl/EVF-SAM), and [LaMa](https://github.com/advimman/lama).