qiaochanghao committed on
Commit
886b3d1
·
1 Parent(s): 992ac05
Files changed (2)
  1. app.py +329 -0
  2. requirements.txt +8 -0
app.py ADDED
@@ -0,0 +1,329 @@
+ import gradio as gr
+ import numpy as np
+ import random
+ import torch
+ import spaces
+
+ from PIL import Image
+ from diffusers import QwenImageEditPlusPipeline
+
+ import os
+ import base64
+ import json
+
+ from huggingface_hub import login
+ login(token=os.environ.get('hf'))
+
+ SYSTEM_PROMPT = '''
+ # Edit Prompt Enhancer
+ You are a professional edit prompt enhancer. Your task is to generate a direct and specific edit prompt based on the user-provided instruction and the image input conditions.
+
+ Please strictly follow the enhancing rules below:
+
+ ## 1. General Principles
+ - Keep the enhanced prompt **direct and specific**.
+ - If the instruction is contradictory, vague, or unachievable, prioritize reasonable inference and correction, and supplement details when necessary.
+ - Keep the core intention of the original instruction unchanged, only enhancing its clarity, rationality, and visual feasibility.
+ - All added objects or modifications must align with the logic and style of the edited input image’s overall scene.
+
+ ## 2. Task-Type Handling Rules
+ ### 1. Add, Delete, Replace Tasks
+ - If the instruction is clear (already includes task type, target entity, position, quantity, attributes), preserve the original intent and only refine the grammar.
+ - If the description is vague, supplement with minimal but sufficient details (category, color, size, orientation, position, etc.). For example:
+ > Original: "Add an animal"
+ > Rewritten: "Add a light-gray cat in the bottom-right corner, sitting and facing the camera"
+ - Remove meaningless instructions: e.g., "Add 0 objects" should be ignored or flagged as invalid.
+ - For replacement tasks, specify "Replace Y with X" and briefly describe the key visual features of X.
+
+ ### 2. Text Editing Tasks
+ - All text content must be enclosed in English double quotes `" "`. Keep the original language of the text, and keep the capitalization.
+ - Both adding new text and replacing existing text are text replacement tasks. For example:
+   - Replace "xx" with "yy"
+   - Replace the mask / bounding box with "yy"
+   - Replace the visual object with "yy"
+ - Specify text position, color, and layout only if the user has requested them.
+ - If a font is specified, keep the original language of the font.
+
+ ### 3. Human (ID) Editing Tasks
+ - Emphasize maintaining the person’s core visual consistency (ethnicity, gender, age, hairstyle, expression, outfit, etc.).
+ - If modifying appearance (e.g., clothes, hairstyle), ensure the new element is consistent with the original style.
+ - **Expression, beauty, or makeup changes must be natural and subtle, never exaggerated.**
+ - Example:
+ > Original: "Change the person’s hat"
+ > Rewritten: "Replace the man’s hat with a dark brown beret; keep smile, short hair, and gray jacket unchanged"
+
+ ### 4. Style Conversion or Enhancement Tasks
+ - If a style is specified, describe it concisely using key visual features. For example:
+ > Original: "Disco style"
+ > Rewritten: "1970s disco style: flashing lights, disco ball, mirrored walls, colorful tones"
+ - For style reference, analyze the original image and extract key characteristics (color, composition, texture, lighting, artistic style, etc.), integrating them into the instruction.
+ - **Colorization tasks (including old photo restoration) must use the fixed template:**
+ "Restore and colorize the photo."
+ - Clearly specify the object to be modified. For example:
+ > Original: Modify the subject in Picture 1 to match the style of Picture 2.
+ > Rewritten: Change the girl in Picture 1 to the ink-wash style of Picture 2 — rendered in black-and-white watercolor with soft color transitions.
+ - If there are other changes, place the style description at the end.
+
+ ### 5. Content Filling Tasks
+ - For inpainting tasks, always use the fixed template: "Perform inpainting on this image. The original caption is: ".
+ - For outpainting tasks, always use the fixed template: "Extend the image beyond its boundaries using outpainting. The original caption is: ".
+
+ ### 6. Multi-Image Tasks
+ - Rewritten prompts must clearly point out which image’s element is being modified. For example:
+ > Original: "Replace the subject of picture 1 with the subject of picture 2"
+ > Rewritten: "Replace the girl of picture 1 with the boy of picture 2, keeping picture 2’s background unchanged"
+ - For stylization tasks, describe the reference image’s style in the rewritten prompt, while preserving the visual content of the source image.
+
+ ## 3. Rationale and Logic Checks
+ - Resolve contradictory instructions: e.g., "Remove all trees but keep all trees" should be logically corrected.
+ - Add missing key information: e.g., if position is unspecified, choose a reasonable area based on composition (near subject, empty space, center/edge, etc.).
+
+ # Output Format Example
+ ```json
+ {
+     "Rewritten": "..."
+ }
+ ```
+ '''
+
+ def polish_prompt(prompt, img):
+     full_prompt = f"{SYSTEM_PROMPT}\n\nUser Input: {prompt}\n\nRewritten Prompt:"
+     # Bounded retries instead of an unbounded `while` loop, so one
+     # persistently failing API call cannot hang the whole request.
+     for attempt in range(3):
+         try:
+             result = api(full_prompt, [img])
+             if isinstance(result, str):
+                 # The model may wrap its JSON answer in Markdown code fences.
+                 result = result.replace('```json', '').replace('```', '')
+             result = json.loads(result)
+             return result['Rewritten'].strip().replace("\n", " ")
+         except Exception as e:
+             print(f"[Warning] Error during API call (attempt {attempt + 1}): {e}")
+     # Fall back to the original instruction if rewriting keeps failing.
+     return prompt
+
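The fence-stripping and JSON parsing inside `polish_prompt` can be exercised in isolation. Below is a minimal sketch; the helper name `extract_rewritten` is ours, not part of the app:

```python
import json

def extract_rewritten(raw):
    # Strip optional Markdown code fences, then parse the JSON payload
    # and normalize the "Rewritten" field to a single line.
    cleaned = raw.replace('```json', '').replace('```', '')
    return json.loads(cleaned)['Rewritten'].strip().replace("\n", " ")

# A typical fenced reply, with an embedded newline inside the JSON string.
reply = '```json\n{"Rewritten": "Add a light-gray cat\\nin the bottom-right corner"}\n```'
print(extract_rewritten(reply))  # → Add a light-gray cat in the bottom-right corner
```

Because bare `.replace('```', '')` also fires on fences inside the rewritten text itself, this parse assumes the model never emits backticks within the prompt body.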
+
+ def encode_image(pil_image):
+     import io
+     buffered = io.BytesIO()
+     pil_image.save(buffered, format="PNG")
+     return base64.b64encode(buffered.getvalue()).decode("utf-8")
+
+
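A quick round-trip check of `encode_image`: the base64 payload it produces should decode back to valid PNG bytes that PIL can reopen losslessly. A self-contained sketch:

```python
import base64
import io

from PIL import Image

def encode_image(pil_image):
    # Same helper as above: serialize a PIL image to a base64-encoded PNG string.
    buffered = io.BytesIO()
    pil_image.save(buffered, format="PNG")
    return base64.b64encode(buffered.getvalue()).decode("utf-8")

# Round trip: decode the payload and confirm the image survives intact.
original = Image.new("RGB", (2, 2), (255, 0, 0))
payload = encode_image(original)
decoded = Image.open(io.BytesIO(base64.b64decode(payload)))
assert decoded.size == (2, 2)
assert decoded.convert("RGB").getpixel((0, 0)) == (255, 0, 0)
```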
+ def api(prompt, img_list, model="qwen-vl-max-latest", kwargs=None):
+     import dashscope
+     kwargs = kwargs or {}  # avoid a mutable default argument
+     api_key = os.environ.get('DASH_API_KEY')
+     if not api_key:
+         raise EnvironmentError("DASH_API_KEY is not set")
+     assert model in ["qwen-vl-max-latest"], f"Not implemented model {model}"
+     sys_prompt = "You are a helpful assistant; you should provide useful answers to users."
+     messages = [
+         {"role": "system", "content": sys_prompt},
+         {"role": "user", "content": []},
+     ]
+     for img in img_list:
+         messages[1]["content"].append(
+             {"image": f"data:image/png;base64,{encode_image(img)}"})
+     messages[1]["content"].append({"text": f"{prompt}"})
+
+     response_format = kwargs.get('response_format', None)
+
+     response = dashscope.MultiModalConversation.call(
+         api_key=api_key,
+         model=model,
+         messages=messages,
+         result_format='message',
+         response_format=response_format,
+     )
+
+     if response.status_code == 200:
+         return response.output.choices[0].message.content[0]['text']
+     else:
+         raise Exception(f'Failed to post: {response}')
+
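The message structure that `api()` builds can be sketched without calling DashScope at all: one system turn, then a user turn whose content list mixes image parts (as base64 data URLs) and a text part. The payload below is a stub standing in for a real `encode_image` result:

```python
import base64

# Stub payload: any base64 string works for illustrating the layout.
fake_png_b64 = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("utf-8")

# The shape api() sends to dashscope.MultiModalConversation.call:
messages = [
    {"role": "system", "content": "you are a helpful assistant"},
    {"role": "user", "content": [
        {"image": f"data:image/png;base64,{fake_png_b64}"},
        {"text": "Add a light-gray cat in the bottom-right corner"},
    ]},
]

assert messages[1]["content"][0]["image"].startswith("data:image/png;base64,")
```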
+ # --- Model Loading ---
+ dtype = torch.bfloat16
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ # Load the model pipeline
+ # pipe = QwenImageEditPlusPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.0", torch_dtype=dtype).to(device)
+ pipe = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2511", torch_dtype=dtype).to(device)
+
+ # --- UI Constants and Helpers ---
+ MAX_SEED = np.iinfo(np.int32).max
+
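A quick check of the seed bookkeeping: `MAX_SEED` is the largest signed 32-bit integer, and `infer()` draws a fresh seed in that range when randomization is enabled. A minimal sketch (`resolve_seed` is our name for the inlined logic, not a function in the app):

```python
import random

import numpy as np

MAX_SEED = np.iinfo(np.int32).max  # 2147483647

def resolve_seed(seed, randomize_seed):
    # Same branch as in infer(): draw a fresh seed when randomization is on,
    # otherwise keep the user-supplied one for reproducibility.
    return random.randint(0, MAX_SEED) if randomize_seed else seed

assert MAX_SEED == 2 ** 31 - 1
assert resolve_seed(42, False) == 42
assert 0 <= resolve_seed(42, True) <= MAX_SEED
```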
+ # --- Main Inference Function (with hardcoded negative prompt) ---
+ @spaces.GPU(duration=180)
+ def infer(
+     images,
+     prompt,
+     seed=42,
+     randomize_seed=False,
+     true_guidance_scale=1.0,
+     num_inference_steps=50,
+     height=None,
+     width=None,
+     rewrite_prompt=True,
+     num_images_per_prompt=1,
+     progress=gr.Progress(track_tqdm=True),
+ ):
+     """
+     Generates an image using the local Qwen-Image diffusers pipeline.
+     """
+     # Hardcode the negative prompt as requested
+     negative_prompt = " "
+
+     if randomize_seed:
+         seed = random.randint(0, MAX_SEED)
+
+     # Set up the generator for reproducibility
+     generator = torch.Generator(device=device).manual_seed(seed)
+
+     # Load input images into PIL Images. Gallery items arrive as
+     # (image, caption) tuples, file paths, or file-like objects.
+     pil_images = []
+     if images is not None:
+         for item in images:
+             try:
+                 if isinstance(item[0], Image.Image):
+                     pil_images.append(item[0].convert("RGB"))
+                 elif isinstance(item[0], str):
+                     pil_images.append(Image.open(item[0]).convert("RGB"))
+                 elif hasattr(item, "name"):
+                     pil_images.append(Image.open(item.name).convert("RGB"))
+             except Exception:
+                 continue
+
+     # The size sliders bottom out at 256, so treat 256x256 as "unset"
+     # and let the pipeline infer the output size from the input image.
+     if height == 256 and width == 256:
+         height, width = None, None
+     print(f"Calling pipeline with prompt: '{prompt}'")
+     print(f"Negative Prompt: '{negative_prompt}'")
+     print(f"Seed: {seed}, Steps: {num_inference_steps}, Guidance: {true_guidance_scale}, Size: {width}x{height}")
+     if rewrite_prompt and len(pil_images) > 0:
+         prompt = polish_prompt(prompt, pil_images[0])
+         print(f"Rewritten Prompt: {prompt}")
+
+     # Generate the image
+     image = pipe(
+         image=pil_images if len(pil_images) > 0 else None,
+         prompt=prompt,
+         height=height,
+         width=width,
+         negative_prompt=negative_prompt,
+         num_inference_steps=num_inference_steps,
+         generator=generator,
+         true_cfg_scale=true_guidance_scale,
+         num_images_per_prompt=num_images_per_prompt,
+     ).images
+
+     return image, seed
+
+ # --- UI Layout ---
+ css = """
+ #col-container {
+     margin: 0 auto;
+     max-width: 1024px;
+ }
+ #edit_text{margin-top: -62px !important}
+ """
+
+ with gr.Blocks(css=css) as demo:
+     with gr.Column(elem_id="col-container"):
+         gr.HTML('<img src="https://github.com/FireRedTeam/FireRed-Image-Edit/raw/main/assets/logo.png" alt="FireRed Logo" width="400" style="display: block; margin: 0 auto;">')
+         gr.Markdown("[Learn more](https://github.com/FireRedTeam/FireRed-Image-Edit) about the FireRed-Image-Edit series, or [download the model](https://huggingface.co/FireRedTeam/FireRed-Image-Edit) to run it locally with ComfyUI or diffusers.")
+         with gr.Row():
+             with gr.Column():
+                 input_images = gr.Gallery(label="Input Images", show_label=False, type="pil", interactive=True)
+
+             result = gr.Gallery(label="Result", show_label=False, type="pil")
+         with gr.Row():
+             prompt = gr.Text(
+                 label="Prompt",
+                 show_label=False,
+                 placeholder="Describe the edit instruction",
+                 container=False,
+             )
+             run_button = gr.Button("Edit!", variant="primary")
+
+         with gr.Accordion("Advanced Settings", open=False):
+             # The negative prompt UI element is intentionally omitted;
+             # it is hardcoded inside infer().
+             seed = gr.Slider(
+                 label="Seed",
+                 minimum=0,
+                 maximum=MAX_SEED,
+                 step=1,
+                 value=0,
+             )
+
+             randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
+
+             with gr.Row():
+                 true_guidance_scale = gr.Slider(
+                     label="True guidance scale",
+                     minimum=1.0,
+                     maximum=10.0,
+                     step=0.1,
+                     value=4.0,
+                 )
+
+                 num_inference_steps = gr.Slider(
+                     label="Number of inference steps",
+                     minimum=1,
+                     maximum=50,
+                     step=1,
+                     value=40,
+                 )
+
+                 height = gr.Slider(
+                     label="Height",
+                     minimum=256,
+                     maximum=2048,
+                     step=8,
+                     value=None,
+                 )
+
+                 width = gr.Slider(
+                     label="Width",
+                     minimum=256,
+                     maximum=2048,
+                     step=8,
+                     value=None,
+                 )
+
+             rewrite_prompt = gr.Checkbox(label="Rewrite prompt", value=True)
+
+     gr.on(
+         triggers=[run_button.click, prompt.submit],
+         fn=infer,
+         inputs=[
+             input_images,
+             prompt,
+             seed,
+             randomize_seed,
+             true_guidance_scale,
+             num_inference_steps,
+             height,
+             width,
+             rewrite_prompt,
+         ],
+         outputs=[result, seed],
+     )
+
+ if __name__ == "__main__":
+     demo.launch()
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ git+https://github.com/huggingface/diffusers.git
+ transformers
+ accelerate
+ safetensors
+ sentencepiece
+ dashscope
+ kernels
+ torchvision