reycn Claude Opus 4.6 committed on
Commit
ca5965a
·
0 Parent(s):

BrushNet inpainting app with runtime model downloads


Download model checkpoints from HuggingFace at runtime instead of
bundling them in the repo to stay within storage limits:
- SAM vit_h from HCMUE-Research/SAM-vit-h
- Realistic Vision V6.0 from SG161222/Realistic_Vision_V6.0_B1_noVAE
- BrushNet from camenduru/BrushNet

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
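The runtime-download step described in this commit message could be sketched as below. This is an illustration, not the code in the commit: the helper name, the layout under `data/ckpt/`, and the behavior of skipping already-present snapshots are assumptions; the repo ids come from the message above and the SAM filename from `app.py`.

```python
import os

# (repo_id, single_file_or_None) for each checkpoint named in the commit message
MODEL_SPECS = {
    "sam": ("HCMUE-Research/SAM-vit-h", "sam_vit_h_4b8939.pth"),
    "base": ("SG161222/Realistic_Vision_V6.0_B1_noVAE", None),
    "brushnet": ("camenduru/BrushNet", None),
}

def ensure_checkpoints(specs=MODEL_SPECS, root="data/ckpt"):
    """Download each model at runtime only if it is not already present locally."""
    from huggingface_hub import hf_hub_download, snapshot_download  # runtime dependency
    os.makedirs(root, exist_ok=True)
    paths = {}
    for name, (repo_id, filename) in specs.items():
        if filename:  # a single checkpoint file
            paths[name] = hf_hub_download(repo_id, filename, local_dir=root)
        else:  # a whole model repo, mirrored into its own subfolder
            target = os.path.join(root, repo_id.split("/")[-1])
            if not os.path.isdir(target):
                snapshot_download(repo_id, local_dir=target)
            paths[name] = target
    return paths
```

Calling `ensure_checkpoints()` once before model construction keeps the repo itself free of large weights while still populating `data/ckpt/` (which the commit also adds to `.gitignore`).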

.gitattributes ADDED
@@ -0,0 +1,38 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
+ data/ckpt/
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ title: BrushNet
+ emoji: ⚡
+ colorFrom: yellow
+ colorTo: indigo
+ sdk: gradio
+ sdk_version: 3.50.2
+ python_version: 3.9
+ app_file: app.py
+ pinned: false
+ license: apache-2.0
+ ---
+
+ # BrushNet
+
+ This repository contains the Gradio demo for the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion".
+
+ Keywords: Image Inpainting, Diffusion Models, Image Generation
+
+ > [Xuan Ju](https://github.com/juxuan27)<sup>12</sup>, [Xian Liu](https://alvinliu0.github.io/)<sup>12</sup>, [Xintao Wang](https://xinntao.github.io/)<sup>1*</sup>, [Yuxuan Bian](https://scholar.google.com.hk/citations?user=HzemVzoAAAAJ&hl=zh-CN&oi=ao)<sup>2</sup>, [Ying Shan](https://www.linkedin.com/in/YingShanProfile/)<sup>1</sup>, [Qiang Xu](https://cure-lab.github.io/)<sup>2*</sup><br>
+ > <sup>1</sup>ARC Lab, Tencent PCG <sup>2</sup>The Chinese University of Hong Kong <sup>*</sup>Corresponding Author
+
+
+ <p align="center">
+ <a href="https://tencentarc.github.io/BrushNet/">Project Page</a> |
+ <a href="https://github.com/TencentARC/BrushNet">Code</a> |
+ <a href="https://arxiv.org/abs/2403.06976">arXiv</a> |
+ <a href="https://forms.gle/9TgMZ8tm49UYsZ9s5">Data</a> |
+ <a href="https://drive.google.com/file/d/1IkEBWcd2Fui2WHcckap4QFPcCI0gkHBh/view">Video</a>
+ </p>
+
+
+ ## 🤝🏼 Cite Us
+
+ ```
+ @misc{ju2024brushnet,
+       title={BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion},
+       author={Xuan Ju and Xian Liu and Xintao Wang and Yuxuan Bian and Ying Shan and Qiang Xu},
+       year={2024},
+       eprint={2403.06976},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+
+ ## 💖 Acknowledgement
+ <span id="acknowledgement"></span>
+
+ Our code is adapted from [diffusers](https://github.com/huggingface/diffusers); thanks to all the contributors!
app.py ADDED
@@ -0,0 +1,349 @@
+ #!/usr/bin/python3
+ # -*- coding: utf-8 -*-
+ import os
+
+ print("Installing correct gradio version...")
+ os.system("pip uninstall -y gradio")
+ os.system("pip install gradio==3.50.0")
+ print("Installation finished!")
+
+ import cv2
11
+ from PIL import Image
12
+ import numpy as np
13
+ from segment_anything import SamPredictor, sam_model_registry
14
+ import torch
15
+ from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler
16
+ import random
17
+ import gradio as gr
18
+ import spaces
19
+
20
+ mobile_sam = sam_model_registry['vit_h'](checkpoint='data/ckpt/sam_vit_h_4b8939.pth')
21
+ mobile_sam.eval()
22
+ mobile_predictor = SamPredictor(mobile_sam)
23
+ colors = [(255, 0, 0), (0, 255, 0)]
24
+ markers = [1, 5]
25
+
26
+ # - - - - - examples - - - - - #
27
+ image_examples = [
28
+ ["examples/brushnet/src/test_image.jpg", "A beautiful cake on the table", "examples/brushnet/src/test_mask.jpg", 0, [], [Image.open("examples/brushnet/src/test_result.png")]],
29
+ ["examples/brushnet/src/example_1.jpg", "A man in Chinese traditional clothes", "examples/brushnet/src/example_1_mask.jpg", 1, [], [Image.open("examples/brushnet/src/example_1_result.png")]],
30
+ ["examples/brushnet/src/example_2.jpg", "a charming woman with dress standing by the sea", "examples/brushnet/src/example_2_mask.jpg", 2, [], [Image.open("examples/brushnet/src/example_2_result.png")]],
31
+ ["examples/brushnet/src/example_3.jpg", "a cut toy on the table", "examples/brushnet/src/example_3_mask.jpg", 3, [], [Image.open("examples/brushnet/src/example_3_result.png")]],
32
+ ["examples/brushnet/src/example_4.jpeg", "a car driving in the wild", "examples/brushnet/src/example_4_mask.jpg", 4, [], [Image.open("examples/brushnet/src/example_4_result.png")]],
33
+ ["examples/brushnet/src/example_5.jpg", "a charming woman wearing dress standing in the dark forest", "examples/brushnet/src/example_5_mask.jpg", 5, [], [Image.open("examples/brushnet/src/example_5_result.png")]],
34
+ ]
35
+
+
+
+ # choose the base model here
+ base_model_path = "data/ckpt/realisticVisionV60B1_v51VAE"
+ # base_model_path = "runwayml/stable-diffusion-v1-5"
+
+ # input brushnet ckpt path
+ brushnet_path = "data/ckpt/segmentation_mask_brushnet_ckpt"
+
+ # input source image / mask image path and the text prompt
+ image_path = "examples/brushnet/src/test_image.jpg"
+ mask_path = "examples/brushnet/src/test_mask.jpg"
+ caption = "A cake on the table."
+
+ # conditioning scale
+ paintingnet_conditioning_scale = 1.0
+
+ brushnet = BrushNetModel.from_pretrained(brushnet_path, torch_dtype=torch.float16)
+ pipe = StableDiffusionBrushNetPipeline.from_pretrained(
+     base_model_path, brushnet=brushnet, torch_dtype=torch.float16, low_cpu_mem_usage=False
+ )
+
+ # speed up the diffusion process with a faster scheduler and memory optimization
+ pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
+ # remove the following line if xformers is not installed or when using Torch 2.0
+ # pipe.enable_xformers_memory_efficient_attention()
+ # memory optimization
+ # pipe.enable_model_cpu_offload()
+
+ def resize_image(input_image, resolution):
+     H, W, C = input_image.shape
+     H = float(H)
+     W = float(W)
+     k = float(resolution) / min(H, W)
+     H *= k
+     W *= k
+     H = int(np.round(H / 64.0)) * 64
+     W = int(np.round(W / 64.0)) * 64
+     img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
+     return img
+
+ @spaces.GPU
+ def process(input_image,
+             original_image,
+             original_mask,
+             input_mask,
+             selected_points,
+             prompt,
+             negative_prompt,
+             blended,
+             invert_mask,
+             control_strength,
+             seed,
+             randomize_seed,
+             guidance_scale,
+             num_inference_steps):
+     if original_image is None:
+         raise gr.Error('Please upload an input image')
+     if (original_mask is None or len(selected_points) == 0) and input_mask is None:
+         raise gr.Error("Please click to select the region to keep or change, or upload a black-and-white mask image")
+
+     # load example image
+     if isinstance(original_image, int):
+         image_name = image_examples[original_image][0]
+         original_image = cv2.imread(image_name)
+         original_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
+
+     if input_mask is not None:
+         H, W = original_image.shape[:2]
+         original_mask = cv2.resize(input_mask, (W, H))
+     else:
+         original_mask = np.clip(255 - original_mask, 0, 255).astype(np.uint8)
+
+     if invert_mask:
+         original_mask = 255 - original_mask
+
+     mask = 1. * (original_mask.sum(-1) > 255)[:, :, np.newaxis]
+     masked_image = original_image * (1 - mask)
+
+     init_image = Image.fromarray(masked_image.astype(np.uint8)).convert("RGB")
+     mask_image = Image.fromarray(original_mask.astype(np.uint8)).convert("RGB")
+
+     generator = torch.Generator().manual_seed(random.randint(0, 2147483647) if randomize_seed else seed)
+
+     image = pipe(
+         [prompt] * 2,
+         init_image,
+         mask_image,
+         num_inference_steps=num_inference_steps,
+         guidance_scale=guidance_scale,
+         generator=generator,
+         brushnet_conditioning_scale=float(control_strength),
+         negative_prompt=[negative_prompt] * 2,
+     ).images
+
+     if blended:
+         if control_strength < 1.0:
+             raise gr.Error('Using blurred blending with control strength less than 1.0 is not allowed')
+         blended_image = []
+         # blur the mask; the kernel size can be tuned for better blending
+         mask_blurred = cv2.GaussianBlur(mask * 255, (21, 21), 0) / 255
+         mask_blurred = mask_blurred[:, :, np.newaxis]
+         mask = 1 - (1 - mask) * (1 - mask_blurred)
+         for image_i in image:
+             image_np = np.array(image_i)
+             image_pasted = original_image * (1 - mask) + image_np * mask
+
+             image_pasted = image_pasted.astype(image_np.dtype)
+             blended_image.append(Image.fromarray(image_pasted))
+
+         image = blended_image
+
+     return image
+
+ block = gr.Blocks(
+     theme=gr.themes.Soft(
+         radius_size=gr.themes.sizes.radius_none,
+         text_size=gr.themes.sizes.text_md
+     )
+ ).queue()
+ with block:
+     with gr.Row():
+         with gr.Column():
+
+             gr.HTML(f"""
+                 <div style="text-align: center;">
+                     <h1>BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion</h1>
+                     <div style="display: flex; justify-content: center; align-items: center; text-align: center;">
+                         <a href=""></a>
+                         <a href='https://tencentarc.github.io/BrushNet/'><img src='https://img.shields.io/badge/Project_Page-BrushNet-green' alt='Project Page'></a>
+                         <a href='https://arxiv.org/abs/2403.06976'><img src='https://img.shields.io/badge/Paper-Arxiv-blue'></a>
+                     </div>
+                     <br>
+                 </div>
+                 """)
+
+             with gr.Accordion(label="🧭 Instructions:", open=True, elem_id="accordion"):
+                 with gr.Row(equal_height=True):
+                     gr.Markdown("""
+                     - ⭐️ <b>step1: </b>Upload or select one image from the examples
+                     - ⭐️ <b>step2: </b>Click on the input image to select the object to keep (or upload a black-and-white mask image, where white marks the region to keep unchanged). Tick the 'Invert Mask' box to swap the kept and changed regions.
+                     - ⭐️ <b>step3: </b>Enter a prompt describing the new content to generate
+                     - ⭐️ <b>step4: </b>Click the Run button
+                     """)
+     with gr.Row():
+         with gr.Column():
+             with gr.Column(elem_id="Input"):
+                 with gr.Row():
+                     with gr.Tabs(elem_classes=["feedback"]):
+                         with gr.TabItem("Input Image"):
+                             input_image = gr.Image(type="numpy", label="input", scale=2, height=640)
+                 original_image = gr.State(value=None)
+                 original_mask = gr.State(value=None)
+                 selected_points = gr.State([])
+                 with gr.Row(elem_id="Seg"):
+                     radio = gr.Radio(['foreground', 'background'], label='Click to seg: ', value='foreground', scale=2)
+                     undo_button = gr.Button('Undo seg', elem_id="btnSEG", scale=1)
+                 prompt = gr.Textbox(label="Prompt", placeholder="Please input your prompt", value='', lines=1)
+                 negative_prompt = gr.Text(
+                     label="Negative Prompt",
+                     max_lines=5,
+                     placeholder="Please input your negative prompt",
+                     value='ugly, low quality', lines=1
+                 )
+                 with gr.Group():
+                     with gr.Row():
+                         blending = gr.Checkbox(label="Blurred Blending", value=False)
+                         invert_mask = gr.Checkbox(label="Invert Mask", value=True)
+                 run_button = gr.Button("Run", elem_id="btn")
+
+                 with gr.Accordion("More input params (highly recommended)", open=False, elem_id="accordion1"):
+                     control_strength = gr.Slider(
+                         label="Control Strength: ", show_label=True, minimum=0, maximum=1.1, value=1, step=0.01
+                     )
+                     with gr.Group():
+                         seed = gr.Slider(
+                             label="Seed: ", minimum=0, maximum=2147483647, step=1, value=551793204
+                         )
+                         randomize_seed = gr.Checkbox(label="Randomize seed", value=False)
+
+                     with gr.Group():
+                         with gr.Row():
+                             guidance_scale = gr.Slider(
+                                 label="Guidance scale",
+                                 minimum=1,
+                                 maximum=12,
+                                 step=0.1,
+                                 value=12,
+                             )
+                             num_inference_steps = gr.Slider(
+                                 label="Number of inference steps",
+                                 minimum=1,
+                                 maximum=50,
+                                 step=1,
+                                 value=50,
+                             )
+             with gr.Row(elem_id="Image"):
+                 with gr.Tabs(elem_classes=["feedback1"]):
+                     with gr.TabItem("User-specified Mask Image (Optional)"):
+                         input_mask = gr.Image(type="numpy", label="Mask Image", height=640)
+
+         with gr.Column():
+             with gr.Tabs(elem_classes=["feedback"]):
+                 with gr.TabItem("Outputs"):
+                     result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", preview=True)
+             with gr.Row():
+                 def process_example(input_image, prompt, input_mask, original_image, selected_points, result_gallery):
+                     return input_image, prompt, input_mask, original_image, [], result_gallery
+                 example = gr.Examples(
+                     label="Input Example",
+                     examples=image_examples,
+                     inputs=[input_image, prompt, input_mask, original_image, selected_points, result_gallery],
+                     outputs=[input_image, prompt, input_mask, original_image, selected_points, result_gallery],  # six outputs, matching the six values process_example returns
+                     fn=process_example,
+                     run_on_click=True,
+                     examples_per_page=10
+                 )
+
+     # once the user uploads an image, the original is stored in `original_image`
+     def store_img(img):
+         # resize large uploads, since image upload is too slow
+         if min(img.shape[0], img.shape[1]) > 512:
+             img = resize_image(img, 512)
+         if max(img.shape[0], img.shape[1]) * 1.0 / min(img.shape[0], img.shape[1]) > 2.0:
+             raise gr.Error('image aspect ratio cannot be larger than 2.0')
+         return img, img, [], None  # a new upload resets `selected_points` and `original_mask`
+
+     input_image.upload(
+         store_img,
+         [input_image],
+         [input_image, original_image, selected_points, original_mask]  # four outputs, matching store_img's four return values
+     )
+
+     # the user clicks the image to add points, which are drawn on the image
+     def segmentation(img, sel_pix):
+         # online seg-mask preview
+         points = []
+         labels = []
+         for p, l in sel_pix:
+             points.append(p)
+             labels.append(l)
+         mobile_sam.to("cuda")  # move the SAM model itself; reassigning `mobile_predictor` here would shadow the global predictor
+         mobile_predictor.set_image(img if isinstance(img, np.ndarray) else np.array(img))
+         with torch.no_grad():
+             masks, _, _ = mobile_predictor.predict(point_coords=np.array(points), point_labels=np.array(labels), multimask_output=False)
+
+         output_mask = np.ones((masks.shape[1], masks.shape[2], 3)) * 255
+         for i in range(3):
+             output_mask[masks[0] == True, i] = 0.0
+
+         mask_all = np.ones((masks.shape[1], masks.shape[2], 3))
+         color_mask = np.random.random((1, 3)).tolist()[0]
+         for i in range(3):
+             mask_all[masks[0] == True, i] = color_mask[i]
+         masked_img = img / 255 * 0.3 + mask_all * 0.7
+         masked_img = masked_img * 255
+         # draw the selected points
+         for point, label in sel_pix:
+             cv2.drawMarker(masked_img, point, colors[label], markerType=markers[label], markerSize=20, thickness=5)
+         return masked_img, output_mask
+
+     def get_point(img, sel_pix, point_type, evt: gr.SelectData):
+         if point_type == 'foreground':
+             sel_pix.append((evt.index, 1))  # append a foreground point
+         elif point_type == 'background':
+             sel_pix.append((evt.index, 0))  # append a background point
+         else:
+             sel_pix.append((evt.index, 1))  # default to a foreground point
+
+         if isinstance(img, int):
+             image_name = image_examples[img][0]
+             img = cv2.imread(image_name)
+             img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+
+         # online seg-mask preview
+         masked_img, output_mask = segmentation(img, sel_pix)
+         return masked_img.astype(np.uint8), output_mask
+
+     input_image.select(
+         get_point,
+         [original_image, selected_points, radio],
+         [input_image, original_mask],
+     )
+
+     # undo the last selected point
+     def undo_points(orig_img, sel_pix):
+         output_mask = None
+         if len(sel_pix) != 0:
+             if isinstance(orig_img, int):  # an int means the image was selected from the examples
+                 temp = cv2.imread(image_examples[orig_img][0])
+                 temp = cv2.cvtColor(temp, cv2.COLOR_BGR2RGB)
+             else:
+                 temp = orig_img.copy()
+             sel_pix.pop()
+             # online seg-mask preview
+             if len(sel_pix) != 0:
+                 temp, output_mask = segmentation(temp, sel_pix)
+             return temp.astype(np.uint8), output_mask
+         else:
+             raise gr.Error("Nothing to Undo")  # raise, so Gradio actually shows the error
+
+     undo_button.click(
+         undo_points,
+         [original_image, selected_points],
+         [input_image, original_mask]
+     )
+
+     ips = [input_image, original_image, original_mask, input_mask, selected_points, prompt, negative_prompt, blending, invert_mask, control_strength, seed, randomize_seed, guidance_scale, num_inference_steps]
+     run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+ block.launch()
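The blurred-blending branch of `process` above composites the diffusion output back onto the original image through a feathered mask. A minimal numpy-only sketch of that compositing step (a simple box blur stands in for the app's `cv2.GaussianBlur`; the function names are illustrative, not part of the app):

```python
import numpy as np

def feather_mask(mask, k=5):
    """Crude box blur as a stand-in for the cv2.GaussianBlur call in app.py."""
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    out = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def blend(original, generated, mask, k=5):
    """Paste `generated` into `original` under a feathered binary mask (HxW of 0/1)."""
    soft = feather_mask(mask, k)[..., None]           # HxWx1, values in [0, 1]
    soft = 1 - (1 - mask[..., None]) * (1 - soft)     # same union trick as app.py: keep mask interior fully generated
    return (original * (1 - soft) + generated * soft).astype(original.dtype)
```

The union step matters: blurring alone would let original pixels bleed into the center of the inpainted region, while `1 - (1 - mask) * (1 - blurred)` feathers only the outward side of the boundary.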
examples/brushnet/src/example_1.jpg ADDED

Git LFS Details

  • SHA256: e62e0a7a029ec947c73f44bf1f66e368c3df5307329613cc46c9617a5a21cd36
  • Pointer size: 131 Bytes
  • Size of remote file: 125 kB
examples/brushnet/src/example_1_mask.jpg ADDED

Git LFS Details

  • SHA256: 55a44b4bef6a9eaf8baae42377e32630bb16df88746e67b3d0ec96b4f27cf2d8
  • Pointer size: 130 Bytes
  • Size of remote file: 15.4 kB
examples/brushnet/src/example_1_result.png ADDED

Git LFS Details

  • SHA256: ffa50f164f7ecda0c4272b315cb250b1471cc21fc5ef3e2fc8f37da1df371f4c
  • Pointer size: 131 Bytes
  • Size of remote file: 496 kB
examples/brushnet/src/example_2.jpg ADDED

Git LFS Details

  • SHA256: 70b7a3548380f03281f3e7bae7b5ba93ed04b8ef706902be65f595c4b8eea919
  • Pointer size: 131 Bytes
  • Size of remote file: 155 kB
examples/brushnet/src/example_2_mask.jpg ADDED

Git LFS Details

  • SHA256: 28a83d9627ed88edc147a5227e7dcf90e52d1ad2b5c854b2358ac582918b210b
  • Pointer size: 130 Bytes
  • Size of remote file: 32.4 kB
examples/brushnet/src/example_2_result.png ADDED

Git LFS Details

  • SHA256: 8f00e848b481a5e847787831eeb02de79c5a5d1804ab6f8739487ec1ac0a29ba
  • Pointer size: 131 Bytes
  • Size of remote file: 648 kB
examples/brushnet/src/example_3.jpg ADDED

Git LFS Details

  • SHA256: 3c600529f497e578f7511b570aa9360eb21eb8c7726e16dc895921cb54c93d73
  • Pointer size: 130 Bytes
  • Size of remote file: 99 kB
examples/brushnet/src/example_3_mask.jpg ADDED

Git LFS Details

  • SHA256: eeafb4afbd80a0b00a466486dff2e87b4ba40b9fdf22ecf692d2bd1fcffb832c
  • Pointer size: 130 Bytes
  • Size of remote file: 17.7 kB
examples/brushnet/src/example_3_result.png ADDED

Git LFS Details

  • SHA256: 6b1ffc04f36b4c368d8cba2717c060710d713088c9c585ef2827d0bb8b1b8a84
  • Pointer size: 131 Bytes
  • Size of remote file: 338 kB
examples/brushnet/src/example_4.jpeg ADDED

Git LFS Details

  • SHA256: 1fab6370d419292cf574e568fcb77dfb2416244aac028ef718711a20b1ee5a4d
  • Pointer size: 131 Bytes
  • Size of remote file: 120 kB
examples/brushnet/src/example_4_mask.jpg ADDED

Git LFS Details

  • SHA256: f27e8660b9233f355a102a5a461ba5ae38c2e25d693e25b785f7b071c3edd347
  • Pointer size: 130 Bytes
  • Size of remote file: 17.8 kB
examples/brushnet/src/example_4_result.png ADDED

Git LFS Details

  • SHA256: 487b7b94f5971f34bd7d5dad3a611cab9533f27c54460c5f6b91f8d4dab0eada
  • Pointer size: 131 Bytes
  • Size of remote file: 606 kB
examples/brushnet/src/example_5.jpg ADDED

Git LFS Details

  • SHA256: 359a42ec5c234e7258019590b15fd5a52cc7d69e61f68b2c15adf9cc6a885a3d
  • Pointer size: 131 Bytes
  • Size of remote file: 139 kB
examples/brushnet/src/example_5_mask.jpg ADDED

Git LFS Details

  • SHA256: ce860eef3f1084165cd6e3dcfb52afbcf9b4e8aef39f2686f5fcf6e0d44593a8
  • Pointer size: 130 Bytes
  • Size of remote file: 24 kB
examples/brushnet/src/example_5_result.png ADDED

Git LFS Details

  • SHA256: c3da762c9ac4ddfeb5e6d8be5c7658226c9e8a073411183ceb87a03b7d5203eb
  • Pointer size: 131 Bytes
  • Size of remote file: 610 kB
examples/brushnet/src/test_image.jpg ADDED

Git LFS Details

  • SHA256: 6e476ba24ff84825bc4c7b7c50b90ffc97dbb25606aff674e26bec7024eed8cc
  • Pointer size: 131 Bytes
  • Size of remote file: 208 kB
examples/brushnet/src/test_mask.jpg ADDED

Git LFS Details

  • SHA256: d3cff437cc9ff186851b9f627548bd280303eaef73b8b5ddbebc21c3f8e7cf47
  • Pointer size: 130 Bytes
  • Size of remote file: 11.3 kB
examples/brushnet/src/test_result.png ADDED

Git LFS Details

  • SHA256: 5f5bfd5c2e641bdf6d68fb8090f196dbfd1e7719a4aab0de237321d37cd95ac6
  • Pointer size: 131 Bytes
  • Size of remote file: 406 kB
requirements.txt ADDED
@@ -0,0 +1,19 @@
+ torch
+ torchvision
+ torchaudio
+ transformers>=4.25.1
+ gradio==3.50.0
+ ftfy
+ tensorboard
+ datasets
+ Pillow==9.5.0
+ opencv-python
+ imgaug
+ accelerate==0.20.3
+ image-reward
+ hpsv2
+ torchmetrics
+ open-clip-torch
+ clip
+ segment_anything
+ git+https://github.com/TencentARC/BrushNet.git
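One detail worth calling out from `app.py` above: `resize_image` scales the image so its short side matches the target resolution, then snaps both sides to multiples of 64 (a constraint of the Stable Diffusion UNet's downsampling). The size arithmetic in isolation (illustrative helper name, not part of the app):

```python
def snap_to_64(h, w, resolution=512):
    """Scale so the short side is about `resolution`, then round each side
    to the nearest multiple of 64, mirroring resize_image in app.py."""
    k = resolution / min(h, w)
    return int(round(h * k / 64.0)) * 64, int(round(w * k / 64.0)) * 64
```

For example, a 1024x768 upload becomes 704x512: the short side lands exactly on 512, and the long side rounds from 682.7 up to the nearest multiple of 64.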