ZhengChong committed on
Commit 1d2edaa · 1 Parent(s): 8676fc8

Update App Code

Files changed (36)
  1. .gitignore +8 -12
  2. README.md +21 -9
  3. app.py +364 -0
  4. model/SCHP/modules/__pycache__/__init__.cpython-39.pyc +0 -0
  5. model/SCHP/modules/__pycache__/bn.cpython-39.pyc +0 -0
  6. model/SCHP/modules/__pycache__/dense.cpython-39.pyc +0 -0
  7. model/SCHP/modules/__pycache__/functions.cpython-39.pyc +0 -0
  8. model/SCHP/modules/__pycache__/misc.cpython-39.pyc +0 -0
  9. model/SCHP/modules/__pycache__/residual.cpython-39.pyc +0 -0
  10. model/SCHP/networks/__pycache__/AugmentCE2P.cpython-39.pyc +0 -0
  11. model/SCHP/networks/__pycache__/__init__.cpython-39.pyc +0 -0
  12. model/SCHP/utils/__pycache__/__init__.cpython-39.pyc +0 -0
  13. model/SCHP/utils/__pycache__/transforms.cpython-39.pyc +0 -0
  14. model/cloth_masker.py +5 -4
  15. resource/demo/example/condition/.DS_Store +0 -0
  16. resource/demo/example/condition/overall/21744571_51588794_1000.jpg +3 -0
  17. resource/demo/example/condition/overall/22153949_52376342_1000.jpg +3 -0
  18. resource/demo/example/condition/overall/23962182_54027982_1000.jpg +3 -0
  19. resource/demo/example/condition/overall/24047235_54199143_1000.jpg +3 -0
  20. resource/demo/example/condition/person/baumu30483223c3_1719437121402_2-0._QL90_UX564_V12524t6_.jpg +3 -0
  21. resource/demo/example/condition/person/jbeeq301271a569_1719429971246_2-0._QL90_UX564_V12524t6_.jpg +3 -0
  22. resource/demo/example/condition/person/mison407622250d_1719258948458_2-0._QL90_UX564_V12524t6_.jpg +3 -0
  23. resource/demo/example/condition/person/mothr22044226e8_1718142523286_2-0._QL90_UX564_V12524t6_.jpg +3 -0
  24. resource/demo/example/condition/upper/21514384_52353349_1000.jpg +3 -0
  25. resource/demo/example/condition/upper/22790049_53294275_1000.jpg +3 -0
  26. resource/demo/example/condition/upper/23255574_53383833_1000.jpg +3 -0
  27. resource/demo/example/condition/upper/24083449_54173465_2048.jpg +3 -0
  28. resource/demo/example/person/.DS_Store +0 -0
  29. resource/demo/example/person/men/Simon_1.png +3 -0
  30. resource/demo/example/person/men/Yifeng_0.png +3 -0
  31. resource/demo/example/person/men/model_5.png +3 -0
  32. resource/demo/example/person/men/model_7.png +3 -0
  33. resource/demo/example/person/women/1-model_3.png +3 -0
  34. resource/demo/example/person/women/2-model_4.png +3 -0
  35. resource/demo/example/person/women/Eva_0.png +3 -0
  36. resource/demo/example/person/women/Yaqi_0.png +3 -0
.gitignore CHANGED
@@ -1,12 +1,8 @@
- model/__pycache__/attn_processor.cpython-39.pyc
- model/__pycache__/attn_processor.cpython-310.pyc
- model/__pycache__/cloth_masker.cpython-39.pyc
- model/__pycache__/pipeline.cpython-39.pyc
- model/__pycache__/utils.cpython-39.pyc
- model/__pycache__/utils.cpython-310.pyc
- model/DensePose/__pycache__/__init__.cpython-39.pyc
- model/DensePose/__pycache__/__init__.cpython-310.pyc
- model/DensePose/__pycache__/__init__.cpython-312.pyc
- model/SCHP/__pycache__/__init__.cpython-39.pyc
- model/SCHP/__pycache__/__init__.cpython-310.pyc
- index.html
+ model/__pycache__
+ model/DensePose/__pycache__
+ model/SCHP/__pycache__
+ index.html
+ resource/demo/output
+ resource/demo/example/.DS_Store
+ model/SCHP/*/__pycache__
+ densepose_

README.md CHANGED
@@ -2,22 +2,23 @@
 
  # <center> 🐈 CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models
 
- <p align="center">
- <a href="https://github.com/Zheng-Chong/CatVTON">
+ <div style="display: flex; justify-content: center; align-items: center;">
+ <a href="https://github.com/Zheng-Chong/CatVTON" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/arXiv-Paper(soon)-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'>
  </a>
- <a href="http://120.76.142.206:8888">
+ <a href="http://120.76.142.206:8888" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/Demo-Gradio-orange?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
  </a>
- <a href='https://huggingface.co/zhengchong/CatVTON'>
+ <a href='https://huggingface.co/zhengchong/CatVTON' style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
  </a>
- <a href="https://github.com/Zheng-Chong/CatVTON">
+ <a href="https://github.com/Zheng-Chong/CatVTON" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
  </a>
- <a href="https://github.com/Zheng-Chong/CatVTON/LICENCE"><img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'>
+ <a href="https://github.com/Zheng-Chong/CatVTON/LICENCE" style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'>
  </a>
- </p>
+ </div>
 
  <div align="center">
  <img src="resource/img/teaser.jpg" width="100%" height="100%"/>
@@ -29,10 +30,21 @@
 
 
  ## Updates
- - **`2024/7/21`**: Our **Inference Code** and [**🤗Weights**](https://huggingface.co/zhengchong/CatVTON) are released.
- - **`2024/7/11`**: [**Online Demo**](http://120.76.142.206:8888) is released.
+ - **`2024/7/22`**: Our [**App Code**](https://github.com/Zheng-Chong/CatVTON/blob/main/app.py) is released; deploy and enjoy CatVTON on your own machine 🎉!
+ - **`2024/7/21`**: Our [**Inference Code**](https://github.com/Zheng-Chong/CatVTON/blob/main/inference.py) and [**Weights** 🤗](https://huggingface.co/zhengchong/CatVTON) are released.
+ - **`2024/7/11`**: Our [**Online Demo**](http://120.76.142.206:8888) is released 😁.
+
 
+ ## Deployment (Gradio App)
+ To deploy the Gradio App for CatVTON on your own machine, just run the following command; checkpoints will be downloaded automatically from HuggingFace.
 
+ ```shell
+ CUDA_VISIBLE_DEVICES=0 python app.py \
+ --output_dir="resource/demo/output" \
+ --mixed_precision="bf16" \
+ --allow_tf32
+ ```
+ When using `bf16` precision, generating results at a resolution of `1024x768` requires only about `8G` of VRAM.
 
  ## Inference
  ### Data Preparation
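The deployed app saves every generation under `--output_dir` in a date-based layout. A minimal pure-stdlib sketch of the naming scheme used by `app.py` in this commit (the helper name `result_save_path` is ours, for illustration only):

```python
import os
from datetime import datetime

def result_save_path(output_dir: str, now: datetime) -> str:
    # app.py formats a 14-digit timestamp, then splits it into
    # a YYYYMMDD folder and an HHMMSS.png file name.
    date_str = now.strftime("%Y%m%d%H%M%S")
    return os.path.join(output_dir, date_str[:8], date_str[8:] + ".png")

# e.g. resource/demo/output/20240722/093005.png on POSIX systems
print(result_save_path("resource/demo/output", datetime(2024, 7, 22, 9, 30, 5)))
```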
app.py ADDED
@@ -0,0 +1,364 @@
+ import argparse
+ import os
+ from datetime import datetime
+ 
+ import gradio as gr
+ import numpy as np
+ import torch
+ from diffusers.image_processor import VaeImageProcessor
+ from huggingface_hub import snapshot_download
+ from PIL import Image
+ 
+ from model.cloth_masker import AutoMasker, vis_mask
+ from model.pipeline import CatVTONPipeline
+ from utils import init_weight_dtype, resize_and_crop, resize_and_padding
+ 
+ 
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Gradio demo for CatVTON virtual try-on.")
+     parser.add_argument(
+         "--base_model_path",
+         type=str,
+         default="runwayml/stable-diffusion-inpainting",
+         help=(
+             "The path to the base model to use for evaluation. This can be a local path or a model identifier from the Model Hub."
+         ),
+     )
+     parser.add_argument(
+         "--resume_path",
+         type=str,
+         default="zhengchong/CatVTON",
+         help=(
+             "The path to the checkpoint of the trained try-on model."
+         ),
+     )
+     parser.add_argument(
+         "--output_dir",
+         type=str,
+         default="resource/demo/output",
+         help="The output directory where the model predictions will be written.",
+     )
+ 
+     parser.add_argument(
+         "--width",
+         type=int,
+         default=768,
+         help=(
+             "The resolution for input images; all the images in the train/validation dataset will be resized to this"
+             " resolution."
+         ),
+     )
+     parser.add_argument(
+         "--height",
+         type=int,
+         default=1024,
+         help=(
+             "The resolution for input images; all the images in the train/validation dataset will be resized to this"
+             " resolution."
+         ),
+     )
+     parser.add_argument(
+         "--repaint",
+         action="store_true",
+         help="Whether to repaint the result image with the original background."
+     )
+     parser.add_argument(
+         "--allow_tf32",
+         action="store_true",
+         default=True,
+         help=(
+             "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up inference. For more information, see"
+             " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
+         ),
+     )
+     parser.add_argument(
+         "--mixed_precision",
+         type=str,
+         default="bf16",
+         choices=["no", "fp16", "bf16"],
+         help=(
+             "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). bf16 requires PyTorch >="
+             " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the"
+             " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
+         ),
+     )
+     # parser.add_argument(
+     #     "--enable_condition_noise",
+     #     action="store_true",
+     #     default=True,
+     #     help="Whether or not to enable condition noise.",
+     # )
+ 
+     args = parser.parse_args()
+     # Let an externally set LOCAL_RANK (e.g. from a distributed launcher) override
+     # the parsed value; getattr guards against the undefined --local_rank argument.
+     env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
+     if env_local_rank != -1 and env_local_rank != getattr(args, "local_rank", -1):
+         args.local_rank = env_local_rank
+ 
+     return args
+ 
+ 
+ def image_grid(imgs, rows, cols):
+     assert len(imgs) == rows * cols
+ 
+     w, h = imgs[0].size
+     grid = Image.new("RGB", size=(cols * w, rows * h))
+ 
+     for i, img in enumerate(imgs):
+         grid.paste(img, box=(i % cols * w, i // cols * h))
+     return grid
+ 
+ 
+ args = parse_args()
+ repo_path = snapshot_download(repo_id=args.resume_path)
+ # Pipeline
+ pipeline = CatVTONPipeline(
+     base_ckpt=args.base_model_path,
+     attn_ckpt=repo_path,
+     attn_ckpt_version="mix",
+     weight_dtype=init_weight_dtype(args.mixed_precision),
+     use_tf32=args.allow_tf32,
+     device='cuda'
+ )
+ # AutoMasker
+ mask_processor = VaeImageProcessor(vae_scale_factor=8, do_normalize=False, do_binarize=True, do_convert_grayscale=True)
+ automasker = AutoMasker(
+     densepose_ckpt=os.path.join(repo_path, "DensePose"),
+     schp_ckpt=os.path.join(repo_path, "SCHP"),
+     device='cuda',
+ )
+ 
+ 
+ def submit_function(
+     person_image,
+     cloth_image,
+     cloth_type,
+     num_inference_steps,
+     guidance_scale,
+     seed,
+     show_type
+ ):
+     person_image, mask = person_image["background"], person_image["layers"][0]
+     mask = Image.open(mask).convert("L")
+     if len(np.unique(np.array(mask))) == 1:
+         mask = None
+     else:
+         mask = np.array(mask)
+         mask[mask > 0] = 255
+         mask = Image.fromarray(mask)
+ 
+     tmp_folder = args.output_dir
+     date_str = datetime.now().strftime("%Y%m%d%H%M%S")
+     result_save_path = os.path.join(tmp_folder, date_str[:8], date_str[8:] + ".png")
+     if not os.path.exists(os.path.join(tmp_folder, date_str[:8])):
+         os.makedirs(os.path.join(tmp_folder, date_str[:8]))
+ 
+     generator = None
+     if seed != -1:
+         generator = torch.Generator(device='cuda').manual_seed(seed)
+ 
+     person_image = Image.open(person_image).convert("RGB")
+     cloth_image = Image.open(cloth_image).convert("RGB")
+     person_image = resize_and_crop(person_image, (args.width, args.height))
+     cloth_image = resize_and_padding(cloth_image, (args.width, args.height))
+ 
+     # Process mask
+     if mask is not None:
+         mask = resize_and_crop(mask, (args.width, args.height))
+     else:
+         mask = automasker(
+             person_image,
+             cloth_type
+         )['mask']
+     mask = mask_processor.blur(mask, blur_factor=9)
+ 
+     # Inference
+     try:
+         result_image = pipeline(
+             image=person_image,
+             condition_image=cloth_image,
+             mask=mask,
+             num_inference_steps=num_inference_steps,
+             guidance_scale=guidance_scale,
+             generator=generator
+         )[0]
+     except Exception as e:
+         raise gr.Error(
+             "An error occurred. Please try again later: {}".format(e)
+         )
+ 
+     # Post-process
+     masked_person = vis_mask(person_image, mask)
+     save_result_image = image_grid([person_image, masked_person, cloth_image, result_image], 1, 4)
+     save_result_image.save(result_save_path)
+     if show_type == "result only":
+         return result_image
+     else:
+         width, height = person_image.size
+         if show_type == "input & result":
+             condition_width = width // 2
+             conditions = image_grid([person_image, cloth_image], 2, 1)
+         else:
+             condition_width = width // 3
+             conditions = image_grid([person_image, masked_person, cloth_image], 3, 1)
+         conditions = conditions.resize((condition_width, height), Image.NEAREST)
+         new_result_image = Image.new("RGB", (width + condition_width + 5, height))
+         new_result_image.paste(conditions, (0, 0))
+         new_result_image.paste(result_image, (condition_width + 5, 0))
+     return new_result_image
+ 
+ 
+ def person_example_fn(image_path):
+     return image_path
+ 
+ 
+ HEADER = """
+ <h1 style="text-align: center;"> 🐈 CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models </h1>
+ <div style="display: flex; justify-content: center; align-items: center;">
+ <a href="https://github.com/Zheng-Chong/CatVTON" style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/arXiv-Paper(soon)-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'>
+ </a>
+ <a href="http://120.76.142.206:8888" style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/Demo-Gradio-orange?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
+ </a>
+ <a href='https://huggingface.co/zhengchong/CatVTON' style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
+ </a>
+ <a href="https://github.com/Zheng-Chong/CatVTON" style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
+ </a>
+ <a href="https://github.com/Zheng-Chong/CatVTON/LICENCE" style="margin: 0 2px;">
+ <img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'>
+ </a>
+ </div>
+ """
+ 
+ 
+ def app_gradio():
+     with gr.Blocks() as demo:
+         gr.Markdown(HEADER)
+         with gr.Row():
+             with gr.Column(scale=1, min_width=350):
+                 with gr.Row():
+                     image_path = gr.Image(
+                         type="filepath",
+                         interactive=True,
+                         visible=False,
+                     )
+                     person_image = gr.ImageEditor(
+                         interactive=True, label="Person Image", type="filepath"
+                     )
+ 
+                 with gr.Row():
+                     with gr.Column(scale=1, min_width=230):
+                         cloth_image = gr.Image(
+                             interactive=True, label="Condition Image", type="filepath"
+                         )
+                     with gr.Column(scale=1, min_width=120):
+                         gr.Markdown(
+                             '<span style="color: #808080; font-size: small;">Two ways to provide a mask:<br>1. Upload the person image and use the `🖌️` above to draw the mask (higher priority)<br>2. Select the `Try-On Cloth Type` to generate it automatically</span>'
+                         )
+                         cloth_type = gr.Radio(
+                             label="Try-On Cloth Type",
+                             choices=["upper", "lower", "overall"],
+                             value="upper",
+                         )
+ 
+                 gr.Markdown(
+                     '<span style="color: #808080; font-size: small;">Advanced options can adjust details:<br>1. `Inference Step` may enhance details;<br>2. `CFG` is highly correlated with saturation;<br>3. `Random seed` may improve pseudo-shadow.</span>'
+                 )
+                 with gr.Accordion("Advanced Options", open=False):
+                     num_inference_steps = gr.Slider(
+                         label="Inference Step", minimum=10, maximum=100, step=5, value=50
+                     )
+                     # Guidance Scale
+                     guidance_scale = gr.Slider(
+                         label="CFG Strength", minimum=0.0, maximum=7.5, step=0.5, value=2.5
+                     )
+                     # Random Seed
+                     seed = gr.Slider(
+                         label="Seed", minimum=-1, maximum=10000, step=1, value=42
+                     )
+                     show_type = gr.Radio(
+                         label="Show Type",
+                         choices=["result only", "input & result", "input & mask & result"],
+                         value="input & mask & result",
+                     )
+                 submit = gr.Button("Submit")
+             with gr.Column(scale=2, min_width=500):
+                 result_image = gr.Image(interactive=False, label="Result")
+                 with gr.Row():
+                     # Photo Examples
+                     root_path = "resource/demo/example"
+                     with gr.Column():
+                         men_exm = gr.Examples(
+                             examples=[
+                                 os.path.join(root_path, "person", "men", _)
+                                 for _ in os.listdir(os.path.join(root_path, "person", "men"))
+                             ],
+                             examples_per_page=4,
+                             inputs=image_path,
+                             label="Person Examples ①",
+                         )
+                         women_exm = gr.Examples(
+                             examples=[
+                                 os.path.join(root_path, "person", "women", _)
+                                 for _ in os.listdir(os.path.join(root_path, "person", "women"))
+                             ],
+                             examples_per_page=4,
+                             inputs=image_path,
+                             label="Person Examples ②",
+                         )
+                         gr.Markdown(
+                             '<span style="color: #808080; font-size: small;">*Person examples come from the demos of <a href="https://huggingface.co/spaces/levihsu/OOTDiffusion">OOTDiffusion</a> and <a href="https://www.outfitanyone.org">OutfitAnyone</a>.</span>'
+                         )
+                     with gr.Column():
+                         condition_upper_exm = gr.Examples(
+                             examples=[
+                                 os.path.join(root_path, "condition", "upper", _)
+                                 for _ in os.listdir(os.path.join(root_path, "condition", "upper"))
+                             ],
+                             examples_per_page=4,
+                             inputs=cloth_image,
+                             label="Condition Upper Examples",
+                         )
+                         condition_overall_exm = gr.Examples(
+                             examples=[
+                                 os.path.join(root_path, "condition", "overall", _)
+                                 for _ in os.listdir(os.path.join(root_path, "condition", "overall"))
+                             ],
+                             examples_per_page=4,
+                             inputs=cloth_image,
+                             label="Condition Overall Examples",
+                         )
+                         condition_person_exm = gr.Examples(
+                             examples=[
+                                 os.path.join(root_path, "condition", "person", _)
+                                 for _ in os.listdir(os.path.join(root_path, "condition", "person"))
+                             ],
+                             examples_per_page=4,
+                             inputs=cloth_image,
+                             label="Condition Reference Person Examples",
+                         )
+                         gr.Markdown(
+                             '<span style="color: #808080; font-size: small;">*Condition examples come from the Internet.</span>'
+                         )
+ 
+         image_path.change(
+             person_example_fn, inputs=image_path, outputs=person_image
+         )
+ 
+         submit.click(
+             submit_function,
+             [
+                 person_image,
+                 cloth_image,
+                 cloth_type,
+                 num_inference_steps,
+                 guidance_scale,
+                 seed,
+                 show_type,
+             ],
+             result_image,
+         )
+     demo.queue().launch(share=True, show_error=True)
+ 
+ 
+ if __name__ == "__main__":
+     app_gradio()
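The manual-mask handling in `submit_function` above can be paraphrased in pure Python (the real code operates on a PIL image via NumPy; `normalize_user_mask` and the flat pixel list are illustrative):

```python
def normalize_user_mask(pixels):
    """Paraphrase of submit_function's rule on a flat list of grayscale
    values: a uniform (untouched) drawing layer means 'no manual mask',
    so the app falls back to AutoMasker; otherwise every non-zero
    stroke value is binarized to 255."""
    if len(set(pixels)) == 1:
        return None  # nothing drawn: use the automatic mask instead
    return [255 if p > 0 else 0 for p in pixels]

print(normalize_user_mask([0, 0, 0, 0]))     # None
print(normalize_user_mask([0, 37, 0, 200]))  # [0, 255, 0, 255]
```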
model/SCHP/modules/__pycache__/__init__.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/__init__.cpython-39.pyc and b/model/SCHP/modules/__pycache__/__init__.cpython-39.pyc differ
 
model/SCHP/modules/__pycache__/bn.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/bn.cpython-39.pyc and b/model/SCHP/modules/__pycache__/bn.cpython-39.pyc differ
 
model/SCHP/modules/__pycache__/dense.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/dense.cpython-39.pyc and b/model/SCHP/modules/__pycache__/dense.cpython-39.pyc differ
 
model/SCHP/modules/__pycache__/functions.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/functions.cpython-39.pyc and b/model/SCHP/modules/__pycache__/functions.cpython-39.pyc differ
 
model/SCHP/modules/__pycache__/misc.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/misc.cpython-39.pyc and b/model/SCHP/modules/__pycache__/misc.cpython-39.pyc differ
 
model/SCHP/modules/__pycache__/residual.cpython-39.pyc CHANGED
Binary files a/model/SCHP/modules/__pycache__/residual.cpython-39.pyc and b/model/SCHP/modules/__pycache__/residual.cpython-39.pyc differ
 
model/SCHP/networks/__pycache__/AugmentCE2P.cpython-39.pyc CHANGED
Binary files a/model/SCHP/networks/__pycache__/AugmentCE2P.cpython-39.pyc and b/model/SCHP/networks/__pycache__/AugmentCE2P.cpython-39.pyc differ
 
model/SCHP/networks/__pycache__/__init__.cpython-39.pyc CHANGED
Binary files a/model/SCHP/networks/__pycache__/__init__.cpython-39.pyc and b/model/SCHP/networks/__pycache__/__init__.cpython-39.pyc differ
 
model/SCHP/utils/__pycache__/__init__.cpython-39.pyc CHANGED
Binary files a/model/SCHP/utils/__pycache__/__init__.cpython-39.pyc and b/model/SCHP/utils/__pycache__/__init__.cpython-39.pyc differ
 
model/SCHP/utils/__pycache__/transforms.cpython-39.pyc CHANGED
Binary files a/model/SCHP/utils/__pycache__/transforms.cpython-39.pyc and b/model/SCHP/utils/__pycache__/transforms.cpython-39.pyc differ
 
model/cloth_masker.py CHANGED
@@ -213,9 +213,8 @@ class AutoMasker:
             (part_mask_of(['Left-arm', 'Right-arm', 'Left-leg', 'Right-leg'], schp_atr_mask, ATR_MAPPING) | \
             part_mask_of(['Left-arm', 'Right-arm', 'Left-leg', 'Right-leg'], schp_lip_mask, LIP_MAPPING))
         face_protect_area = part_mask_of('Face', schp_lip_mask, LIP_MAPPING)
-        accessory_protect_area = part_mask_of((accessory_parts := ['Hat', 'Glove', 'Sunglasses', 'Bag', 'Left-shoe', 'Right-shoe', 'Scarf', 'Socks']), schp_lip_mask, LIP_MAPPING) | \
-            part_mask_of(accessory_parts, schp_atr_mask, ATR_MAPPING)
-        strong_protect_area = hands_protect_area | face_protect_area | accessory_protect_area
+
+        strong_protect_area = hands_protect_area | face_protect_area
 
         # Weak Protect Area (Hair, Irrelevant Clothes, Body Parts)
         body_protect_area = part_mask_of(PROTECT_BODY_PARTS[part], schp_lip_mask, LIP_MAPPING) | part_mask_of(PROTECT_BODY_PARTS[part], schp_atr_mask, ATR_MAPPING)
@@ -223,7 +222,9 @@ class AutoMasker:
             part_mask_of(['Hair'], schp_atr_mask, ATR_MAPPING)
         cloth_protect_area = part_mask_of(PROTECT_CLOTH_PARTS[part]['LIP'], schp_lip_mask, LIP_MAPPING) | \
             part_mask_of(PROTECT_CLOTH_PARTS[part]['ATR'], schp_atr_mask, ATR_MAPPING)
-        weak_protect_area = body_protect_area | cloth_protect_area | hair_protect_area | strong_protect_area
+        accessory_protect_area = part_mask_of((accessory_parts := ['Hat', 'Glove', 'Sunglasses', 'Bag', 'Left-shoe', 'Right-shoe', 'Scarf', 'Socks']), schp_lip_mask, LIP_MAPPING) | \
+            part_mask_of(accessory_parts, schp_atr_mask, ATR_MAPPING)
+        weak_protect_area = body_protect_area | cloth_protect_area | hair_protect_area | strong_protect_area | accessory_protect_area
 
         # Mask Area
         strong_mask_area = part_mask_of(MASK_CLOTH_PARTS[part], schp_lip_mask, LIP_MAPPING) | \
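The change above demotes accessory regions from the strong to the weak protect area. A toy sketch of the boolean composition, where 1-D lists stand in for the real 2-D SCHP part masks (positions 0..3 are hypothetical hand/face/accessory/hair pixels):

```python
def union(*masks):
    # Elementwise OR across binary masks, mirroring the `|` composition above.
    return [any(vals) for vals in zip(*masks)]

hands       = [1, 0, 0, 0]
face        = [0, 1, 0, 0]
accessories = [0, 0, 1, 0]
hair        = [0, 0, 0, 1]

# After this commit, accessories are excluded from the strong protect area
# and folded into the weak protect area instead.
strong_protect_area = union(hands, face)
weak_protect_area = union(hair, strong_protect_area, accessories)

print(strong_protect_area)  # [True, True, False, False]
print(weak_protect_area)    # [True, True, True, True]
```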
resource/demo/example/condition/.DS_Store ADDED
Binary file (6.15 kB)
 
resource/demo/example/condition/overall/21744571_51588794_1000.jpg ADDED

Git LFS Details

  • SHA256: 735397cb34a0c9c941739633951f10a6dde23a6005b2a4928f7fe50a884feeea
  • Pointer size: 131 Bytes
  • Size of remote file: 173 kB
resource/demo/example/condition/overall/22153949_52376342_1000.jpg ADDED

Git LFS Details

  • SHA256: bc31d0626e73d78a1f39111acf6d24af29c1d5772b1abe3954781cd913b5f16a
  • Pointer size: 130 Bytes
  • Size of remote file: 54.5 kB
resource/demo/example/condition/overall/23962182_54027982_1000.jpg ADDED

Git LFS Details

  • SHA256: 059b6b391255d449a7ca5cf8bf9361e90c08fc5d54d0d8398a31bcb4ee09d8f4
  • Pointer size: 131 Bytes
  • Size of remote file: 145 kB
resource/demo/example/condition/overall/24047235_54199143_1000.jpg ADDED

Git LFS Details

  • SHA256: 9ca958c25d733a86e6e28c6b2244ba860b425f3471803ecc99769bb736eb2d61
  • Pointer size: 130 Bytes
  • Size of remote file: 58.4 kB
resource/demo/example/condition/person/baumu30483223c3_1719437121402_2-0._QL90_UX564_V12524t6_.jpg ADDED

Git LFS Details

  • SHA256: 9373d531e44e4cf05cba87e6a80e74bbb0aba992e1432486bb1485be13d26b63
  • Pointer size: 131 Bytes
  • Size of remote file: 189 kB
resource/demo/example/condition/person/jbeeq301271a569_1719429971246_2-0._QL90_UX564_V12524t6_.jpg ADDED

Git LFS Details

  • SHA256: e07788d985106837c53d92b4c12b0a27fc3377174b3d7d16177e3d3793719dc2
  • Pointer size: 130 Bytes
  • Size of remote file: 88.7 kB
resource/demo/example/condition/person/mison407622250d_1719258948458_2-0._QL90_UX564_V12524t6_.jpg ADDED

Git LFS Details

  • SHA256: ef9a1b6ec7f5c94e0b5dc27bd76421e06d0643eb63f6a5fb3169744dbcf4bb4c
  • Pointer size: 131 Bytes
  • Size of remote file: 136 kB
resource/demo/example/condition/person/mothr22044226e8_1718142523286_2-0._QL90_UX564_V12524t6_.jpg ADDED

Git LFS Details

  • SHA256: 9d52cbc44d3c3970015b8e1f99715f7f270e1dd6e460ff274e2ea24fc427dfb2
  • Pointer size: 131 Bytes
  • Size of remote file: 102 kB
resource/demo/example/condition/upper/21514384_52353349_1000.jpg ADDED

Git LFS Details

  • SHA256: fd74b6db0fea913db3a18e4a154d4035c0149c9250f7e1335dbd6077c0318064
  • Pointer size: 131 Bytes
  • Size of remote file: 196 kB
resource/demo/example/condition/upper/22790049_53294275_1000.jpg ADDED

Git LFS Details

  • SHA256: 2e8990f31c56829e2dd1460dd905c619ab1dd2b889b6be28770670f84d32541a
  • Pointer size: 131 Bytes
  • Size of remote file: 134 kB
resource/demo/example/condition/upper/23255574_53383833_1000.jpg ADDED

Git LFS Details

  • SHA256: 332036128caffe010e377f7ecb1852791804c53ebc4865d377339b9762c6689f
  • Pointer size: 130 Bytes
  • Size of remote file: 94 kB
resource/demo/example/condition/upper/24083449_54173465_2048.jpg ADDED

Git LFS Details

  • SHA256: 3929f83104c547bcf4a74985999438c05f887d97445c700029414466680eeee3
  • Pointer size: 131 Bytes
  • Size of remote file: 383 kB
resource/demo/example/person/.DS_Store ADDED
Binary file (6.15 kB)
 
resource/demo/example/person/men/Simon_1.png ADDED

Git LFS Details

  • SHA256: 3cb43bbdd3cfa852338d3dfb3d9852595e28039fd7afd5b19c0bd433e69099f7
  • Pointer size: 131 Bytes
  • Size of remote file: 816 kB
resource/demo/example/person/men/Yifeng_0.png ADDED

Git LFS Details

  • SHA256: 3decfedeed705e2529326b1600e07f86f317a614832dfac43c75534e5ff832e3
  • Pointer size: 131 Bytes
  • Size of remote file: 969 kB
resource/demo/example/person/men/model_5.png ADDED

Git LFS Details

  • SHA256: 5faaea84635da215bd0819cf5ce65512ca1b742c39ee8fd67176b19e084ed872
  • Pointer size: 131 Bytes
  • Size of remote file: 638 kB
resource/demo/example/person/men/model_7.png ADDED

Git LFS Details

  • SHA256: 583995b49ed1b40834822c9f3b87086b90d68a4c4fcbf6709f7cd1f42450ee56
  • Pointer size: 131 Bytes
  • Size of remote file: 817 kB
resource/demo/example/person/women/1-model_3.png ADDED

Git LFS Details

  • SHA256: 4043d799daf067546952e4ee561c853a231ee3258a3ba99c8c63b02e7e664e68
  • Pointer size: 131 Bytes
  • Size of remote file: 856 kB
resource/demo/example/person/women/2-model_4.png ADDED

Git LFS Details

  • SHA256: 005b4ff0b4cb78e0e165330ea5f329e404df9d739c7728285590675b3b43dbf9
  • Pointer size: 131 Bytes
  • Size of remote file: 761 kB
resource/demo/example/person/women/Eva_0.png ADDED

Git LFS Details

  • SHA256: b3cc8e2f9493b4665b12d7803a4f4517fbe79f833f2ef4427bca99d0100bdf8f
  • Pointer size: 131 Bytes
  • Size of remote file: 843 kB
resource/demo/example/person/women/Yaqi_0.png ADDED

Git LFS Details

  • SHA256: 7cc0bf10cedf528a4530a89d565eb35ac04db7130e459594f11056e48754dc61
  • Pointer size: 131 Bytes
  • Size of remote file: 963 kB