draca7600 xinsir committed on
Commit 3492d85 · 0 Parent(s)

Duplicate from xinsir/controlnet-openpose-sdxl-1.0


Co-authored-by: qi <xinsir@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ masonry0.webp filter=lfs diff=lfs merge=lfs -text
+ masonry_real.webp filter=lfs diff=lfs merge=lfs -text
000001_scribble_concat.webp ADDED
000003_scribble_concat.webp ADDED
000005_scribble_concat.webp ADDED
000008_scribble_concat.webp ADDED
000010_scribble_concat.webp ADDED
000015_scribble_concat.webp ADDED
000024_scribble_concat.webp ADDED
000028_scribble_concat.webp ADDED
000030_scribble_concat.webp ADDED
000031_scribble_concat.webp ADDED
000042_scribble_concat.webp ADDED
000044_scribble_concat.webp ADDED
000047_scribble_concat.webp ADDED
000048_scribble_concat.webp ADDED
000083_scribble_concat.webp ADDED
000101_scribble_concat.webp ADDED
000127_scribble_concat.webp ADDED
000128_scribble_concat.webp ADDED
000155_scribble_concat.webp ADDED
000180_scribble_concat.webp ADDED
README.md ADDED
@@ -0,0 +1,216 @@
+ ---
+ license: apache-2.0
+ tags:
+ - openpose
+ - controlnet
+ - diffusers
+ - controlnet-openpose-sdxl-1.0
+ - text_to_image
+ pipeline_tag: text-to-image
+ ---
+ # ***State-of-the-art ControlNet-openpose-sdxl-1.0 model; below are results for Midjourney and anime styles, shown for demonstration***
+ ![images](./masonry_real.webp)
+ ![images](./masonry0.webp)
+
+
+ ### controlnet-openpose-sdxl-1.0
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** xinsir
+ - **Model type:** ControlNet_SDXL
+ - **License:** apache-2.0
+ - **Finetuned from model [optional]:** stabilityai/stable-diffusion-xl-base-1.0
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Paper [optional]:** https://arxiv.org/abs/2302.05543
+
+ ### Examples
+ ![images10](./000010_scribble_concat.webp)
+ ![images20](./000024_scribble_concat.webp)
+ ![images30](./000028_scribble_concat.webp)
+ ![images40](./000030_scribble_concat.webp)
+ ![images50](./000044_scribble_concat.webp)
+ ![images60](./000101_scribble_concat.webp)
+ ![images70](./000127_scribble_concat.webp)
+ ![images80](./000128_scribble_concat.webp)
+ ![images90](./000155_scribble_concat.webp)
+ ![images99](./000180_scribble_concat.webp)
+
+ ![images0](./000001_scribble_concat.webp)
+ ![images1](./000003_scribble_concat.webp)
+ ![images2](./000005_scribble_concat.webp)
+ ![images3](./000008_scribble_concat.webp)
+ ![images4](./000015_scribble_concat.webp)
+ ![images5](./000031_scribble_concat.webp)
+ ![images6](./000042_scribble_concat.webp)
+ ![images7](./000047_scribble_concat.webp)
+ ![images8](./000048_scribble_concat.webp)
+ ![images9](./000083_scribble_concat.webp)
+
+ ## Replace the default draw-pose function to get better results
+ Thanks to feiyuuu for reporting the problem. When using the default pose lines, performance may be unstable. This is because the pose labels were drawn with thicker lines during training to look better.
+ This difference can be fixed with the following method:
+
+ Find util.py in the controlnet_aux Python package; the path usually looks like: /your anaconda3 path/envs/your env name/lib/python3.8/site-packages/controlnet_aux/open_pose/util.py
+ Replace the draw_bodypose function with the following code:
+ ```python
+ def draw_bodypose(canvas: np.ndarray, keypoints: List[Keypoint]) -> np.ndarray:
+     """
+     Draw keypoints and limbs representing body pose on a given canvas.
+
+     Args:
+         canvas (np.ndarray): A 3D numpy array representing the canvas (image) on which to draw the body pose.
+         keypoints (List[Keypoint]): A list of Keypoint objects representing the body keypoints to be drawn.
+
+     Returns:
+         np.ndarray: A 3D numpy array representing the modified canvas with the drawn body pose.
+
+     Note:
+         The function expects the x and y coordinates of the keypoints to be normalized between 0 and 1.
+     """
+     H, W, C = canvas.shape
+
+     # scale line thickness with image size
+     if max(W, H) < 500:
+         ratio = 1.0
+     elif max(W, H) >= 500 and max(W, H) < 1000:
+         ratio = 2.0
+     elif max(W, H) >= 1000 and max(W, H) < 2000:
+         ratio = 3.0
+     elif max(W, H) >= 2000 and max(W, H) < 3000:
+         ratio = 4.0
+     elif max(W, H) >= 3000 and max(W, H) < 4000:
+         ratio = 5.0
+     elif max(W, H) >= 4000 and max(W, H) < 5000:
+         ratio = 6.0
+     else:
+         ratio = 7.0
+
+     stickwidth = 4
+
+     limbSeq = [
+         [2, 3], [2, 6], [3, 4], [4, 5],
+         [6, 7], [7, 8], [2, 9], [9, 10],
+         [10, 11], [2, 12], [12, 13], [13, 14],
+         [2, 1], [1, 15], [15, 17], [1, 16],
+         [16, 18],
+     ]
+
+     colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0],
+               [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255],
+               [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]
+
+     for (k1_index, k2_index), color in zip(limbSeq, colors):
+         keypoint1 = keypoints[k1_index - 1]
+         keypoint2 = keypoints[k2_index - 1]
+
+         if keypoint1 is None or keypoint2 is None:
+             continue
+
+         Y = np.array([keypoint1.x, keypoint2.x]) * float(W)
+         X = np.array([keypoint1.y, keypoint2.y]) * float(H)
+         mX = np.mean(X)
+         mY = np.mean(Y)
+         length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
+         angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
+         polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), int(stickwidth * ratio)), int(angle), 0, 360, 1)
+         cv2.fillConvexPoly(canvas, polygon, [int(float(c) * 0.6) for c in color])
+
+     for keypoint, color in zip(keypoints, colors):
+         if keypoint is None:
+             continue
+
+         x, y = keypoint.x, keypoint.y
+         x = int(x * W)
+         y = int(y * H)
+         cv2.circle(canvas, (int(x), int(y)), int(4 * ratio), color, thickness=-1)
+
+     return canvas
+ ```
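The size-dependent thickness ladder in the function above follows a simple pattern. A minimal standalone sketch of it, assuming the (hypothetical) helper name `stick_ratio`, which is not part of controlnet_aux:

```python
def stick_ratio(width: int, height: int) -> float:
    """Thickness multiplier equivalent to the if/elif ladder in draw_bodypose."""
    s = max(width, height)
    if s < 500:
        return 1.0
    # 500-999 -> 2.0, 1000-1999 -> 3.0, ..., capped at 7.0 from 5000 up
    return float(min(s // 1000 + 2, 7))
```

Keeping the multiplier roughly proportional to image size is what makes the drawn skeleton match the thicker pose lines used during training.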
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
+ from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
+ from controlnet_aux import OpenposeDetector
+ from PIL import Image
+ import torch
+ import numpy as np
+ import cv2
+
+ controlnet_conditioning_scale = 1.0
+ prompt = "your prompt, the longer the better, you can describe it in as much detail as possible"
+ negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
+
+ eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
+
+ controlnet = ControlNetModel.from_pretrained(
+     "xinsir/controlnet-openpose-sdxl-1.0",
+     torch_dtype=torch.float16
+ )
+
+ # when testing with another base model, you need to change the vae as well
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+
+ pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-xl-base-1.0",
+     controlnet=controlnet,
+     vae=vae,
+     safety_checker=None,
+     torch_dtype=torch.float16,
+     scheduler=eulera_scheduler,
+ )
+
+ processor = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
+
+ controlnet_img = cv2.imread("your image path")
+ controlnet_img = processor(controlnet_img, hand_and_face=False, output_type='cv2')
+
+ # resize the image to 1024 * 1024 or the same bucket resolution to get the best performance
+ height, width, _ = controlnet_img.shape
+ ratio = np.sqrt(1024. * 1024. / (width * height))
+ new_width, new_height = int(width * ratio), int(height * ratio)
+ controlnet_img = cv2.resize(controlnet_img, (new_width, new_height))
+ controlnet_img = Image.fromarray(controlnet_img)
+
+ images = pipe(
+     prompt,
+     negative_prompt=negative_prompt,
+     image=controlnet_img,
+     controlnet_conditioning_scale=controlnet_conditioning_scale,
+     width=new_width,
+     height=new_height,
+     num_inference_steps=30,
+ ).images
+
+ # png format usually gives better image quality than jpg or webp, but files are much bigger
+ images[0].save("your image save path")
+ ```
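The resolution-bucketing step above (scaling so the pixel count matches the 1024×1024 training bucket while preserving aspect ratio) can be isolated into a small helper; a minimal sketch, with the hypothetical name `bucket_resize_dims`:

```python
import math

def bucket_resize_dims(width: int, height: int, target_pixels: int = 1024 * 1024):
    """Scale (width, height) so their product is close to target_pixels,
    preserving the aspect ratio, as in the snippet above."""
    ratio = math.sqrt(target_pixels / (width * height))
    return int(width * ratio), int(height * ratio)
```

For example, a 2048×1024 input maps to 1448×724, whose pixel count is within a fraction of a percent of the 1024×1024 bucket.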
+
+
+ ## Evaluation Data
+ HumanArt [https://github.com/IDEA-Research/HumanArt]: 2000 images with ground-truth pose annotations were selected to generate images and calculate mAP.
+
+
+ ## Quantitative Result
+ | metric | xinsir/controlnet-openpose-sdxl-1.0 | lllyasviel/control_v11p_sd15_openpose | thibaud/controlnet-openpose-sdxl-1.0 |
+ |-------|-------|-------|-------|
+ | mAP | **0.357** | 0.326 | 0.209 |
+
+ This is the state-of-the-art openpose ControlNet among the open-source models compared above.
config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "_class_name": "ControlNetModel",
+   "_diffusers_version": "0.20.0.dev0",
+   "act_fn": "silu",
+   "addition_embed_type": "text_time",
+   "addition_embed_type_num_heads": 64,
+   "addition_time_embed_dim": 256,
+   "attention_head_dim": [
+     5,
+     10,
+     20
+   ],
+   "block_out_channels": [
+     320,
+     640,
+     1280
+   ],
+   "class_embed_type": null,
+   "conditioning_channels": 3,
+   "conditioning_embedding_out_channels": [
+     16,
+     32,
+     96,
+     256
+   ],
+   "controlnet_conditioning_channel_order": "rgb",
+   "cross_attention_dim": 2048,
+   "down_block_types": [
+     "DownBlock2D",
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D"
+   ],
+   "downsample_padding": 1,
+   "encoder_hid_dim": null,
+   "encoder_hid_dim_type": null,
+   "flip_sin_to_cos": true,
+   "freq_shift": 0,
+   "global_pool_conditions": false,
+   "in_channels": 4,
+   "layers_per_block": 2,
+   "mid_block_scale_factor": 1,
+   "norm_eps": 1e-05,
+   "norm_num_groups": 32,
+   "num_attention_heads": null,
+   "num_class_embeds": null,
+   "only_cross_attention": false,
+   "projection_class_embeddings_input_dim": 2816,
+   "resnet_time_scale_shift": "default",
+   "transformer_layers_per_block": [
+     1,
+     2,
+     10
+   ],
+   "upcast_attention": null,
+   "use_linear_projection": true
+ }
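A quick sanity check on the config above: assuming SDXL's conventional per-head width of 64 channels, each `attention_head_dim` entry multiplied by 64 matches the corresponding `block_out_channels` entry (the values are copied from config.json; the check itself is only illustrative):

```python
# values from config.json above
attention_head_dim = [5, 10, 20]
block_out_channels = [320, 640, 1280]
head_width = 64  # per-head channel width conventionally used by SDXL attention blocks

for heads, channels in zip(attention_head_dim, block_out_channels):
    # each block's width is (number of heads) * (per-head width)
    assert heads * head_width == channels
```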
diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8524e557a7df60d081f5d4a0eb109967d107df217943bf88c2d99b9ebcc06c5
+ size 2502139104
diffusion_pytorch_model_twins.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54a2afb1bd21349e475566e5428884bc937a4caecf863b29dea08acc40612fa4
+ size 2502139104
masonry0.webp ADDED

Git LFS Details

  • SHA256: 69984038028960eef77fc195ec81568b8803b0ee4371e089bc85e083d914745e
  • Pointer size: 132 Bytes
  • Size of remote file: 2.01 MB
masonry_real.webp ADDED

Git LFS Details

  • SHA256: e1467a55d77df5b897e6aeb71b6bad5484bbf10802a8ebeb2f5fb42d41523f7d
  • Pointer size: 132 Bytes
  • Size of remote file: 2.04 MB