PrometheusProject vslinx committed on
Commit 652cbfa · 0 Parent(s):

Duplicate from vslinx/ComfyUIDetailerWorkflow-vslinx


Co-authored-by: David <vslinx@users.noreply.huggingface.co>

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .gitattributes +37 -0
  2. README.md +250 -0
  3. TODO-script.md +3 -0
  4. models/IPAdapter/noobIPAMARK1_mark1.safetensors +3 -0
  5. models/README.md +1 -0
  6. windows-nvidia.bat +326 -0
  7. workflows/IMG2IMG/v4.0/IMG2IMG-ADetailer-v4.0-vslinx.json +0 -0
  8. workflows/IMG2IMG/v4.0/changelog.md +5 -0
  9. workflows/IMG2IMG/v4.0/img2img_zoomin.png +3 -0
  10. workflows/IMG2IMG/v4.0/img2img_zoomout.png +3 -0
  11. workflows/IMG2IMG/v4.0/sample_workflow_img2img.png +3 -0
  12. workflows/IMG2IMG/v4.1/Compare-Table1.png +3 -0
  13. workflows/IMG2IMG/v4.1/Compare-Table2.png +3 -0
  14. workflows/IMG2IMG/v4.1/Compare-Table3.png +3 -0
  15. workflows/IMG2IMG/v4.1/Compare-Table4.png +3 -0
  16. workflows/IMG2IMG/v4.1/Compare-Table5.png +3 -0
  17. workflows/IMG2IMG/v4.1/Compare-Table6.png +3 -0
  18. workflows/IMG2IMG/v4.1/IMG2IMG-ADetailer-v4.1-vslinx.json +0 -0
  19. workflows/IMG2IMG/v4.1/changelog.md +3 -0
  20. workflows/IMG2IMG/v4.1/workflow.png +3 -0
  21. workflows/IMG2IMG/v4.2/IMG2IMG-ADetailer-v4.2-vslinx.json +0 -0
  22. workflows/IMG2IMG/v4.2/IMG2IMG_ADetailer_2025-07-27-030843.png +3 -0
  23. workflows/IMG2IMG/v4.2/changelog.md +5 -0
  24. workflows/IMG2IMG/v4.2/img2img-1.png +3 -0
  25. workflows/IMG2IMG/v4.2/img2img-2.png +3 -0
  26. workflows/IMG2IMG/v4.2/img2img-fullpreview.png +3 -0
  27. workflows/IMG2IMG/v4.2/workflow-img2img.png +3 -0
  28. workflows/IMG2IMG/v4.3/IMG2IMG-ADetailer-v4.3-vslinx.json +0 -0
  29. workflows/IMG2IMG/v4.3/IMG2IMG_ADetailer_2025-08-11-023923.png +3 -0
  30. workflows/IMG2IMG/v4.3/changelog.md +3 -0
  31. workflows/IMG2IMG/v4.3/guide backup/Full IMG2IMG Guide 23.08.2025 - PAGE1.png +3 -0
  32. workflows/IMG2IMG/v4.3/guide backup/Full IMG2IMG Guide 23.08.2025 - PAGE2.png +3 -0
  33. workflows/IMG2IMG/v4.3/guide backup/guide.md +672 -0
  34. workflows/IMG2IMG/v4.3/img2img-1.png +3 -0
  35. workflows/IMG2IMG/v4.3/img2img-2.png +3 -0
  36. workflows/IMG2IMG/v4.3/img2img-fullpreview.png +3 -0
  37. workflows/IMG2IMG/v4.3/workflow-img2img.png +3 -0
  38. workflows/IMG2IMG/v4.4/IMG2IMG-ADetailer-v4.4-vslinx.json +0 -0
  39. workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012249.png +3 -0
  40. workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012250.png +3 -0
  41. workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012251.png +3 -0
  42. workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012251_01.png +3 -0
  43. workflows/IMG2IMG/v4.4/changelog.md +13 -0
  44. workflows/IMG2IMG/v4.4/workflow-txt2img.png +3 -0
  45. workflows/IMG2IMG/v4.4/workflow.png +3 -0
  46. workflows/IMG2IMG/v4.5/IMG2IMG-ADetailer-v4.5-vslinx.json +0 -0
  47. workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142407.png +3 -0
  48. workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142407_01.png +3 -0
  49. workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142408.png +3 -0
  50. workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142409.png +3 -0
.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.json -text
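As a quick sanity check of the attribute rules above, here is a minimal sketch (not part of the repository) that parses `.gitattributes`-style lines and reports which patterns are routed through Git LFS, assuming the `pattern attr1 attr2 ...` layout shown:

```python
# Minimal sketch: list which .gitattributes patterns route through Git LFS.
# Assumes each line has the form "pattern attr1 attr2 ..." as above.

def lfs_patterns(gitattributes_text: str) -> list[str]:
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        # A pattern is LFS-tracked when one of its attributes is "filter=lfs".
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

sample = """\
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.json -text
"""
print(lfs_patterns(sample))  # *.json is excluded: it only disables text conversion
```

Note that `*.json -text` in the file above is the opposite case: JSON workflows stay as regular (non-LFS) blobs, with text conversion disabled.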
README.md ADDED
@@ -0,0 +1,250 @@
+ ComfyUI Detailer/ADetailer Workflow
+ ===================================
+
+ Requirements for each version are listed below or can be found inside a **Note** in the Workflow itself.
+
+ Because of the many connections among the nodes, I **highly** recommend turning off the link visibility by clicking the **"Toggle Link visibility"** (Eye icon) in the bottom right of ComfyUI.
+
+ [![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ff81b9c8-74d4-44e5-9380-72d250195779/width=525/ff81b9c8-74d4-44e5-9380-72d250195779.jpeg)](https://civitai.com/articles/15480)
+
+ Description
+ -----------
+
+ I originally came from A1111 WebUI to ComfyUI and, honestly, a lot of things felt way more complicated than they needed to be. I couldn’t find workflows that were both visually pleasing _and_ showed me all the important options without requiring a deep understanding of every little thing in ComfyUI.
+
+ Over a few months I learned the ins and outs of Comfy and built my own very barebones, simple workflow (v1) with one main goal: streamline the process and make it visually understandable. I released it here thinking it might help others who also want to make the jump from A1111 to Comfy.
+
+ Over the last year I’ve kept adding more and more features as people started using the workflow and requesting things. The workflow has grown a lot, but I’m still trying to keep the same ease of use that this whole journey started with. The main goal is still the same: give you a lot of powerful options in a layout that’s as clean, readable, and user-friendly as ComfyUI will allow.
+
+ At this point I feel like I have a pretty solid understanding of how Comfy works, and I’ve even created my own custom nodes to add missing functionality so this workflow could match the ideas I had for it. I try to hide as much of the technical complexity as possible, but if you’re ever curious or confused about anything, please feel free to ask!
+
+ Thanks to all of your suggestions, the workflow now includes features like:
+
+ * Single-image and batch generation
+
+ * Automatic detailers for specific body parts
+
+ * Upscaling
+
+ * v-pred models
+
+ * LoRAs
+
+ * ControlNet
+
+ * IPAdapter
+
+ * Hires fix / refiner
+
+ * Manual inpainting
+
+
+ Thank you to every single person who uses this workflow, donates Buzz, or shares images on this page.
+ I really appreciate you taking those extra steps to support and promote the project. ♥️
+
+ Requirements
+ ------------
+
+ **v4.4 and above require ComfyUI version 0.3.51 or above and also need the** [**frontend**](https://github.com/Comfy-Org/ComfyUI_frontend?tab=readme-ov-file#nightly-releases) **to be AT LEAST version 1.24.3.**
+
+ ### **v5.0** - Full List
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+ * [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
+
+ * [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
+
+ * [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
+
+ * [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
+
+ * [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
+
+ * [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+
+ * [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus)
+
+ * [ComfyUI-vslinx-nodes](https://github.com/vslinx/ComfyUI-vslinx-nodes)
+
+ * [ComfyUI-Inpaint-CropAndStitch](https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch)
+
+ * [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
+
+ * [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
+
+ * [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+ * [wlsh_nodes](https://github.com/wallish77/wlsh_nodes)
+
+
+ ### **v4.4** - Additions to v4.3 **(IMG2IMG Only)**
+
+ * [ComfyUI-vslinx-nodes](https://github.com/vslinx/ComfyUI-vslinx-nodes)
+
+ * Otherwise the same as **v4-4.3** below
+
+
+ ### **v4.2-4.3** - Additions to v4.1
+
+ * [WLSH Nodes](https://github.com/wallish77/wlsh_nodes)
+
+ * Otherwise the same as **v4.1** (incl. **v4**) below
+
+
+ ### v4.1 - Additions to v4 (IMG2IMG Only)
+
+ * [ComfyUI-WD14-Tagger](https://github.com/pythongosssss/ComfyUI-WD14-Tagger)
+
+ * Otherwise the same as **v4** below
+
+
+ ### v4 - Full List
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+ * [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
+
+ * [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
+
+ * [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
+
+ * [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
+
+ * [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
+
+ * [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+
+ * [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus)
+
+ * [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
+
+ * [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
+
+ * [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
+
+ * [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+
+ ### v3 - v3.2 - Full List
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+ * [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
+
+ * [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
+
+ * [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
+
+ * [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
+
+ * [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
+
+ * [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+
+ * [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
+
+ * [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
+
+ * [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
+
+ * [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+
+ ### v2.2 - Additions to v2
+
+ * [ComfyUI_Comfyroll_Nodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
+
+ * Otherwise the same custom nodes as v2, but you can remove [Comfyui-ergouzi-Nodes](https://github.com/11dogzi/Comfyui-ergouzi-Nodes)
+
+
+ ### v2 - Full List
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+ * [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
+
+ * [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
+
+ * [Comfyui-ergouzi-Nodes](https://github.com/11dogzi/Comfyui-ergouzi-Nodes)
+
+ * [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
+
+ * [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
+
+ * [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
+
+ * [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+
+ ### v1 - Full List
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
+
+ * [cg-image-picker](https://github.com/chrisgoringe/cg-image-picker)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+
+ How to use
+ ----------
+
+ Since all of the different versions work differently, you should check the **"How to use"** Note inside the Workflow itself.
+
+ I promise that once you read the explanation in the workflow itself, it'll click and it will be a simple plug-and-play experience.
+
+ It's the simplest I could have made it, coming from someone who only started using ComfyUI 4-5 months ago and had been exclusively an A1111 WebUI user before.
+
+ When were which functionalities added?
+ --------------------------------------
+
+ Starting from **v3**, ControlNet is included.
+ Starting from **v4**, IPAdapter is included.
+ Starting from **v4.3**, HiRes Fix and Dynamic Prompts (wildcards) are included.
+ Starting from **v4.4**, Refiner is included.
+ Starting from **v5.0**, Manual Inpainting is included.
+
+ Any errors during execution?
+ ----------------------------
+
+ If you're running into any errors during the execution of the workflow, please check the [FAQ of my Guide](https://civitai.com/articles/15480#faq:) first. The guide is written for the IMG2IMG Workflow, but when issues come up that people run into frequently, I'll add the solutions and an explanation of what's happening to that FAQ section.
+ If you can't find the problem you're running into there, feel free to write me a comment on the model page **and include the logs from your ComfyUI console that show the error**, so that I can help you and other people might benefit from it as well.
+
+ Feedback
+ --------
+
+ I'd love to see your feedback or opinion on the workflow.
+ This is the first workflow I have ever created myself from scratch and I'd love to hear what you think of it.
+
+ If you want to do me a huge favor, you can post your results on this Model page.
+ I'll make sure to send some Buzz your way!
TODO-script.md ADDED
@@ -0,0 +1,3 @@
+ - add support for downloading just the models and nodes instead of everything (use an existing ComfyUI folder)
+ - also update all nodes that are already there to the newest version
+ - fix the check for ControlNet models (currently there is no check whether they already exist)
models/IPAdapter/noobIPAMARK1_mark1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cdb6a00be1b12579745b5bed0c7b83f0869073d8a864fa8cd50a9356601919a
+ size 1405172056
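The file above is not the model itself but a Git LFS pointer: a small text stub that records the hash and byte size of the real blob. A minimal sketch (not part of the repository) of reading such a pointer, assuming the three-field `key value` layout of LFS spec v1 shown above:

```python
# Minimal sketch: parse a Git LFS pointer file (spec v1) into its fields.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")  # e.g. "sha256:5cdb6a00..."
    return {"algorithm": algo, "digest": digest, "size": int(fields["size"])}

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5cdb6a00be1b12579745b5bed0c7b83f0869073d8a864fa8cd50a9356601919a
size 1405172056
"""
info = parse_lfs_pointer(pointer)
print(info["algorithm"], info["size"])
```

This is also a handy way to verify that a downloaded `.safetensors` file matches the expected size before loading it.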
models/README.md ADDED
@@ -0,0 +1 @@
+ All the models stored here are only stored because I couldn't find an official source and didn't want to rely on third parties to keep the models uploaded. If your model is among these, you can always contact me [here](https://civitai.com/user/vslinx) and I'll remove it at your request.
windows-nvidia.bat ADDED
@@ -0,0 +1,326 @@
+ @echo off
+ setlocal enabledelayedexpansion
+
+ echo.
+ echo ============================
+ echo ComfyUI Auto Installer
+ echo ============================
+ echo.
+
+ REM Define paths
+ set "comfyPath=%CD%\ComfyUI"
+ set "customNodesPath=%comfyPath%\custom_nodes"
+ set "venvPath=%comfyPath%\venv"
+ set "pythonPath=%venvPath%\Scripts\python.exe"
+ set "activateScript=%venvPath%\Scripts\activate.bat"
+
+ REM -------------------------------
+ REM Check if Python is available
+ REM -------------------------------
+ where python >nul 2>&1
+ if %errorlevel% NEQ 0 (
+     echo Python is not found in PATH.
+     echo Would you like to install Python 3.12 now? [Y/N]
+     set /p "PYINSTALL=Your choice [Y/N]: "
+     if /i "!PYINSTALL!"=="Y" (
+         echo Downloading Python 3.12...
+         %SystemRoot%\System32\curl.exe -L -o python-installer.exe https://www.python.org/ftp/python/3.12.3/python-3.12.3-amd64.exe
+         if not exist python-installer.exe (
+             echo Failed to download Python installer.
+             pause
+             exit /b 1
+         )
+         echo Installing Python 3.12...
+         start /wait python-installer.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0
+         del /f python-installer.exe
+     ) else (
+         echo Cannot continue without Python. Exiting...
+         pause
+         exit /b 1
+     )
+ )
+
+ REM Get Python version (python -V may print to stderr on older versions)
+ REM NOTE: a freshly installed Python may not be visible in PATH until a new shell is opened.
+ for /f "tokens=2 delims= " %%A in ('python -V 2^>^&1') do set PYVERSION=%%A
+
+ REM Parse major and minor
+ for /f "tokens=1,2 delims=." %%B in ("%PYVERSION%") do (
+     set "PYMAJOR=%%B"
+     set "PYMINOR=%%C"
+ )
+
+ echo Detected Python version: %PYVERSION%
+ if not "%PYMAJOR%.%PYMINOR%"=="3.12" (
+     echo Your current Python version is %PYVERSION%. It may not be supported.
+     echo Would you like to:
+     echo [Y] Install Python 3.12
+     echo [N] Continue using current version
+     set /p "PYCHOICE=Choose [Y/N]: "
+     if /i "!PYCHOICE!"=="Y" (
+         echo Downloading Python 3.12...
+         %SystemRoot%\System32\curl.exe -L -o python-installer.exe https://www.python.org/ftp/python/3.12.3/python-3.12.3-amd64.exe
+         if not exist python-installer.exe (
+             echo Failed to download Python installer.
+             pause
+             exit /b 1
+         )
+         echo Installing Python 3.12...
+         start /wait python-installer.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0
+         del /f python-installer.exe
+     )
+ )
+
+ REM -------------------------------
+ REM Check Git
+ REM -------------------------------
+ git --version >nul 2>&1
+ if %errorlevel% NEQ 0 (
+     echo Git not found. Installing Git...
+     powershell -Command "& {Invoke-WebRequest -Uri 'https://github.com/git-for-windows/git/releases/download/v2.41.0.windows.3/Git-2.41.0.3-64-bit.exe' -OutFile 'git_installer.exe'}"
+     REM Delayed expansion is required here: %errorlevel% would be expanded
+     REM when the outer block is parsed, not after the download runs.
+     if !errorlevel! NEQ 0 (
+         echo Failed to download Git.
+         pause
+         exit /b 1
+     )
+     start /wait git_installer.exe /VERYSILENT
+     del /f git_installer.exe
+     echo Git installed successfully.
+ )
+
+ REM -------------------------------
+ REM Clone ComfyUI
+ REM -------------------------------
+ if not exist "%comfyPath%" (
+     echo Cloning ComfyUI...
+     git clone https://github.com/comfyanonymous/ComfyUI.git "%comfyPath%"
+ ) else (
+     echo ComfyUI already exists. Skipping clone.
+ )
+
+ REM -------------------------------
+ REM Create venv
+ REM -------------------------------
+ if not exist "%venvPath%" (
+     echo Creating virtual environment...
+     python -m venv "%venvPath%"
+ )
+
+ REM -------------------------------
+ REM Activate venv and install deps
+ REM -------------------------------
+ call "%activateScript%"
+ echo Installing CUDA-enabled torch manually...
+ "%pythonPath%" -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
+
+ echo Installing dependencies...
+ "%pythonPath%" -m pip install --upgrade pip
+ "%pythonPath%" -m pip install -r "%comfyPath%\requirements.txt"
+
+ REM -------------------------------
+ REM Clone custom nodes
+ REM -------------------------------
+ if not exist "%customNodesPath%" (
+     mkdir "%customNodesPath%"
+ )
+
+ echo Cloning custom nodes...
+
+ set repos[0]=https://github.com/ltdrdata/ComfyUI-Impact-Pack
+ set repos[1]=https://github.com/ltdrdata/ComfyUI-Impact-Subpack
+ set repos[2]=https://github.com/Smirnov75/ComfyUI-mxToolkit
+ set repos[3]=https://github.com/yolain/ComfyUI-Easy-Use
+ set repos[4]=https://github.com/pythongosssss/ComfyUI-Custom-Scripts
+ set repos[5]=https://github.com/crystian/ComfyUI-Crystools
+ set repos[6]=https://github.com/alexopus/ComfyUI-Image-Saver
+ set repos[7]=https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
+ set repos[8]=https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet
+ set repos[9]=https://github.com/kijai/ComfyUI-KJNodes
+ set repos[10]=https://github.com/Fannovel16/comfyui_controlnet_aux
+ set repos[11]=https://github.com/vslinx/ComfyUI-vslinx-nodes.git
+ set repos[12]=https://github.com/chrisgoringe/cg-image-filter
+ set repos[13]=https://github.com/rgthree/rgthree-comfy
+ set repos[14]=https://github.com/cubiq/ComfyUI_IPAdapter_plus
+ set repos[15]=https://github.com/pythongosssss/ComfyUI-WD14-Tagger.git
+ set repos[16]=https://github.com/Comfy-Org/ComfyUI-Manager
+ set repos[17]=https://github.com/wallish77/wlsh_nodes.git
+ set repos[18]=https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch
+
+ for /L %%i in (0,1,18) do (
+     set "repo=!repos[%%i]!"
+     for %%A in (!repo!) do (
+         set "folderName=%%~nxA"
+         set "targetPath=%customNodesPath%\!folderName!"
+
+         if not exist "!targetPath!" (
+             echo - Cloning !folderName!...
+             git clone !repo! "!targetPath!"
+         ) else (
+             echo - !folderName! already exists. Skipping.
+         )
+
+         REM Install requirements if available
+         if exist "!targetPath!\requirements.txt" (
+             echo Installing requirements for !folderName!...
+             "%pythonPath%" -s -m pip install -r "!targetPath!\requirements.txt"
+         )
+     )
+ )
+
+ REM -------------------------------
+ REM Download JSON workflow files
+ REM -------------------------------
+ set "workflowFolder=%comfyPath%\user\default\workflows"
+ if not exist "%workflowFolder%" (
+     mkdir "%workflowFolder%"
+ )
+
+ REM -------------------------------
+ REM Download SAM model (Segment Anything)
+ REM -------------------------------
+ set "samFolder=%comfyPath%\models\sams"
+ if not exist "%samFolder%" (
+     mkdir "%samFolder%"
+ )
+
+ set "samFile=%samFolder%\sam_vit_b_01ec64.pth"
+ if exist "%samFile%" (
+     echo - sam_vit_b_01ec64.pth already exists. Skipping download.
+ ) else (
+     echo Downloading SAM model...
+     %SystemRoot%\System32\curl.exe -L -o "%samFile%" ^
+         https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
+ )
+
+ echo Checking workflow files...
+
+ REM Define workflow filenames and URLs
+ set workflow[0]=TXT2IMG-ADetailer-v5.0-vslinx.json
+ set url[0]=https://huggingface.co/vslinx/ComfyUIDetailerWorkflow-vslinx/resolve/main/workflows/TXT2IMG/v5.0/TXT2IMG-ADetailer-v5.0-vslinx.json
+
+ set workflow[1]=IMG2IMG-ADetailer-v5.0-vslinx.json
+ set url[1]=https://huggingface.co/vslinx/ComfyUIDetailerWorkflow-vslinx/resolve/main/workflows/IMG2IMG/v5.0/IMG2IMG-ADetailer-v5.0-vslinx.json
+
+ REM Loop through and download if not already present
+ for /L %%i in (0,1,1) do (
+     call set "file=%%workflow[%%i]%%"
+     call set "link=%%url[%%i]%%"
+     set "path=%workflowFolder%\!file!"
+
+     if exist "!path!" (
+         echo - !file! already exists. Skipping download.
+     ) else (
+         echo ↓ Downloading !file!...
+         %SystemRoot%\System32\curl.exe -L -o "!path!" "!link!"
+     )
+ )
+
+ echo.
+ echo Would you like to download required models (ControlNet, IPAdapter, Upscale Model, etc.)?
+ echo [Y] Yes
+ echo [N] No
+ set /p "download_models=Choose [Y/N]: "
+
+ if /i "%download_models%"=="Y" (
+     echo Downloading required models...
+
+     REM Create folders if they don't exist
+     if not exist "%comfyPath%\models\vae" mkdir "%comfyPath%\models\vae"
+     if not exist "%comfyPath%\models\upscale_models" mkdir "%comfyPath%\models\upscale_models"
+     if not exist "%comfyPath%\models\ipadapter" mkdir "%comfyPath%\models\ipadapter"
+     if not exist "%comfyPath%\models\clip_vision" mkdir "%comfyPath%\models\clip_vision"
+     if not exist "%comfyPath%\models\controlnet" mkdir "%comfyPath%\models\controlnet"
+
+     REM Download files
+     echo Downloading VAE...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\vae\sdxl_vae.safetensors" ^
+         https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
+
+     echo Downloading Upscale Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\upscale_models\4x_foolhardy_Remacri.pth" ^
+         https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/resolve/main/4x_foolhardy_Remacri.pth
+
+     echo Downloading IPAdapter model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\ipadapter\noobIPAMARK1_mark1.safetensors" ^
+         https://huggingface.co/vslinx/ComfyUIDetailerWorkflow-vslinx/resolve/main/models/IPAdapter/noobIPAMARK1_mark1.safetensors
+
+     echo Downloading CLIP Vision Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\clip_vision\CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors" ^
+         https://huggingface.co/XuminYu/example_safetensors/resolve/4b89d7ebd99a9913f0abbec4bf0f54932b11d243/CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
+
+     echo Downloading ControlNet Canny Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsCanny.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-canny/resolve/main/noob_sdxl_controlnet_canny.safetensors
+
+     echo Downloading ControlNet DepthMidas Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsDepthMidasv1-1.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-depth_midas-v1-1/resolve/main/noob-sdxl-controlnet-depth-midas-v1-1.safetensors
+
+     echo Downloading ControlNet Lineart-Anime Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsLineartAnime.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-lineart_anime/resolve/main/noob-sdxl-controlnet-lineart_anime.safetensors
+
+     echo Downloading ControlNet Lineart-Realistic Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsLineartRealistic.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-lineart_realistic/resolve/main/noob-sdxl-controlnet-lineart_realistic.safetensors
+
+     echo Downloading ControlNet Manga line Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsMangaLine.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-manga_line/resolve/main/noob-sdxl-controlnet-manga-line.safetensors
+
+     echo Downloading ControlNet OpenPose Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsOpenPose.safetensors" ^
+         https://huggingface.co/Laxhar/noob_openpose/resolve/main/openpose_pre.safetensors
+
+     echo Downloading ControlNet Softedge Hed Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsSoftedgeHed.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-softedge_hed/resolve/main/noob-sdxl-controlnet-softedge_hed.safetensors
+
+     echo Downloading ControlNet Depth Model...
+     %SystemRoot%\System32\curl.exe -L -o "%comfyPath%\models\controlnet\noobaiXLControlnet_epsDepth.safetensors" ^
+         https://huggingface.co/Eugeoter/noob-sdxl-controlnet-depth/resolve/main/noob_sdxl_controlnet_depth.safetensors
+
+     echo All required models have been downloaded.
+ ) else (
+     echo Skipping model downloads.
+ )
+
+ REM -------------------------------
+ REM Setup Complete
+ REM -------------------------------
+ echo.
+ echo ============================
+ echo Setup Complete!
+ echo ============================
+
+ REM -------------------------------
+ REM Offer to create Desktop shortcut
+ REM -------------------------------
+ echo Would you like to create a desktop shortcut to start ComfyUI?
+ echo [Y] Yes
+ echo [N] No
+ set /p "MAKE_SHORTCUT=Choose [Y/N]: "
+
+ if /i "!MAKE_SHORTCUT!"=="Y" (
+     set "shortcutBat=%USERPROFILE%\Desktop\Start_ComfyUI.bat"
+     echo @echo off > "!shortcutBat!"
+     echo cd /d "%comfyPath%" >> "!shortcutBat!"
+     echo call "%venvPath%\Scripts\activate.bat" >> "!shortcutBat!"
+     echo python main.py >> "!shortcutBat!"
+     echo pause >> "!shortcutBat!"
+     echo Shortcut created on Desktop as Start_ComfyUI.bat
+ )
+
+ REM -------------------------------
+ REM Ask to start ComfyUI
+ REM -------------------------------
+ echo Would you like to start ComfyUI now?
+ echo [Y] Yes
+ echo [N] No
+ set /p "STARTNOW=Choose [Y/N]: "
+ if /i "!STARTNOW!"=="Y" (
+     echo Starting ComfyUI in a new shell...
+     start "" cmd /k ^
+         "cd /d "%comfyPath%" && call "%venvPath%\Scripts\activate.bat" && python main.py"
+ )
+
+ pause
workflows/IMG2IMG/v4.0/IMG2IMG-ADetailer-v4.0-vslinx.json ADDED
The diff for this file is too large to render. See raw diff
 
workflows/IMG2IMG/v4.0/changelog.md ADDED
@@ -0,0 +1,5 @@
+ - initial release of img2img workflow
+ - includes detailer and ipadapter
+ - scheduler, sampler, steps & cfg now apply to detailer
+ - single selectors for denoise of each individual detailer
+ - adapted Manual + Requirements
workflows/IMG2IMG/v4.0/img2img_zoomin.png ADDED

Git LFS Details

  • SHA256: 50708f3b8886de28cd89bcb920ba688fda6d7bd2f607c18a34e883ebf85b1580
  • Pointer size: 131 Bytes
  • Size of remote file: 150 kB
workflows/IMG2IMG/v4.0/img2img_zoomout.png ADDED

Git LFS Details

  • SHA256: 94034cbea5a7ab97b1cf25928f42a8db3be1bff7a5466e74a1d4926987b40244
  • Pointer size: 131 Bytes
  • Size of remote file: 188 kB
workflows/IMG2IMG/v4.0/sample_workflow_img2img.png ADDED

Git LFS Details

  • SHA256: 71bc7957d6f5eb9353d11d0f67d1cee91308914916edda859c1c98d212c3ded9
  • Pointer size: 133 Bytes
  • Size of remote file: 18.7 MB
workflows/IMG2IMG/v4.1/Compare-Table1.png ADDED

Git LFS Details

  • SHA256: 90a2f534df1c76aee8e40ac5c77d5e405988c859d903a7042a625a690cd65dca
  • Pointer size: 132 Bytes
  • Size of remote file: 2.29 MB
workflows/IMG2IMG/v4.1/Compare-Table2.png ADDED

Git LFS Details

  • SHA256: ec4a28e6035fb73db8778aa95ae2290a631895dc36f83847b8069d1c2095116b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.9 MB
workflows/IMG2IMG/v4.1/Compare-Table3.png ADDED

Git LFS Details

  • SHA256: a0dbd48f0f8d3107c6788a532ba31b7934c641bed0abaff54bc14f361e1ff265
  • Pointer size: 132 Bytes
  • Size of remote file: 2.19 MB
workflows/IMG2IMG/v4.1/Compare-Table4.png ADDED

Git LFS Details

  • SHA256: 2635b14f5935a7326368b1cc13ee0b13008fef61c294867111baaa65b60831d0
  • Pointer size: 132 Bytes
  • Size of remote file: 1.83 MB
workflows/IMG2IMG/v4.1/Compare-Table5.png ADDED

Git LFS Details

  • SHA256: 045d354ee44f1d82caa0663b27e9cfc2f5fbb7d18c836796658db03c0e67182a
  • Pointer size: 132 Bytes
  • Size of remote file: 2.3 MB
workflows/IMG2IMG/v4.1/Compare-Table6.png ADDED

Git LFS Details

  • SHA256: ada9d6c62aa7b94295de774f93377e4bd189de49791a96896e710eee8fac83a0
  • Pointer size: 132 Bytes
  • Size of remote file: 2.34 MB
workflows/IMG2IMG/v4.1/IMG2IMG-ADetailer-v4.1-vslinx.json ADDED
The diff for this file is too large to render. See raw diff
 
workflows/IMG2IMG/v4.1/changelog.md ADDED
@@ -0,0 +1,3 @@
+ - IMG2IMG Transfer for copying image composition
+ - re-arrangement of nodes
+ - compatibility updates
workflows/IMG2IMG/v4.1/workflow.png ADDED

Git LFS Details

  • SHA256: 202e42eaa1951df4d091ad0d1165e7be9490111fef9fb9b0a5dcc38dde20079b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.07 MB
workflows/IMG2IMG/v4.2/IMG2IMG-ADetailer-v4.2-vslinx.json ADDED
The diff for this file is too large to render. See raw diff
 
workflows/IMG2IMG/v4.2/IMG2IMG_ADetailer_2025-07-27-030843.png ADDED

Git LFS Details

  • SHA256: 96a4c30bb9f341109ca99cc011444affb5aeab9420f1f0ff65402a2ed400380c
  • Pointer size: 132 Bytes
  • Size of remote file: 3.34 MB
workflows/IMG2IMG/v4.2/changelog.md ADDED
@@ -0,0 +1,5 @@
+ - Added Low VRAM options for IPAdapter Style Transfer and IMG2IMG Transfer
+ - Fixed the Scheduler Selector after an image-saver node update broke it (you'll need to update the custom node)
+ - Added an upscaling factor to control the upscaling instead of leaving it to the upscale model
+ - Optional use of the Sage Attention patch for a global speed increase if you have Triton installed
+ - Updated the Manual inside the workflow
workflows/IMG2IMG/v4.2/img2img-1.png ADDED

Git LFS Details

  • SHA256: 4d7f3ce27cba4a82447db75c4f0188d19f25cd5b3bb8b9785adf93c960f24d08
  • Pointer size: 131 Bytes
  • Size of remote file: 452 kB
workflows/IMG2IMG/v4.2/img2img-2.png ADDED

Git LFS Details

  • SHA256: 62136b0eb1aeb905e9ae581ccbb8bbbbde949bfca0c859faf4955a7858b5ac6a
  • Pointer size: 131 Bytes
  • Size of remote file: 332 kB
workflows/IMG2IMG/v4.2/img2img-fullpreview.png ADDED

Git LFS Details

  • SHA256: 61d03c904a2b68a39f823ec4615444ca086a7663d9f538d62a8125dbf79bd94e
  • Pointer size: 131 Bytes
  • Size of remote file: 285 kB
workflows/IMG2IMG/v4.2/workflow-img2img.png ADDED

Git LFS Details

  • SHA256: 168a7cf3dfe00a00caa3c663423165899fbb4e864339b29bd5260c552b3c53a4
  • Pointer size: 131 Bytes
  • Size of remote file: 987 kB
workflows/IMG2IMG/v4.3/IMG2IMG-ADetailer-v4.3-vslinx.json ADDED
The diff for this file is too large to render. See raw diff
 
workflows/IMG2IMG/v4.3/IMG2IMG_ADetailer_2025-08-11-023923.png ADDED

Git LFS Details

  • SHA256: be85334b60bf94c8e275cc8aeb7d633b1cd5737aaa8563601fa5c7bb0da23907
  • Pointer size: 132 Bytes
  • Size of remote file: 3.69 MB
workflows/IMG2IMG/v4.3/changelog.md ADDED
@@ -0,0 +1,3 @@
+ - added high-res fix & color fix (not recommended on upscaled images above 2x / larger than 2048x3072)
+ - fixed IMG2IMG Transfer sampler not using settings from the "Sampler Settings" group
+ - adapted guide notes in workflow
workflows/IMG2IMG/v4.3/guide backup/Full IMG2IMG Guide 23.08.2025 - PAGE1.png ADDED

Git LFS Details

  • SHA256: 4b450177937e917591e14889650750627f86451bb950f454c6184f2d64230db2
  • Pointer size: 132 Bytes
  • Size of remote file: 8.72 MB
workflows/IMG2IMG/v4.3/guide backup/Full IMG2IMG Guide 23.08.2025 - PAGE2.png ADDED

Git LFS Details

  • SHA256: bda10f37e7b4365f55f2eaf4e27bce17c1b1805476a41bdf1886c68b2a8244ac
  • Pointer size: 132 Bytes
  • Size of remote file: 6.17 MB
workflows/IMG2IMG/v4.3/guide backup/guide.md ADDED
@@ -0,0 +1,672 @@
+ Step-By-Step Guide
+ ComfyUI - IMG2IMG All-in-One Workflow
+ ----------------------------------------------------------
+
+ _This guide is for the IMG2IMG workflow you can find [here](https://civitai.com/models/1297813?modelVersionId=2104022)._
+
+ **Workflow description:**
+ -------------------------
+
+ This workflow can be used for image-to-image generation.
+ It lets you select an initial input image and then decide whether you'd like to do upscaling, detailing, and/or image transfer to create a completely new image based on the composition, styling, or features of the initial image.
+ You can find some basic examples of what you can use this workflow for under the "Scenarios" section of this guide.
+
+ **Prerequisites:**
+ ------------------
+
+ ### 📦 **Custom Nodes:**
+
+ * [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
+
+ * [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
+
+ * [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
+
+ * [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
+
+ * [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
+
+ * [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
+
+ * [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
+
+ * [ComfyUI\_Comfyroll\_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
+
+ * [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
+
+ * [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
+
+ * [ComfyUI\_IPAdapter\_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus)
+
+ * [ComfyUI-WD14-Tagger](https://github.com/pythongosssss/ComfyUI-WD14-Tagger)
+
+ * [comfyui\_controlnet\_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
+
+ * [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
+
+ * [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
+
+ * [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
+
+ * [wlsh\_nodes](https://github.com/wallish77/wlsh_nodes)
+
+
+ ### 📂 **Files:**
+
+ **VAE** \- [sdxl\_vae.safetensors](https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors)
+ in models/vae
+
+ **SAMLoader** model for detailing - [sam\_vit\_b\_01ec64.pth](https://github.com/facebookresearch/segment-anything?tab=readme-ov-file#model-checkpoints)
+ in models/sams
+
+ **Upscale Model (My recommendation, only required if you want to do upscaling)**
+ **4x Foolhardy Remacri** \- [4x\_foolhardy\_Remacri.pth](https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/blob/main/4x_foolhardy_Remacri.pth)
+ **4x RealESRGAN Anime** \- [RealESRGAN\_x4plus\_anime\_6B.pth](https://openmodeldb.info/models/4x-realesrgan-x4plus-anime-6b)
+ in models/upscale\_models
+
+ **IPAdapter (Only required if you want to copy the art style of images)**
+ **Clip-Vision** \- [CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors](https://huggingface.co/h94/IP-Adapter/blob/main/sdxl_models/image_encoder/model.safetensors) (recommended to rename the model file)
+ in models/clip\_vision
+
+ **IPAdapter** \- [noobIPAMARK1\_mark1.safetensors](https://civitai.com/models/1000401?modelVersionId=1121145)
+ in models/ipadapter
+
+ **ControlNet (My recommendations, only required if you want to do IMG2IMG Transfer)**
+ **Canny** \- [noobaiXLControlnet\_epsCanny.safetensors](https://civitai.com/models/929685?modelVersionId=1040650)
+ **Depth Midas v1.1** \- [noobaiXLControlnet\_DepthMidasv11.safetensors](https://civitai.com/models/929685?modelVersionId=1091944)
+ **OpenPose** \- [noobaiXLControlnet\_epsOpenPose.safetensors](https://huggingface.co/Laxhar/noob_openpose/tree/main) (rename)
+ **LineartAnime** \- [noobaiXLControlnet\_LineartAnime.safetensors](https://civitai.com/models/929685?modelVersionId=1049196)
+ in models/controlnet
+
+ **Detailer (My recommendations, only required if you want to do detailing)**
+ **Face** \- [maskdetailer-seg.pt](https://civitai.com/models/1222100/mask-adetailer-face-detailer-for-eyes-eyebrows-and-nose), [99coins\_anime\_girl\_face\_m\_seg.pt](https://civitai.com/models/1076050/adetailer-anime-girl-face-segmentation)
+ **Eyes** \- [Eyes.pt](https://civitai.com/models/150925/eyes-detection-adetailer), [Eyeful\_v2-Paired.pt](https://civitai.com/models/178518/eyeful-or-robust-eye-detection-for-adetailer-comfyui), [PitEyeDetailer-v2-seg.pt](https://huggingface.co/camenduru/ultralytics/blob/main/PitEyeDetailer-v2-seg.pt)
+ **Nose** \- [adetailerNose\_.pt](https://www.mediafire.com/file/f6buda8p06cosn6/adetailerNose_.pt/file) (works only/best with anthro noses)
+ **Lips/Mouth** \- [lips\_v1.pt](https://civitai.com/models/142240/adetailer-after-detailer-lips-model), [adetailer2dMouth\_v10.pt](https://civitai.com/models/1306938/adetailer-2d-mouth-detection-yolosegmentation)
+ **Hands** \- [hand\_yolov8s.pt](https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov8s.pt), [hand\_yolov9c.pt](https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov9c.pt)
+ **Nipples** \- [Nipple-yoro11x\_seg.pt](https://civitai.com/models/1132590/nipple-adetailer-for-anime-girls), [nipples\_yolov8s-seg.pt](https://civitai.com/models/490259/adetailer-nipples-model)
+ **Vagina** \- [ntd11\_anime-nsfw\_segm\_v3\_pussy.pt](https://civitai.com/models/1313556/anime-nsfw-detectionadetailer-all-in-one), [pussy\_yolo11s\_seg\_best.pt](https://civitai.com/models/150234/pussy-adetailer)
+ **Penis** \- [cockAndBallDetection2D\_v20.pt](https://civitai.com/models/310687/cock-and-ball-detection-2d-edition-adetailer)
+
+ **Recommended Settings:**
+ ---------------------------
+
+ There are two settings that'll make your user experience at least a thousand percent better, and I highly recommend doing these two small things.
+
+ ### **Link visibility**
+
+ The first is to deactivate link visibility. As you can see in the background of this screenshot, the links have gotten quite complex in their structure, and I can't be bothered adding another 50 re-routes to make it clean. So I recommend toggling link visibility in the bottom right corner of your ComfyUI.
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bd4c411c-ef7e-471d-b179-589c814ca6e7/width=525/bd4c411c-ef7e-471d-b179-589c814ca6e7.jpeg)
+
+ ### **AutoCompleter**
+
+ My second recommendation is to use the autocomplete settings from pythongosssss's custom-scripts plugin (required for this workflow). Simply go to your settings (cogwheel symbol, bottom left corner) and then navigate to the "pysssss" option in the sidebar on the left.
+ Here, activate "Text Autocomplete Enabled" and "Auto-insert comma" as well as "Replace \_ with space". I personally have LoRAs disabled since I'm using a LoRA loader instead of loading them via prompt. These are my complete settings:
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/33efbdd2-c6be-4432-a816-f10375d85c7a/width=525/33efbdd2-c6be-4432-a816-f10375d85c7a.jpeg)
+
+ Once you have set your preferences, you can click on "Manage Custom Words".
+ Go to [this model page](https://civitai.com/models/950325/danboorue621-autocomplete-tag-lists-incl-aliases-krita-ai-support) and download the autotag list incl. aliases. Download the newest version and put the file with "merged" in the name somewhere on your PC.
+ After that, you can click on "Manage Custom Words", paste in the path + filename, and then press the "Load" button to the right. Once you do, the preview should look like this:
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bf46c1bc-528e-4aab-8915-ff8ae7412ae8/width=525/bf46c1bc-528e-4aab-8915-ff8ae7412ae8.jpeg)
+
+ You now have an autocomplete system for your danbooru tags when creating images for Illustrious & NoobAI. You get suggestions for tags, artists, and characters, along with aliases that automatically switch to the correct tag the models were trained on for the best prompt accuracy. It also appends a comma and a space after every tag, which will be important once we talk about "start quality prompts" and IMG2IMG transfer later in this guide. Here is a preview of how it can help you find the right tags:
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/faeb85ef-6618-4d89-be8c-14d1fc6958c8/width=525/faeb85ef-6618-4d89-be8c-14d1fc6958c8.jpeg)
+
+ **General Term Explanation**
+ ----------------------------
+
+ ### **Detailer (ADetailer)**
+
+ The Detailer (often called ADetailer) is used to refine specific parts of an image after the initial generation. It detects body parts or facial features—like eyes, hands, or mouth—using object detection (usually with Ultralytics), and then re-renders those areas at higher quality using targeted prompts and LoRAs. It’s great for fixing anatomy issues or enhancing details like eye color or hand shape.
+
+ ### **IPAdapter (Style/Composition Transfer)**
+
+ IPAdapter is a tool for guiding the style, composition, or identity of your image using a reference image. It uses features from a CLIP model to influence the generation process without directly copying the reference. It’s ideal for tasks like transferring a specific pose, lighting, or even a person’s likeness into a new image while keeping your prompts in control.
+
+ ### **ControlNet (IMG2IMG Transfer)**
+
+ ControlNet is a powerful system that allows you to guide image generation using structural inputs like depth maps, poses, line art, or canny edges. It works in an img2img-like fashion—taking your base image and controlling how the new image is generated based on that structure. It’s extremely useful for tasks where you want consistent composition, pose, or layout across different generations.
+
+ ### **HiRes Fix**
+
+ HiRes Fix is a process that takes your generated image, after upscaling, and then re-renders it with a low denoise value to add detail, improve sharpness, and enhance overall quality at higher resolutions. This is not recommended at resolutions above 2048x3072 (2x upscale with default settings) since it basically repaints the picture again.
+
+ **Node Group Explanation**
+ --------------------------
+
+ ### **Model Backend**
+
+ This is where you select the necessary models for each task in the workflow.
+ Most steps are pretty self-explanatory. If you're using a Checkpoint that was trained with V-Prediction, make sure to enable the **"Is V-Pred Model"** node.
+
+ You can stick with the default SDXL VAE or swap it out for one you prefer—totally up to you.
+
+ The **Upscale Model** does not determine how much the image will be upscaled. While that is usually the case, this workflow uses a factor slider later on that lets you adjust how much the image is upscaled regardless of what the Upscale Model says.
+
+ You only need to download and select the IPAdapter model and the corresponding CLIP model if you plan to use IPAdapter. If you're not sure what that is, check out the **"IPAdapter"** section further down in this guide.
+
+ The **Patch Sage Attention** node is for advanced users only. Installing **Sage Attention** and **Triton** on Windows can be tricky and time-consuming.
+ For image generation, the benefits are minor—around **2–3 seconds faster** per image and **slightly less VRAM** used. It’s more useful for video generation, which isn’t covered here.
+
+ If you still want to try it, you can find a guide [here](https://www.reddit.com/r/comfyui/comments/1hn32jc/step_by_step_video_tutorial_on_installing/).
+
+ All files needed to start generating—except for the Checkpoint—are listed in the **Files** section above.
+
+ ![modelbackend.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f1001067-7f5f-4a4b-ba15-b9fc0239723d/original=true/f1001067-7f5f-4a4b-ba15-b9fc0239723d.jpeg)
+
+ ### **Input Image**
+
+ This is where you select your input image for the entire process.
+
+ The original resolution doesn’t matter—you don’t need to resize, downscale, or upscale anything beforehand. If you’re only doing inpainting or detailing, the image will keep its original resolution. However, during **img2img transfer** (when generating a new image based on the input), the **aspect ratio will be preserved**, but the **smallest side will be resized to 1024 pixels**. This is because SDXL models and their derivatives were trained on 1024×1024 images, and outputs at this scale generally look the best—regardless of aspect ratio.
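
The resize described above is just aspect-ratio-preserving scaling. A minimal sketch, with the assumption that dimensions are rounded to multiples of 8 (the exact rounding used by the workflow's resize node may differ):

```python
def resize_for_sdxl(width: int, height: int, target_short_side: int = 1024) -> tuple[int, int]:
    """Scale so the shortest side becomes target_short_side, keeping the aspect ratio.

    Rounding to multiples of 8 is an assumption here (latent-space models expect
    dimensions divisible by 8); the workflow's node may round differently.
    """
    scale = target_short_side / min(width, height)
    new_w = round(width * scale / 8) * 8
    new_h = round(height * scale / 8) * 8
    return new_w, new_h

# A 3000x2000 landscape input keeps its aspect ratio:
print(resize_for_sdxl(3000, 2000))  # -> (1536, 1024)
```
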
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/481385ae-839f-4574-9f9f-e9bf64bcbe93/width=525/481385ae-839f-4574-9f9f-e9bf64bcbe93.jpeg)
+
+ ### **Sampler Settings**
+
+ This is where you set the sampler settings for the workflow.
+
+ **CLIP Skip**, **CFG/Guidance**, **Steps**, **Sampler**, and **Scheduler** are applied globally—they affect both the **detailer** and the **img2img transfer** (if you decide to use it).
+
+ **Denoise** and **Seed** settings, on the other hand, are only used for the **img2img transfer**. That's because we'll have individual denoise controls for each body part later on in the **"Detailer Prompts"** group.
+
+ The **"Hi-Res Fix Denoise"** slider is only active when **"Enable Hi-Res Fix"** is turned on in the **"General Function Control"** group. It controls how much of the original image is overwritten—lower values mean fewer changes. Since a small amount of denoise is needed for the sampler to work properly, I recommend a value between **0.25 and 0.35**.
+
+ Lower denoise values also reduce **VRAM** and **GPU usage**.
+
+ ![b7ada8089bcd12ca32f29239626f70c3.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2a1b326e-ab0c-4aa4-92f8-8c798531d655/original=true/2a1b326e-ab0c-4aa4-92f8-8c798531d655.jpeg)
+
+ ### **Detailer Function Control**
+
+ In this section, you simply choose which detailer functionalities you'd like to use.
+
+ You can enable or disable each individual body part for detailing or inpainting. If you're not sure what detailing actually does, check out the **"Detailer"** group explanation further down in the guide.
+
+ You also have the option to activate a **custom prompt** and a **LoRA** for each specific body part. Prompts can be edited later in the **"Detailer Prompts"** group, while the LoRAs can be changed in the **"Detailer LoRAs"** group.
+
+ This setup lets you do things like change the **eye color** using a specific prompt and apply a higher **denoise value** just to that area. You could also use a LoRA specifically designed for **improving hands** during hand detailing to fix bad hands, for example.
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ccb04376-ac77-4a24-a1cb-0add4758b129/width=525/ccb04376-ac77-4a24-a1cb-0add4758b129.jpeg)
+
+ ### **General Function Control**
+
+ In this section, you can toggle some general features and processes.
+
+ **Upscaling** uses the model selected in the **"Model Backend"** group. If you're working with a high-res image and only using the detailer, it's best to turn upscaling off—otherwise, a 4096×4096 image could become 16384×16384. Higher resolutions also slow down the detailing process.
+
+ For **img2img transfer**, I recommend leaving upscaling on, since the input is resized to 1024 on the shortest side, and this helps bring it back up after generation.
+
+ You can enable **Hi-Res Fix** to resample the upscaled image with a low denoise value, repainting it for higher resolution and improved quality. It’s recommended to use **"Color Fix"** alongside it to preserve the original colors, as resampling can sometimes reduce contrast.
+
+ The **Start Quality Prompt** (in the upcoming **"General Prompt Control"** group) automatically prepends itself to every detailer prompt. It’s useful for checkpoints that need specific quality tags.
+
+ For example:
+ Start Prompt: `masterpiece, best quality, absurdres,`
+ Eyes Prompt: `brown eyes`
+ Resulting prompt: `masterpiece, best quality, absurdres, brown eyes`
+
+ Make sure it ends with a comma and a space for clean merging. You can preview it in the **"Debug"** group.
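
The merge above is plain string concatenation; a tiny sketch (`build_detailer_prompt` is a hypothetical helper — in the workflow this is done by a concatenation node, not Python):

```python
def build_detailer_prompt(start_quality: str, part_prompt: str) -> str:
    # The start quality prompt should already end with ", " so that plain
    # concatenation yields a clean comma-separated tag list.
    return start_quality + part_prompt

merged = build_detailer_prompt("masterpiece, best quality, absurdres, ", "brown eyes")
print(merged)  # -> masterpiece, best quality, absurdres, brown eyes
```
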
+
+ The **Upscale Factor** determines how much the image will be upscaled **if upscaling is enabled**—**1.0** means no upscaling at all.
+
+ **General LoRA Control** toggles global LoRAs from the **"LoRAs"** group. These apply throughout the whole process—great for adding a consistent style or boosting details using a general detailer LoRA.
+
+ ![8d186152f00b6838480098a763ddbae0.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bce7072b-2762-4ddd-ac61-35e506612b4d/original=true/bce7072b-2762-4ddd-ac61-35e506612b4d.jpeg)
+
+ ### **General Prompt Control**
+
+ As mentioned in the **"General Function Control"** group, the **Start Quality Prompt** automatically prepends itself to all detailer prompts—this is completely optional and can be turned off in the previously mentioned group node.
+
+ The **Negative Prompt**, on the other hand, is used globally for both **img2img transfer** and all **detailers**, and is a required part of the workflow.
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0b346286-9e8a-40c9-beb4-4144d527d5fe/width=525/0b346286-9e8a-40c9-beb4-4144d527d5fe.jpeg)
+
+ ### **Detailer Prompts**
+
+ In this section, you can prompt individual body parts. If you're not sure what the Detailer does, check out the **General Term Explanation** at the top of this guide.
+
+ These prompts tell the Detailer what to generate in the regions detected by the Ultralytics model. This is especially helpful if you want to enhance specific features—like changing nail polish color or eye color. It also works well when using a LoRA targeted at a particular body part, allowing you to apply the trigger word only where it's needed.
+
+ Another important setting here is the **Denoise** value.
+
+ Denoise controls how much of the original shape, form, and color will be replaced:
+
+ * A **higher value** will completely overwrite the area.
+
+ * A **lower value** will preserve the original form and just enhance it at higher resolution.
+
+
+ If the anatomy already looks good and you only want to improve quality, a Denoise value of **0.30 to 0.35** is recommended.
+ If the anatomy is off—like extra fingers—you can increase it to **0.50 or higher** and see if the results improve.
+
+ As a general rule:
+ **The higher the Denoise value, the more the Detailer will ignore what's already in that area.**
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b0886e68-2285-4521-bb7e-abffc7166ea5/width=525/b0886e68-2285-4521-bb7e-abffc7166ea5.jpeg)
+
+ ### **IPAdapter Control**
+
+ If you're not sure what IPAdapter does, check out the **General Term Explanation** at the top of this guide. But in short: it allows you to transfer the **style and/or composition** of a reference image into your generated image. This works by injecting the visual "likeness" of the reference into the CLIP model during generation. It's especially useful in the **img2img transfer group** to preserve the original style, and even more so during **detailing**, where you want inpainted areas to match the original look.
+
+ We won't be using IPAdapter for **composition copying**, since that's handled more effectively with **ControlNet** during img2img transfer.
+
+ Enabling **"Enable IPAdapter (Style Transfer)"** turns on the IPAdapter feature. You can think of it like activating a LoRA—it’s most effective when used **without** other LoRAs and with a **base model** that doesn’t already have a strong built-in style. You can control how much influence the adapter has using the **"IPAdapter Style Strength"** slider.
+
+ By default, IPAdapter uses the original image you selected in the **"Load Image"** group at the start of the workflow. But if you enable **"Alternative Style Image"**, you can choose a different reference image using the **"Alternative Style Image"** node. This is helpful when you're doing an img2img transfer but want the result to have a different style than the original.
+
+ If you enable the **"Low VRAM"** option **alongside IPAdapter**, your input image will be **scaled down** so that the **smallest side is 512px**. This helps reduce the amount of VRAM used when IPAdapter analyzes the image. This will **reduce the quality** of the IPAdapter results, so only use it if you have **less than 12GB of VRAM** and/or can’t run IPAdapter otherwise.
+
+ ![ipadapter.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c78ebf89-0905-4008-a193-151e4e2817f1/original=true/c78ebf89-0905-4008-a193-151e4e2817f1.jpeg)
+
+
+ **Important!**
+ One downside of using **IPAdapter** is that it can be difficult to change the **color palette** of the image if its influence is set too high. This is because IPAdapter acts like a **very aggressive LoRA**—it essentially copies everything it sees in the reference image, including colors, composition, and lighting.
+
+ To work around this, you have a couple of options:
+
+ * Add unwanted traits (like a specific hair color) to your **negative prompt**
+
+ * **Reduce the IPAdapter strength** to lessen its influence
+
+
+ If you want more advanced control, head down to the **"IPAdapter Tiled"** node in the **"Backend – IPAdapter"** group. This is where you can fine-tune how IPAdapter behaves:
+
+ * **Change the** `weight_type`
+ Try switching from `"strong style transfer"` to `"style transfer precise"` or just `"style transfer"`. These settings can significantly affect how tightly the final image follows the reference style.
+
+ * **Adjust** `start_at` **and** `end_at` **values**
+ These define **when** in the generation process IPAdapter is active. For example, setting them to `0.2` and `0.8` applies the influence during the middle part of the generation rather than the entire time. This lets you preserve the overall style while still allowing changes—like a different hair color—outside that influence window.
+
+ * **Modify the** `combine_embeds` **method**
+ This determines **how** the IPAdapter embedding is merged with the CLIP model. Options like `"concat"`, `"average"`, or `"norm average"` affect how strongly IPAdapter overrides what your prompt specifies. If it’s overriding too much, switching methods might give you more balanced results.
+
+
+ ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6b868453-a62f-4b4b-abac-f65ae8d64fe9/width=525/6b868453-a62f-4b4b-abac-f65ae8d64fe9.jpeg)
+
+ This node is explained in more detail [here](https://www.runcomfy.com/comfyui-nodes/ComfyUI_IPAdapter_plus/IPAdapterTiled#ipadapter-tiled-input-parameters), where you can also review what each setting does.
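
The `start_at`/`end_at` window can be pictured as a simple range check on normalized sampling progress. A sketch using the example values `0.2`/`0.8` from above (the real behavior lives inside the IPAdapterTiled node, not in user Python):

```python
def ipadapter_active(progress: float, start_at: float = 0.2, end_at: float = 0.8) -> bool:
    # progress is normalized sampling progress: 0.0 = first step, 1.0 = last step.
    # IPAdapter influences the generation only while progress is inside the window.
    return start_at <= progress <= end_at

print([ipadapter_active(p) for p in (0.0, 0.5, 1.0)])  # -> [False, True, False]
```

With a window like this, steps outside 0.2–0.8 follow only the prompt, which is why attributes such as hair color remain changeable while the overall style is preserved.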
291
+
292
+ ### **IMG2IMG Transfer**

This section lets you use any image as a base to create a new one with the **same composition and elements**, while still giving you creative control over what’s shown. In short, it allows you to **recreate or "copy"** an image with your own characters, styling, or prompts.
For examples, check out the **"Replace Character in Image"** scenario at the bottom of this guide.

Behind the scenes, this node group uses **automatic interrogation** of the original image with **Danbooru tags** to describe what’s visible. Then, **ControlNet** is used to lock in the composition and character placement.

If you're unfamiliar with ControlNet, refer to the **General Term Explanation** section at the top of the guide. For our use case, the important part is that you always **match the ControlNet model with its corresponding preprocessor** in the **"ControlNet Models"** node.

Let’s walk through the features in this group:

**Enable IMG2IMG Transfer**
This switches the workflow from detail-only mode to full **img2img transfer**. It lets you re-create the original image with different characters, styles, or other visual changes while keeping the overall composition, producing a completely new image instead of inpainting the original.
Use the **"Low VRAM"** option **TOGETHER** with **IMG2IMG Transfer** to reduce VRAM usage during pre-processing and when applying ControlNet. This can affect output quality (especially the anatomy of complicated poses), so only use it if you have **12GB VRAM or less**, or if you want **faster generation** and are okay with lower quality.

**Accuracy Slider** (also known as **ControlNet Strength**)
This controls how much the original image influences the final result.

* **0** means no influence (the prompt takes full control).

* **1** locks in the original composition completely.

A good balance is usually around **0.50–0.60**; I personally use **0.55**.
If you’ve used ControlNet in the A1111 WebUI, this is similar to choosing:

* “**My prompt is more important**” (below 0.50),

* “**Balanced**” (around 0.51–0.55),

* or “**ControlNet is more important**” (above 0.55).

Be aware: setting the strength too high will **limit your ability to change features** like hairstyle, clothing, or proportions (e.g., height or bust size).
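If it helps, the preset analogy can be sketched as a small function (the thresholds are just the rough ranges suggested above, not values read from the workflow):

```python
def controlnet_mode(strength: float) -> str:
    """Map a ControlNet strength (0-1) to its rough A1111 preset analogue."""
    if strength < 0.50:
        return "My prompt is more important"
    if strength <= 0.55:
        return "Balanced"
    return "ControlNet is more important"

print(controlnet_mode(0.55))  # Balanced
```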

Also note: the effect depends on the **ControlNet model and preprocessor** you're using.
For example, the **OpenPose preprocessor** only creates a skeleton of the character’s pose—it doesn’t lock in features like hair or body proportions. This means you can increase accuracy/strength without losing flexibility in how your character looks.

However, OpenPose can struggle with **complex poses** or **multiple characters**, so keep that in mind depending on your image.

The **IMG2IMG prompt** is prepended to the prompt generated from image interrogation. This is where you can add quality tags for the new image, as well as any character- or scene-specific details—like the name of the character you're swapping in, or the artist style you'd like to apply.

The **"Exclude tags from interrogation"** option lets you remove certain features from the automatic tag extraction.
By default, the interrogation process will try to identify everything in the original image—so even if you write a new prompt (e.g., a new character with different features), the system might still include the old tags from the image. If the weights align unfavorably, it could ignore parts of your new prompt and instead favor the extracted tags.

To avoid that, you can list the characteristics you want to exclude. For example, if the original image shows a girl with **short black hair**, **blue eyes**, and a **school uniform**, and you want to replace her with someone who has **long orange hair**, **brown eyes**, and is wearing **pajamas**, you would:

* Write `"orange hair, brown eyes, long hair, pajamas"` in the **IMG2IMG prompt**

* And write `"short hair, black hair, blue eyes, school uniform"` in the **"Exclude tags from interrogation"** field

This tells the workflow to leave those unwanted tags out of the final prompt.
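Conceptually, the exclusion step works like the following sketch (a simplified stand-in for illustration, not the actual workflow node code):

```python
def build_transfer_prompt(img2img_prompt: str, interrogated: str, excluded: str) -> str:
    """Drop excluded tags from the interrogated tag list, then prepend the user's prompt."""
    drop = {t.strip().lower() for t in excluded.split(",") if t.strip()}
    kept = [t.strip() for t in interrogated.split(",")
            if t.strip() and t.strip().lower() not in drop]
    prefix = img2img_prompt.strip().rstrip(",")
    return f"{prefix}, {', '.join(kept)}" if kept else prefix

print(build_transfer_prompt(
    "orange hair, brown eyes, long hair, pajamas,",
    "1girl, short hair, black hair, blue eyes, school uniform",
    "short hair, black hair, blue eyes, school uniform",
))
# orange hair, brown eyes, long hair, pajamas, 1girl
```

Note that exclusion only removes exact tag matches, which is why leftover near-duplicates are worth checking in the Debug group.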

To review the full prompt that will be used, check the **"Debug"** group—this is especially helpful for spotting leftover tags you might have missed.

For a full practical example, check the **Scenario** section at the end of this guide, where I walk through how to swap characters in an image step by step.

![img2imgtransfer.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ec7dd9e4-2fd7-4e5d-a6ef-6b45b98e26e0/original=true/ec7dd9e4-2fd7-4e5d-a6ef-6b45b98e26e0.jpeg)

**Important!**

A crucial part of achieving good **img2img transfer** results is choosing the **right ControlNet model and preprocessor**.
At the beginning of this guide, you'll find the **"Files"** section where you can download all the ControlNet models I recommend for **Illustrious-based models**, including **NoobAI** and its derivatives.

Each preprocessor detects the structure of your image differently. As a general rule:
**The more detailed the preprocessor, the more original features will be preserved.**
This also means that if your source image has chibi-style features or a younger character design, it may be harder to significantly alter the body type without reducing ControlNet’s overall effectiveness.

For **accurate preservation** of the original look, I recommend:

* **CannyEdge** or **PyraCanny** (more forgiving) as preprocessors, used with the **Canny** ControlNet model

* **AnimeLineArt** preprocessor with the **AnimeLineArt** ControlNet model

If you want more **flexibility** in adjusting features like body proportions or facial details, consider using a **depth-based preprocessor**:

* **MiDaS-DepthMap** or **Depth Anything V2**, paired with the **DepthMiDaS V1.1** ControlNet model

For cases where you **only care about the character's position**—and not the background, clothing, or fine details—use:

* **OpenPose** preprocessor with the **OpenPose** ControlNet model
  This will give you a simple skeleton structure of the pose (including hand positions), leaving you maximum control over everything else in the image.

### Tips on ControlNet Strength:

* With **less detailed preprocessors** like OpenPose, you can increase **ControlNet Strength (Accuracy)** to better lock in the character’s pose.

* For **Canny** and **Depth**, I recommend a strength of **0.51–0.58**—this keeps the overall composition while still allowing changes to details.

* If the output isn’t what you expected—like the pose being off or your prompt not taking effect—**try a new seed** before adjusting strength. Sometimes a bad seed is all it takes to throw things off.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4e493e40-64eb-4a02-a4c4-95d6e41a3c10/width=525/4e493e40-64eb-4a02-a4c4-95d6e41a3c10.jpeg)

You can preview the pose/composition generated by the preprocessor in the **"Debug"** group by clicking the square in the **"ControlNet Preview"** node.

### **Before & After**

This section lets you view the **before and after** of the generation process.

Depending on whether you have **upscaling** and/or **Hi-Res Fix** enabled, the preview will appear on the left. The image on the right shows the **final result** after all selected detailing processes have been applied. Once the right image appears, it has also been saved to your **output folder**.

If you want to preview the upscaled image **before** Hi-Res Fix is applied, check the **"Pre-HiRes Fix"** node in the **Debug** section.

![72cca51f038c3dffb7574a8766769e64.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c934d1b9-f7ac-47fd-85ba-058220a3a4ae/original=true/c934d1b9-f7ac-47fd-85ba-058220a3a4ae.jpeg)

These are the results from using only the **face detailer** and **face prompt** with the default settings enabled:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/42d542f6-b6ef-4824-8db2-f528d89f7833/width=525/42d542f6-b6ef-4824-8db2-f528d89f7833.jpeg)

### **Detection Models**

In this section, you select the **detection models** for each specific body part.

You can find my recommended models either in the **"Files"** section at the top of this guide or directly inside the workflow under **"Recommended Ultralytics Model"**, located to the left of the layout. There are also many great options available on **[Civitai](https://civitai.com/search/models?modelType=Detection&sortBy=models_v9)** if you want to explore further.

You only need to load the Ultralytics models you plan to use—but keep in mind:
**If you activate a detailer for a body part but haven’t selected a detection model for it, image generation will fail.**

Ultralytics models are trained to detect specific body parts or features—like hands, faces, clothing, or tails—and are used to automatically **mask those areas** so the detailer knows where to inpaint. For more background on how this works, check the **General Term Explanation** at the top of the guide.

If you don’t plan to use the **nose detailer** but want to use that slot for something else—like detecting **headwear**—you can absolutely swap it out. The node names themselves don’t matter; what actually determines what gets detected is your **Ultralytics model** and the prompt you assign to that detailer in the **Detailer Prompts** section.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cde52c9e-e454-402c-9ba0-a4e210401cb1/width=525/cde52c9e-e454-402c-9ba0-a4e210401cb1.jpeg)

### **LoRAs**

In this area, you can select **general LoRAs** to apply across the entire process. These LoRAs will be used for **both image generation and all detailers**, so they affect the whole workflow.

Only enable LoRAs here if you want to apply a consistent **style** or **character** to the entire image from start to finish.

If you're unsure about the difference between **CLIP strength** and **model strength**, it's best to keep both set to the **same value** for consistent results.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f57e706b-8636-4df6-a978-b22b709860d3/width=525/f57e706b-8636-4df6-a978-b22b709860d3.jpeg)

### **Detailer LoRAs**

These **detailer LoRAs** are applied **only** to the specific body part being detailed. This is useful if you have a LoRA trained to enhance certain features—like eyes, hands, or other detailed areas.

If you want to improve quality but don’t have a LoRA tailored to that body part, you can also use a **general detail-enhancer**, which you’ll find in the **"Recommended Detailer LoRAs"** node inside the workflow.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/eb03436e-913c-46c4-8aee-0fec15d828b8/width=525/eb03436e-913c-46c4-8aee-0fec15d828b8.jpeg)

### **Detailer**

This is where the **detailing magic** happens. The Ultralytics model detects the body part it was trained to recognize, and that area is then **inpainted at the final resolution** to fix blurry details, incorrect anatomy, or off colors.

Each detailer comes with **recommended default settings**, but you still have full control over **CFG, sampler, scheduler, and steps** via the **"Sampler Settings"** group at the top of the workflow.
The other key setting is the **denoise value**, which you can adjust in the **"Detailer Prompts"** group.

For a full breakdown of what each parameter does, check out the documentation [here](https://www.runcomfy.com/comfyui-nodes/ComfyUI-Impact-Pack/FaceDetailer).

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0b9bfb68-d3a4-4f86-8e74-728778080c5d/width=525/0b9bfb68-d3a4-4f86-8e74-728778080c5d.jpeg)

### **Debug**

This section is here to help you **analyze any issues** that might come up during image generation.

The note inside the "Debug" group already provides a solid explanation of most of what's happening under the hood.

The **"Show full IMG2IMG Transfer Prompt"** node displays the combined result of your **IMG2IMG transfer prompt** and the **interrogated prompt** from your original image.

Use this to:

* **Check which tags** were extracted from the original image

* **Remove unwanted tags** by adding them to the **"Exclude from Interrogation"** node in the **"IMG2IMG Transfer"** group

* **Verify prompt formatting**, especially making sure your IMG2IMG prompt ends with a **comma and a space** to ensure a clean, cohesive prompt
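A tiny helper in the spirit of that formatting check (hypothetical, not part of the workflow) would be:

```python
def normalize_prefix(prompt: str) -> str:
    """Ensure a prompt fragment ends with ', ' so appended tags join cleanly."""
    prompt = prompt.strip().rstrip(",")
    return prompt + ", " if prompt else ""

print(repr(normalize_prefix("masterpiece, best quality")))
# 'masterpiece, best quality, '
```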

The **"Show example eye prompt"** node is especially useful if you're using the **"Start Quality Prompt"** from the **"General Function Control"** group. It shows how the prompts are being combined so you can better understand what the detailer is actually working with.

Next are all the **Detailers** for each body part. Here, you can check the individually processed sections of the image if you have detailers enabled. This allows you to spot any issues that might occur during detailing.

Clicking the rectangle next to a detailer’s name will show its preview:

* If the preview shows the **full image**, the detailer wasn’t activated.

* If the preview is **completely black**, the Ultralytics model didn’t detect any body parts matching what it was trained to find.

If that happens, try using a different **detection model** or adjusting the **detailer’s parameters**. For guidance, see the **"Detailer Parameters"** node located to the left.

**ControlNet Preview** shows the image generated during **IMG2IMG transfer**, which serves as the base for your final image.

**Pre-HiRes Fix** displays a preview of the image before the **Hi-Res Fix** process is applied.

![7e728fe971c10bd5eac751f651e8ab22.png](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f7d171b1-5f3d-4959-9526-71463c377c08/original=true/f7d171b1-5f3d-4959-9526-71463c377c08.jpeg)

**Scenarios:**
--------------

We will be using this sample image as the base for these examples:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ed074044-6adb-4310-a9b3-757e18b4a357/width=525/ed074044-6adb-4310-a9b3-757e18b4a357.jpeg)

### Detail features of image

If you simply want to **detail an existing image**, it's as easy as loading your image into the **"Load Image"** node, selecting the **detailers** you want to use, and enabling any other processes you'd like to include.

You can also generate a completely new image with a different **character** or **style** by exploring the **"Replace Character in Image"** or **"Change Style"** scenarios included in this guide.

For best results, I recommend using a **neutral checkpoint**—one that doesn’t have strong styling baked in. This helps preserve the original image more faithfully. Pairing it with **IPAdapter** further maintains the original style, making it much easier to apply detailing without needing to test multiple checkpoints or LoRAs to get the look right.

With just a few simple settings, you can dramatically improve image quality:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1e3d4583-cb91-44f0-a31f-85badaabf890/width=525/1e3d4583-cb91-44f0-a31f-85badaabf890.jpeg)

The results below show how **blurry areas in the face and hands**—often caused by upscaling—have been successfully fixed:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7fa6a143-7d87-44b4-ac7d-4925a5e45ed8/width=525/7fa6a143-7d87-44b4-ac7d-4925a5e45ed8.jpeg)

### Replace character in image

The goal of this scenario is to generate an image that **preserves the style of the original** while **changing the character**. To do this, we’ll use **IMG2IMG Transfer** to create a new image with a custom prompt, and **IPAdapter** to transfer the original image’s style.

First, select a **checkpoint that doesn’t have heavy styling baked in**. In this example, I’m using the **[NoobAICyberFix](https://civitai.com/models/913998?modelVersionId=1122850)** checkpoint—it has great anatomy and keeps styling minimal, which makes it a solid base for this kind of task.

Next, activate the **IPAdapter** and set a style strength. You may need to experiment with this value—IPAdapter tends to **strongly copy the original color palette**, which can make it difficult to change features like **hair color** if the strength is set too high.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0f85a6f9-ef19-4ecd-bf11-a7e5cf4792a0/width=525/0f85a6f9-ef19-4ecd-bf11-a7e5cf4792a0.jpeg)

Next, we’ll activate the **IMG2IMG Transfer** and choose a **Canny preprocessor** along with a compatible **ControlNet model**. In this example, I’m using the **PyraCanny PreProcessor**, which gives a highly accurate representation of the original composition while still allowing for minor changes. I’ve set the **accuracy (strength)** to **0.55** for the first run to strike a balance between structure and flexibility.

Now I’ll add my **IMG2IMG prompt**, including some quality tags at the beginning. For this example, I want to swap the original character with **Inoue Orihime** from _Bleach_, so my prompt looks like this:

`masterpiece, best quality, absurdres, amazing quality, inoue orihime, brown eyes, orange hair, long hair, large breasts,`

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/0b34ae85-a5cc-41f0-a988-0c5f837c5001/width=525/0b34ae85-a5cc-41f0-a988-0c5f837c5001.jpeg)

The result after image generation looks like this:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/99df5102-7d35-4304-9d62-fb604833ba22/width=525/99df5102-7d35-4304-9d62-fb604833ba22.jpeg)

As we can see, the result already looks pretty good—but there are still a few issues, like some **discoloration in the hair** and the **purple eyes**, which don’t match the prompt.

The cause is easy to identify once we head down to the **Debug** section of the workflow and check the **"Show full IMG2IMG Transfer Prompt"** node. There, we can see the full prompt after interrogation, which looks like this:

`masterpiece, best quality, absurdres, amazing quality, inoue orihime, brown eyes, orange hair, long hair, large breasts, 1girl, solo, long hair, looking at viewer, blush, smile, bangs, skirt, brown hair, school uniform, standing, purple eyes, ponytail, short sleeves, cowboy shot, pleated skirt, serafuku, sailor collar, blue skirt, neckerchief, blue background, index finger raised, finger to mouth`

Clearly, some tags—like **brown hair** and **purple eyes**—are still being picked up from the original image. To fix this, I’ll add any tags I don’t want in the final result to the **"Exclude Tags from Interrogation"** node. This helps ensure the output sticks closer to my intended prompt.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/77c9ecdc-7372-485e-8c0e-09d6c5d76048/width=525/77c9ecdc-7372-485e-8c0e-09d6c5d76048.jpeg)

After adding this change, my result now looks like this:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7a483ca2-81bb-4713-a63f-f5cfdee4deb7/width=525/7a483ca2-81bb-4713-a63f-f5cfdee4deb7.jpeg)

If I now want to change the **hairstyle**, I have a few options:

1. **Decrease the Accuracy (Strength)**
   Lowering the ControlNet strength gives the model more freedom to follow your prompt rather than sticking closely to the original image.

2. **Switch to the OpenPose PreProcessor and ControlNet Model**
   Since this is a relatively simple pose, OpenPose can easily detect it. Using a **less detailed preprocessor** like OpenPose gives you more flexibility to change visual features such as hair, clothing, or body proportions.

3. **Explicitly prompt the desired hairstyle**
   Add the specific hairstyle you want directly to your prompt. This can help override details retained from the original image—especially when combined with reduced ControlNet strength.

Simply adding **"straight hair"** to the IMG2IMG prompt results in an output where the **body proportions from the original image are preserved**, but the **hairstyle is updated**:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dc883c6b-0e64-4a7d-825a-084b93274eb5/width=525/dc883c6b-0e64-4a7d-825a-084b93274eb5.jpeg)

Switching the **PreProcessor** to **OpenPosePreprocessor** and using the **OpenPose ControlNet model** allows you to **ignore the original body proportions entirely** and focus solely on preserving the pose. This gives your checkpoint full freedom to depict the character according to how it was trained, without being constrained by the original image’s features:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/52a92310-9a31-4c36-abcc-73885500bb70/width=525/52a92310-9a31-4c36-abcc-73885500bb70.jpeg)

To show how easy it is to **replace characters** this way, you can simply change `inoue orihime, brown eyes, orange hair, long hair, large breasts,` to `shihouin yoruichi, yellow eyes, slit pupils, medium breasts,` and the result looks like this (using OpenPose for ControlNet):

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ae1e27a3-18c1-4008-9243-2c33c55a97fc/width=525/ae1e27a3-18c1-4008-9243-2c33c55a97fc.jpeg)

### Change style of image

Changing the **style of an entire image** is incredibly easy with this workflow.

Start by selecting all the processes you want to include—like **upscaling**, **detailers**, and any other enhancements—then simply activate the **IMG2IMG Transfer** function.

If your only goal is to **recreate the image in a different style**, I recommend the following:

* Leave the **"Exclude Tags from Interrogation"** node **empty**

* Add only the **default quality tags** from your checkpoint or LoRAs to the **IMG2IMG prompt**

* Use a **high-accuracy preprocessor**, such as **CannyEdgePreprocessor**, along with the **Canny** ControlNet model

If you want to **faithfully replicate the composition** without changing anything else, increase the **ControlNet Accuracy (Strength)**. I usually set it to **0.55**, but in this case—since I want to copy everything exactly—I’ve increased it to **0.65**.

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f92e8f51-2664-43fc-b2fc-648a7d84aa95/width=525/f92e8f51-2664-43fc-b2fc-648a7d84aa95.jpeg)![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/21943822-c527-4909-a41f-f5f5bfd24027/width=525/21943822-c527-4909-a41f-f5f5bfd24027.jpeg)

Together with an anime-styled LoRA that I’m using, these settings result in this image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e34e53d5-9fa5-46cd-a95d-724ec03dcc10/width=525/e34e53d5-9fa5-46cd-a95d-724ec03dcc10.jpeg)

Next, I checked the **"Show full IMG2IMG Transfer Prompt"** node in the **"Debug"** group and saw the following output:

`masterpiece, best quality, absurdres, amazing quality, 1girl, solo, long hair, looking at viewer, blush, brown hair, smile, bangs, skirt, school uniform, standing, purple eyes, ponytail, short sleeves, cowboy shot, pleated skirt, serafuku, sailor collar, blue skirt, neckerchief, blue background, index finger raised, finger to mouth`

To finalize the result, I added a couple of elements the interrogation either missed or misidentified. In this case, I added `"stars, vignetting, black hair"` to the **IMG2IMG prompt** and added `"brown hair"` to the **"Exclude Tags from Interrogation"** node—since I wanted to adjust the hair color slightly.

I ran it again, and here’s the final result:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a94d1355-4fef-43ec-b017-a9589b6a6753/width=525/a94d1355-4fef-43ec-b017-a9589b6a6753.jpeg)

If you want to change the **style of an image** without using a LoRA, you can use the **IPAdapter** instead. By enabling **"Alternative Style Image"** in the **"IPAdapter Control"** node, you can copy the style from a completely different reference image.

In this example, I used the **alternative image** shown in my settings above. I simply activated both **"IPAdapter"** and **"Alternative Style Image"**, and ended up with this result:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/38fe7218-2943-4b16-96bb-ef09fdcbbdfb/width=525/38fe7218-2943-4b16-96bb-ef09fdcbbdfb.jpeg)

FAQ:
----

### SAMLoader 21: Value not in list: model_name: 'sam_vit_b_01ec64.pth' not in []

If you get this error, it's because your ComfyUI installation is missing the SAM ViT-B model.
This is usually included in the standard ComfyUI installation but can be left out of specific versions like the portable build. You can fix this issue by doing either of the following:

* Go into your ComfyUI Manager, click on Model Manager (should be somewhere in the middle), then search for the ["ViT-B SAM" model](https://i.gyazo.com/5cc96ed026eeb3368affe74db032a4eb.png) and install it.

* Download the model [here](https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/sams/sam_vit_b_01ec64.pth), for example, and move it into your `models/sams` folder.
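If you prefer the command line, a manual download might look like this (assuming you run it from your ComfyUI root folder; the `resolve` URL is the direct-download form of the link above):

```shell
# Create the target folder if it doesn't exist, then fetch the checkpoint.
mkdir -p models/sams
curl -L -o models/sams/sam_vit_b_01ec64.pth \
  "https://huggingface.co/datasets/Gourieff/ReActor/resolve/main/models/sams/sam_vit_b_01ec64.pth"
```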

### I'm missing one of these nodes: "workflowStart quality prompt", "workflowEnd quality prompt" or "workflowControlNet Models"

This error appears when you either haven't installed all the necessary ComfyUI custom nodes or your ComfyUI has not cleared its cache. If you have this issue, do the following:

* Make sure you have all the custom nodes installed as listed in the workflow, on the model page (check for your specific version in the description) or at the top of this guide.

* Check the ComfyUI console where you started ComfyUI and make sure none of the nodes say (IMPORT FAILED) when loading. If one of them does, do the following:

  * Inside ComfyUI, open your [ComfyUI-Manager](https://github.com/Comfy-Org/ComfyUI-Manager) (install it if you haven't yet)

  * Click on Custom Nodes Manager

  * In the top left corner, change the filter to "Import failed"

  * Wait till it loads, and on the node(s) where the import failed, click "Try fix"

  * After it's done loading, close down ComfyUI (including the console where you started it)

  * Start ComfyUI again; once it's started, open it in the browser and make sure to refresh the page once.

* If you have all custom nodes installed and none of them fail, open your [ComfyUI-Manager](https://github.com/Comfy-Org/ComfyUI-Manager) and click on "Custom Nodes Manager". Click "Filter" in the top left corner and change it to "Update". ComfyUI will now check whether any of your nodes can be updated; if an update is available, make sure to update every node.

* Close ComfyUI completely, including the console where you started it.
  Start ComfyUI again, and once it's done loading, open the ComfyUI address in your browser. Close all open workflows, click the "refresh" button (default is the F5 key) and then open the workflow again.

* ComfyUI should now have cleared its cache, and when you open/drag the workflow into ComfyUI again it should work without problems.

### **Loop Detected TypeError: Cannot read properties of undefined (reading '0') at...**

If you're getting this error, it's most likely because of the recent large changes to ComfyUI that replace node groups with subgraphs. This workflow still uses node groups for now, since they're the simpler solution, but it will be upgraded once node groups are deprecated.
The issue is caused by the cg-use-everywhere custom node. So if you're getting this error message, make sure you do the following:

* Update ComfyUI to the newest version

* Update the frontend of your ComfyUI instance

  * You can do this by adding `--front-end-version Comfy-Org/ComfyUI_frontend@latest` as a start parameter of your `main.py` (check how [here](https://comfyui-wiki.com/en/tutorial/basic/how-to-update-comfyui#how-to-upgrade-the-comfyui-web-frontend)).
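For example, a launch command with that parameter added might look like this (the exact path to `main.py` depends on your install):

```shell
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
```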

* Open your ComfyUI-Manager and make sure all custom nodes are up to date: go into "Custom Nodes Manager", click "Filter" in the top left corner and change it to "Update". ComfyUI will now check whether any of your nodes can be updated; if an update is available, make sure to update every node (especially cg-use-everywhere).

* Close ComfyUI completely, including the console where you started it.
  Start ComfyUI again, and once it's done loading, open the ComfyUI address in your browser. Close all open workflows, click the "refresh" button (default is the F5 key) and then open the workflow again.

* ComfyUI should now have cleared its cache, and when you open/drag the workflow into ComfyUI again it should work without problems.

Thank you for reading the guide, or at least the parts that help you. If you have any more questions, feel free to leave them as a comment on either the model page or here in the guide.
As more questions come my way, I'll make sure to add answers to the FAQ.
So if you have an issue that is not listed in the FAQ right now, just ask away and I'll help to the best of my abilities!
Enjoy generating ♥️
workflows/IMG2IMG/v4.3/img2img-1.png ADDED (Git LFS, 263 kB)
workflows/IMG2IMG/v4.3/img2img-2.png ADDED (Git LFS, 462 kB)
workflows/IMG2IMG/v4.3/img2img-fullpreview.png ADDED (Git LFS, 137 kB)
workflows/IMG2IMG/v4.3/workflow-img2img.png ADDED (Git LFS, 1.8 MB)
workflows/IMG2IMG/v4.4/IMG2IMG-ADetailer-v4.4-vslinx.json ADDED (diff too large to render; see raw diff)
workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012249.png ADDED (Git LFS, 4.53 MB)
workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012250.png ADDED (Git LFS, 3.95 MB)
workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012251.png ADDED (Git LFS, 4.5 MB)
workflows/IMG2IMG/v4.4/IMG2IMG_ADetailer_2025-08-27-012251_01.png ADDED (Git LFS, 4.1 MB)
workflows/IMG2IMG/v4.4/changelog.md ADDED
+ - implementation of new image loading node (by me) to be able to multi-select one or more images for a run (will be executed one after another)
2
+ - dynamic prompts/wildcards now supported through Impact-Pack, syntax [here](https://github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ImpactWildcard.md) - txt files have to be in the folder BEFORE comfy starts to be recognized
3
+ - introduction of refiner functionality
4
+ - switch for turning off seperate VAE to use baked-in VAE of checkpoint
5
+ - replaced the single lora loaders and their 6 activation buttons with rgthree's lora stack loader to load as many as you want (same for detailer loras)
6
+ - moved all model selection stuff into second row for more streamlined process
7
+ - removal of all group nodes & abstraction through subgraphs
8
+ - fixed v-pred model bug that didn't apply v-parameterization
9
+ - fixed modelname not saving correctly for civitai
10
+ - overhauled notes in the workflow for better understanding & improving your results
11
+ - global clipskip now happens AFTER lora loading (correct implementation, no difference to outcome quality)
12
+
13
+ _wanted to add recommended detailer values you can just activate/deactivate (more vram but better results) but this is currently not possible since multiple custom_nodes still cause issues with subgraphs being bypassed_
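To illustrate the wildcard syntax linked in the changelog above (file and category names here are hypothetical, not part of the workflow): a text file placed in the wildcards folder before ComfyUI starts holds one option per line, and the prompt pulls a random line with the double-underscore syntax; `{a|b|c}` picks one option inline.

```
# wildcards/haircolor.txt (hypothetical example file)
blonde hair
black hair
silver hair

# in the prompt:
1girl, __haircolor__, {smiling|serious}, outdoors
```

Refer to the linked ImpactWildcard tutorial for the full syntax, including nesting and weighting.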
workflows/IMG2IMG/v4.4/workflow-txt2img.png ADDED

Git LFS Details

  • SHA256: 37a7c526ae36c87edb1aa5dc6f21b4e19d9902b876babe0663bcf2e09f34d6b5
  • Pointer size: 131 Bytes
  • Size of remote file: 691 kB
workflows/IMG2IMG/v4.4/workflow.png ADDED

Git LFS Details

  • SHA256: a75b5f03773977d6c99d715c15197a1d83c22e34ae5e18e11f062bfea4e4ca9a
  • Pointer size: 132 Bytes
  • Size of remote file: 2.02 MB
workflows/IMG2IMG/v4.5/IMG2IMG-ADetailer-v4.5-vslinx.json ADDED
The diff for this file is too large to render. See raw diff
 
workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142407.png ADDED

Git LFS Details

  • SHA256: 45b8aeb4f36ab67c23f8d64cce0b5e1e1719b3d2c0211ceee60d7e977eba3698
  • Pointer size: 132 Bytes
  • Size of remote file: 6.28 MB
workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142407_01.png ADDED

Git LFS Details

  • SHA256: be8a49eb5f1ceab51d8d53a9e98d9bcb62728475a7afa257a87a2707375d1de6
  • Pointer size: 132 Bytes
  • Size of remote file: 7.95 MB
workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142408.png ADDED

Git LFS Details

  • SHA256: c2ac473738e27e41ef02b317fcd59798de9d182f8e5d20ced86e053df7953d91
  • Pointer size: 132 Bytes
  • Size of remote file: 8.3 MB
workflows/IMG2IMG/v4.5/IMG2IMG_ADetailer_2025-09-06-142409.png ADDED

Git LFS Details

  • SHA256: fd9076da7cf9b34e8aa164ebe0537ff8d41067631cbacb35df895f1b5e30705b
  • Pointer size: 132 Bytes
  • Size of remote file: 5.57 MB