Add YAML metadata to fix model card warning
README.md CHANGED
@@ -1,252 +1,51 @@
-Go to [Update page](./UPDATES.md) to follow updates
-
-## Using ComfyUI Manager (recommended):
-Install [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) and follow the steps described there to install this repo.
-- With system Python
-- Run `pip install -r requirements.txt`
-- Start ComfyUI
-
-Please note that this repo only supports preprocessors that make hint images (e.g. stickman, canny edge).
-All preprocessors except Inpaint are integrated into the `AIO Aux Preprocessor` node.
-This node lets you quickly get a preprocessor, but it does not expose each preprocessor's own threshold parameters.
-You need to use a preprocessor's dedicated node to set its thresholds.
-
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| Binary Lines | binary | control_scribble |
-| Canny Edge | canny | control_v11p_sd15_canny <br> control_canny <br> t2iadapter_canny |
-| HED Soft-Edge Lines | hed | control_v11p_sd15_softedge <br> control_hed |
-| Standard Lineart | standard_lineart | control_v11p_sd15_lineart |
-| Realistic Lineart | lineart (or `lineart_coarse` if `coarse` is enabled) | control_v11p_sd15_lineart |
-| Anime Lineart | lineart_anime | control_v11p_sd15s2_lineart_anime |
-| Manga Lineart | lineart_anime_denoise | control_v11p_sd15s2_lineart_anime |
-| M-LSD Lines | mlsd | control_v11p_sd15_mlsd <br> control_mlsd |
-| PiDiNet Soft-Edge Lines | pidinet | control_v11p_sd15_softedge <br> control_scribble |
-| Scribble Lines | scribble | control_v11p_sd15_scribble <br> control_scribble |
-| Scribble XDoG Lines | scribble_xdog | control_v11p_sd15_scribble <br> control_scribble |
-| Fake Scribble Lines | scribble_hed | control_v11p_sd15_scribble <br> control_scribble |
-| TEED Soft-Edge Lines | teed | [controlnet-sd-xl-1.0-softedge-dexined](https://huggingface.co/SargeZT/controlnet-sd-xl-1.0-softedge-dexined/blob/main/controlnet-sd-xl-1.0-softedge-dexined.safetensors) <br> control_v11p_sd15_softedge (Theoretically) |
-| Scribble PiDiNet Lines | scribble_pidinet | control_v11p_sd15_scribble <br> control_scribble |
-| AnyLine Lineart | | mistoLine_fp16.safetensors <br> mistoLine_rank256 <br> control_v11p_sd15s2_lineart_anime <br> control_v11p_sd15_lineart |
-
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| MiDaS Depth Map | (normal) depth | control_v11f1p_sd15_depth <br> control_depth <br> t2iadapter_depth |
-| LeReS Depth Map | depth_leres | control_v11f1p_sd15_depth <br> control_depth <br> t2iadapter_depth |
-| Zoe Depth Map | depth_zoe | control_v11f1p_sd15_depth <br> control_depth <br> t2iadapter_depth |
-| MiDaS Normal Map | normal_map | control_normal |
-| BAE Normal Map | normal_bae | control_v11p_sd15_normalbae |
-| MeshGraphormer Hand Refiner ([HandRefiner](https://github.com/wenquanlu/HandRefiner)) | depth_hand_refiner | [control_sd15_inpaint_depth_hand_fp16](https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/blob/main/control_sd15_inpaint_depth_hand_fp16.safetensors) |
-| Depth Anything | depth_anything | [Depth-Anything](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints_controlnet/diffusion_pytorch_model.safetensors) |
-| Zoe Depth Anything <br> (Basically Zoe but the encoder is replaced with DepthAnything) | depth_anything | [Depth-Anything](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints_controlnet/diffusion_pytorch_model.safetensors) |
-| Normal DSINE | | control_normal/control_v11p_sd15_normalbae |
-| Metric3D Depth | | control_v11f1p_sd15_depth <br> control_depth <br> t2iadapter_depth |
-| Metric3D Normal | | control_v11p_sd15_normalbae |
-| Depth Anything V2 | | [Depth-Anything](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints_controlnet/diffusion_pytorch_model.safetensors) |
-
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| DWPose Estimator | dw_openpose_full | control_v11p_sd15_openpose <br> control_openpose <br> t2iadapter_openpose |
-| OpenPose Estimator | openpose (detect_body) <br> openpose_hand (detect_body + detect_hand) <br> openpose_faceonly (detect_face) <br> openpose_full (detect_hand + detect_body + detect_face) | control_v11p_sd15_openpose <br> control_openpose <br> t2iadapter_openpose |
-| MediaPipe Face Mesh | mediapipe_face | controlnet_sd21_laion_face_v2 |
-| Animal Estimator | animal_openpose | [control_sd15_animal_openpose_fp16](https://huggingface.co/huchenlei/animal_openpose/blob/main/control_sd15_animal_openpose_fp16.pth) |
-
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| Unimatch Optical Flow | | [DragNUWA](https://github.com/ProjectNUWA/DragNUWA) |
-
-#### User-side
-This workflow will save images to ComfyUI's output folder (the same location as generated images). If you can't find the `Save Pose Keypoints` node, update this extension.
-
-
-#### Dev-side
-An array of [OpenPose-format JSON](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/02_output.md#json-output-format) objects, one corresponding to each frame in an IMAGE batch, can be obtained from DWPose and OpenPose using `app.nodeOutputs` on the UI or the `/history` API endpoint. The JSON output from AnimalPose uses a format similar to OpenPose JSON:
-```
-[
-    {
-        "version": "ap10k",
-        "animals": [
-            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],
-            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],
-            ...
-        ],
-        "canvas_height": 512,
-        "canvas_width": 768
-    },
-    ...
-]
-```
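A minimal sketch of consuming this payload in Python (hedged: `animal_frames_json` below is a hypothetical stand-in for the string in the node's `openpose_json` output, and treating the third element of each triple as a visibility/confidence flag is an assumption, not something the format above guarantees):

```py
import json

# Hypothetical stand-in for the string found in the node's openpose_json output
animal_frames_json = (
    '[{"version": "ap10k",'
    ' "animals": [[[120, 240, 1], [130, 260, 1]]],'
    ' "canvas_height": 512, "canvas_width": 768}]'
)

for frame in json.loads(animal_frames_json):
    w, h = frame["canvas_width"], frame["canvas_height"]
    for animal in frame["animals"]:
        # Each animal is a list of [x, y, flag] triples (17 keypoints in ap10k)
        for x, y, flag in animal:
            print(f"normalized keypoint ({x / w:.3f}, {y / h:.3f}), flag={flag}")
```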
-
-For extension developers (e.g. Openpose editor):
-```js
-const poseNodes = app.graph._nodes.filter(node => ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"].includes(node.type))
-for (const poseNode of poseNodes) {
-    const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0])
-    console.log(openposeResults) //An array containing Openpose JSON for each frame
-}
-```
-
-For API users:
-
-JavaScript
-```js
-import fetch from "node-fetch" //Remember to add "type": "module" to "package.json"
-async function main() {
-    const promptId = '792c1905-ecfe-41f4-8114-83e6a4a09a9f' //Too lazy to POST /queue
-    let history = await fetch(`http://127.0.0.1:8188/history/${promptId}`).then(re => re.json())
-    history = history[promptId]
-    const nodeOutputs = Object.values(history.outputs).filter(output => output.openpose_json)
-    for (const nodeOutput of nodeOutputs) {
-        const openposeResults = JSON.parse(nodeOutput.openpose_json[0])
-        console.log(openposeResults) //An array containing Openpose JSON for each frame
-    }
-}
-main()
-```
-
-Python
-```py
-import json, urllib.request
-
-server_address = "127.0.0.1:8188"
-prompt_id = '' #Too lazy to POST /queue
-
-def get_history(prompt_id):
-    with urllib.request.urlopen("http://{}/history/{}".format(server_address, prompt_id)) as response:
-        return json.loads(response.read())
-
-history = get_history(prompt_id)[prompt_id]
-for node_id in history['outputs']:
-    node_output = history['outputs'][node_id]
-    if 'openpose_json' in node_output:
-        print(json.loads(node_output['openpose_json'][0])) #A list containing Openpose JSON for each frame
-```
-
-## Semantic Segmentation
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| OneFormer ADE20K Segmentor | oneformer_ade20k | control_v11p_sd15_seg |
-| OneFormer COCO Segmentor | oneformer_coco | control_v11p_sd15_seg |
-| UniFormer Segmentor | segmentation | control_sd15_seg <br> control_v11p_sd15_seg |
-
-## T2IAdapter-only
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| Color Pallete | color | t2iadapter_color |
-| Content Shuffle | shuffle | t2iadapter_style |
-
-## Recolor
-| Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
-|-----------------------------|---------------------------|-------------------------------------------|
-| Image Luminance | recolor_luminance | [ioclab_sd15_recolor](https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/ioclab_sd15_recolor.safetensors) <br> [sai_xl_recolor_256lora](https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/sai_xl_recolor_256lora.safetensors) <br> [bdsqlsz_controlllite_xl_recolor_luminance](https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite/resolve/main/bdsqlsz_controlllite_xl_recolor_luminance.safetensors) |
-| Image Intensity | recolor_intensity | Idk. Maybe same as above? |
-
-# Examples
-> A picture is worth a thousand words
-
-
-
-
-# Testing workflow
-https://github.com/Fannovel16/comfyui_controlnet_aux/blob/main/examples/ExecuteAll.png
-Input image: https://github.com/Fannovel16/comfyui_controlnet_aux/blob/main/examples/comfyui-controlnet-aux-logo.png
-
-# Q&A:
-## Why do some nodes not appear after I installed this repo?
-
-This repo has a new mechanism that skips any custom node which can't be imported. If you hit this case, please create an issue on the [Issues tab](https://github.com/Fannovel16/comfyui_controlnet_aux/issues) with the log from the command line.
-
-## DWPose/AnimalPose only uses the CPU, so it's slow. How can I make it use the GPU?
-There are two ways to speed up DWPose: using TorchScript checkpoints (`.torchscript.pt`) or ONNXRuntime checkpoints (`.onnx`). The TorchScript route is slightly slower than ONNXRuntime but requires no additional library and is still far faster than the CPU.
-
-A TorchScript bbox detector is compatible with an ONNX pose estimator and vice versa.
-### TorchScript
-Set `bbox_detector` and `pose_estimator` according to this picture. You can try other bbox detectors ending with `.torchscript.pt` to reduce bbox detection time if your input images are ideal.
-
-![pose](https://github.com/Fannovel16/comfyui_controlnet_aux/assets/16047777/5f4e5bcf-2199-4d03-a681-70ee53dd1b1a)
-
-### ONNXRuntime
-If onnxruntime is installed successfully and the checkpoint used ends with `.onnx`, it will replace the default cv2 backend to take advantage of the GPU. Note that if you are using an NVIDIA card, this method currently only works on CUDA 11.8 (ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z) unless you compile onnxruntime yourself.
-
-1. Know your onnxruntime build:
-    * NVIDIA CUDA 11.x or below/AMD GPU: `onnxruntime-gpu`
-    * NVIDIA CUDA 12.x: `onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/`
-    * DirectML: `onnxruntime-directml`
-    * OpenVINO: `onnxruntime-openvino`
-
-    Note that if this is your first time using ComfyUI, please test whether it can run on your device before doing the next steps.
-2. Add it to `requirements.txt`
-3. Run `install.bat` or the pip command mentioned in Installation
-
-![image](https://github.com/Fannovel16/comfyui_controlnet_aux/assets/16047777/a9a85bb3-9a17-4a27-a80b-7a7a0a8cf4b1)
-
-# Asset files of preprocessors
-* anime_face_segment: [bdsqlsz/qinglong_controlnet-lllite/Annotators/UNet.pth](https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite/blob/main/Annotators/UNet.pth), [anime-seg/isnetis.ckpt](https://huggingface.co/skytnt/anime-seg/blob/main/isnetis.ckpt)
-* densepose: [LayerNorm/DensePose-TorchScript-with-hint-image/densepose_r50_fpn_dl.torchscript](https://huggingface.co/LayerNorm/DensePose-TorchScript-with-hint-image/blob/main/densepose_r50_fpn_dl.torchscript)
-* dwpose:
-  * bbox_detector: Either [yzd-v/DWPose/yolox_l.onnx](https://huggingface.co/yzd-v/DWPose/blob/main/yolox_l.onnx), [hr16/yolox-onnx/yolox_l.torchscript.pt](https://huggingface.co/hr16/yolox-onnx/blob/main/yolox_l.torchscript.pt), [hr16/yolo-nas-fp16/yolo_nas_l_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_l_fp16.onnx), [hr16/yolo-nas-fp16/yolo_nas_m_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_m_fp16.onnx), [hr16/yolo-nas-fp16/yolo_nas_s_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_s_fp16.onnx)
-  * pose_estimator: Either [hr16/DWPose-TorchScript-BatchSize5/dw-ll_ucoco_384_bs5.torchscript.pt](https://huggingface.co/hr16/DWPose-TorchScript-BatchSize5/blob/main/dw-ll_ucoco_384_bs5.torchscript.pt), [yzd-v/DWPose/dw-ll_ucoco_384.onnx](https://huggingface.co/yzd-v/DWPose/blob/main/dw-ll_ucoco_384.onnx)
-* animal_pose (ap10k):
-  * bbox_detector: Either [yzd-v/DWPose/yolox_l.onnx](https://huggingface.co/yzd-v/DWPose/blob/main/yolox_l.onnx), [hr16/yolox-onnx/yolox_l.torchscript.pt](https://huggingface.co/hr16/yolox-onnx/blob/main/yolox_l.torchscript.pt), [hr16/yolo-nas-fp16/yolo_nas_l_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_l_fp16.onnx), [hr16/yolo-nas-fp16/yolo_nas_m_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_m_fp16.onnx), [hr16/yolo-nas-fp16/yolo_nas_s_fp16.onnx](https://huggingface.co/hr16/yolo-nas-fp16/blob/main/yolo_nas_s_fp16.onnx)
-  * pose_estimator: Either [hr16/DWPose-TorchScript-BatchSize5/rtmpose-m_ap10k_256_bs5.torchscript.pt](https://huggingface.co/hr16/DWPose-TorchScript-BatchSize5/blob/main/rtmpose-m_ap10k_256_bs5.torchscript.pt), [hr16/UnJIT-DWPose/rtmpose-m_ap10k_256.onnx](https://huggingface.co/hr16/UnJIT-DWPose/blob/main/rtmpose-m_ap10k_256.onnx)
-* hed: [lllyasviel/Annotators/ControlNetHED.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/ControlNetHED.pth)
-* leres: [lllyasviel/Annotators/res101.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/res101.pth), [lllyasviel/Annotators/latest_net_G.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/latest_net_G.pth)
-* lineart: [lllyasviel/Annotators/sk_model.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/sk_model.pth), [lllyasviel/Annotators/sk_model2.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/sk_model2.pth)
-* lineart_anime: [lllyasviel/Annotators/netG.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/netG.pth)
-* manga_line: [lllyasviel/Annotators/erika.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/erika.pth)
-* mesh_graphormer: [hr16/ControlNet-HandRefiner-pruned/graphormer_hand_state_dict.bin](https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/blob/main/graphormer_hand_state_dict.bin), [hr16/ControlNet-HandRefiner-pruned/hrnetv2_w64_imagenet_pretrained.pth](https://huggingface.co/hr16/ControlNet-HandRefiner-pruned/blob/main/hrnetv2_w64_imagenet_pretrained.pth)
-* midas: [lllyasviel/Annotators/dpt_hybrid-midas-501f0c75.pt](https://huggingface.co/lllyasviel/Annotators/blob/main/dpt_hybrid-midas-501f0c75.pt)
-* mlsd: [lllyasviel/Annotators/mlsd_large_512_fp32.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/mlsd_large_512_fp32.pth)
-* normalbae: [lllyasviel/Annotators/scannet.pt](https://huggingface.co/lllyasviel/Annotators/blob/main/scannet.pt)
-* oneformer: [lllyasviel/Annotators/250_16_swin_l_oneformer_ade20k_160k.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/250_16_swin_l_oneformer_ade20k_160k.pth)
-* open_pose: [lllyasviel/Annotators/body_pose_model.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/body_pose_model.pth), [lllyasviel/Annotators/hand_pose_model.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/hand_pose_model.pth), [lllyasviel/Annotators/facenet.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/facenet.pth)
-* pidi: [lllyasviel/Annotators/table5_pidinet.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/table5_pidinet.pth)
-* sam: [dhkim2810/MobileSAM/mobile_sam.pt](https://huggingface.co/dhkim2810/MobileSAM/blob/main/mobile_sam.pt)
-* uniformer: [lllyasviel/Annotators/upernet_global_small.pth](https://huggingface.co/lllyasviel/Annotators/blob/main/upernet_global_small.pth)
-* zoe: [lllyasviel/Annotators/ZoeD_M12_N.pt](https://huggingface.co/lllyasviel/Annotators/blob/main/ZoeD_M12_N.pt)
-* teed: [bdsqlsz/qinglong_controlnet-lllite/7_model.pth](https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite/blob/main/Annotators/7_model.pth)
-* depth_anything: Either [LiheYoung/Depth-Anything/checkpoints/depth_anything_vitl14.pth](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitl14.pth), [LiheYoung/Depth-Anything/checkpoints/depth_anything_vitb14.pth](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitb14.pth) or [LiheYoung/Depth-Anything/checkpoints/depth_anything_vits14.pth](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vits14.pth)
-* diffusion_edge: Either [hr16/Diffusion-Edge/diffusion_edge_indoor.pt](https://huggingface.co/hr16/Diffusion-Edge/blob/main/diffusion_edge_indoor.pt), [hr16/Diffusion-Edge/diffusion_edge_urban.pt](https://huggingface.co/hr16/Diffusion-Edge/blob/main/diffusion_edge_urban.pt) or [hr16/Diffusion-Edge/diffusion_edge_natrual.pt](https://huggingface.co/hr16/Diffusion-Edge/blob/main/diffusion_edge_natrual.pt)
-* unimatch: Either [hr16/Unimatch/gmflow-scale2-regrefine6-mixdata.pth](https://huggingface.co/hr16/Unimatch/blob/main/gmflow-scale2-regrefine6-mixdata.pth), [hr16/Unimatch/gmflow-scale2-mixdata.pth](https://huggingface.co/hr16/Unimatch/blob/main/gmflow-scale2-mixdata.pth) or [hr16/Unimatch/gmflow-scale1-mixdata.pth](https://huggingface.co/hr16/Unimatch/blob/main/gmflow-scale1-mixdata.pth)
-* zoe_depth_anything: Either [LiheYoung/Depth-Anything/checkpoints_metric_depth/depth_anything_metric_depth_indoor.pt](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints_metric_depth/depth_anything_metric_depth_indoor.pt) or [LiheYoung/Depth-Anything/checkpoints_metric_depth/depth_anything_metric_depth_outdoor.pt](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints_metric_depth/depth_anything_metric_depth_outdoor.pt)
-
-# 2000 Stars 😄
-<a href="https://star-history.com/#Fannovel16/comfyui_controlnet_aux&Date">
-  <picture>
-    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Fannovel16/comfyui_controlnet_aux&type=Date&theme=dark" />
-    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Fannovel16/comfyui_controlnet_aux&type=Date" />
-    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Fannovel16/comfyui_controlnet_aux&type=Date" />
-  </picture>
-</a>
-
-Thanks for y'all's support. I never thought the graph for stars would be linear lol.
+---
+license: mit
+tags:
+- comfyui
+- comfyui-custom-nodes
+- mirror
+- github-mirror
+library_name: comfyui
+---
+
+# comfyui_controlnet_aux
+
+🔄 **Mirror of GitHub Repository**
+
+This is a personal mirror of the original GitHub repository for easy access and backup.
+
+## 📦 Original Source
+
+**GitHub:** [https://github.com/Fannovel16/comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
+
+## 🎯 Purpose
+
+This mirror is maintained for:
+- ✅ Easy access and backup
+- ✅ Integration with Hugging Face workflows
+- ✅ Personal development and testing
+- ✅ Offline availability
+
+## 📖 Usage
+
+Please refer to the original GitHub repository for:
+- Latest updates and releases
+- Installation instructions
+- Documentation
+- Issues and support
+- Contributing guidelines
+
+## 🙏 Attribution
+
+All code and content in this repository belong to the original authors and contributors.
+This is purely a mirror for personal convenience.
+
+**Please support the original project:** https://github.com/Fannovel16/comfyui_controlnet_aux
+
+---
+
+**Mirrored by:** aliensmn
+**Mirror Type:** ComfyUI Custom Node
+**Original Repository:** https://github.com/Fannovel16/comfyui_controlnet_aux
+
+*If you are the original author and would like this mirror removed, please contact me.*