Duplicate from Phr00t/WAN2.2-14B-Rapid-AllInOne
Co-authored-by: Phr00t <Phr00t@users.noreply.huggingface.co>
This view is limited to 50 files because it contains too many changes.
- .gitattributes +35 -0
- .mega_workflow_in_mega_v3_folder +0 -0
- Custom-Advanced-VACE-Node/README.md +18 -0
- Custom-Advanced-VACE-Node/nodes_utility.py +807 -0
- Mega-v1/Rapid-AIO-Mega.json +768 -0
- Mega-v1/wan2.2-rapid-mega-aio-nsfw-v1.safetensors +3 -0
- Mega-v1/wan2.2-rapid-mega-aio-v1.safetensors +3 -0
- Mega-v10/.use_mega_v3_workflow +0 -0
- Mega-v10/wan2.2-rapid-mega-aio-nsfw-v10.safetensors +3 -0
- Mega-v10/wan2.2-rapid-mega-aio-v10.safetensors +3 -0
- Mega-v11/.use_mega_v3_workflow +0 -0
- Mega-v11/wan2.2-rapid-mega-aio-nsfw-v11.safetensors +3 -0
- Mega-v11/wan2.2-rapid-mega-aio-v11.safetensors +3 -0
- Mega-v12/.use_mega_v3_workflow +0 -0
- Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.1.safetensors +3 -0
- Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.2.safetensors +3 -0
- Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.safetensors +3 -0
- Mega-v12/wan2.2-rapid-mega-aio-v12.safetensors +3 -0
- Mega-v2/Rapid-AIO-Mega.json +815 -0
- Mega-v2/wan2.2-rapid-mega-aio-nsfw-v2.safetensors +3 -0
- Mega-v2/wan2.2-rapid-mega-aio-v2.safetensors +3 -0
- Mega-v3/Rapid-AIO-Mega.json +794 -0
- Mega-v3/wan2.2-rapid-mega-aio-v3.safetensors +3 -0
- Mega-v3/wan2.2-rapid-mega-nsfw-aio-v3.1.safetensors +3 -0
- Mega-v3/wan2.2-rapid-mega-nsfw-aio-v3.safetensors +3 -0
- Mega-v4/.use_mega_v3_workflow +0 -0
- Mega-v4/wan2.2-rapid-mega-aio-v4.safetensors +3 -0
- Mega-v5/.use_mega_v3_workflow +0 -0
- Mega-v5/wan2.2-rapid-mega-aio-nsfw-v5.safetensors +3 -0
- Mega-v5/wan2.2-rapid-mega-aio-v5.safetensors +3 -0
- Mega-v6/wan2.2-rapid-mega-aio-nsfw-v6.1.safetensors +3 -0
- Mega-v6/wan2.2-rapid-mega-aio-nsfw-v6.safetensors +3 -0
- Mega-v6/wan2.2-rapid-mega-aio-v6.safetensors +3 -0
- Mega-v7/.use_mega_v3_workflow +0 -0
- Mega-v7/wan2.2-rapid-mega-aio-nsfw-v7.1.safetensors +3 -0
- Mega-v7/wan2.2-rapid-mega-aio-nsfw-v7.safetensors +3 -0
- Mega-v7/wan2.2-rapid-mega-aio-v7.safetensors +3 -0
- Mega-v8/.use_mega_v3_workflow +0 -0
- Mega-v8/wan2.2-rapid-mega-aio-nsfw-v8.safetensors +3 -0
- Mega-v8/wan2.2-rapid-mega-aio-v8.safetensors +3 -0
- Mega-v9/.use_mega_v3_workflow +0 -0
- Mega-v9/wan2.2-rapid-mega-aio-nsfw-v9.safetensors +3 -0
- Mega-v9/wan2.2-rapid-mega-aio-v9.safetensors +3 -0
- README.md +102 -0
- v10/wan2.2-i2v-rapid-aio-v10-nsfw.safetensors +3 -0
- v10/wan2.2-i2v-rapid-aio-v10.safetensors +3 -0
- v10/wan2.2-t2v-rapid-aio-v10-nsfw.safetensors +3 -0
- v10/wan2.2-t2v-rapid-aio-v10.safetensors +3 -0
- v2/wan2.2-i2v-aio-v2.safetensors +3 -0
- v2/wan2.2-t2v-aio-v2.safetensors +3 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
.mega_workflow_in_mega_v3_folder
ADDED
File without changes
Custom-Advanced-VACE-Node/README.md
ADDED
@@ -0,0 +1,18 @@
This is a custom "WAN Start to End Frame" node that replaces the nodes_utility.py provided by kijai's "ComfyUI-WanVideoWrapper". The file to replace is usually found in your custom_nodes/ComfyUI-WanVideoWrapper folder.

**What does this do?**

It allows you to use a "start frame" AND raw video motion (no ControlNet processing required) at the same time. You'll still get better results with ControlNet for what VACE can handle, but this gives you another option for working with non-ControlNet motion.

It adds two new parameters to the node: "control_strength" and "control_ease".

**control_strength**: When using a start frame and control_images (which will contain motion), how strongly should the motion be applied to the WAN generation? Too much will change the video significantly to match your control_images; too little will not bring enough motion over. Values around 0.1 to 0.5 seem to work best. A value of 1 uses the default behavior.

**control_ease**: How many frames to "ease in" the control_images motion. Starting too quickly can cause a strange early jump as the generation tries to match the motion to your start image; easing gives your start frame more time to line up with the control_images motion. If your starting frame is very close to the motion start, try low values (like 8). If your motion is very different from your starting frame, go with 24 to 48.

**Replacing this file could break your WanVideoWrapper with future WanVideoWrapper updates.** It may be more compatible to just replace the 'class WanVideoVACEStartToEndFrame:' code with the class code from my file.

I should probably fork this and offer a pull request, but for now I'm just providing the file if you want to use it. If Kijai or anyone else finds this useful and wants to bring it over, it would be much appreciated.
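A minimal sketch of how these two parameters act on the control frames, per the description above: the strength slider is remapped to a much smaller blend factor, and the first control_ease frames are progressively blended toward an "empty" frame. The function names here are illustrative, not part of the node; the math mirrors the `WanVideoVACEStartToEndFrame.process` code in nodes_utility.py below.

```python
def remap_strength(control_strength: float) -> float:
    """Remap the 0..1 strength slider to a much smaller blend factor.

    A value of 1.0 keeps the default (full-strength) behavior; anything
    below 1.0 is remapped quadratically: s -> (2s)^2 / 8.
    """
    if control_strength >= 1.0:
        return 1.0
    s = control_strength * 2.0
    return s * s / 8.0


def ease_in_weights(num_frames: int, control_ease: int) -> list[float]:
    """Per-frame weight of the control motion when a start frame is used.

    Frame i (for i = 1..control_ease) is blended toward an empty frame
    by (control_ease - i) / (1 + control_ease), so early frames carry
    less of the control motion and later frames carry all of it.
    """
    weights = [1.0] * num_frames
    if 0 < control_ease < num_frames:
        for i in range(1, control_ease + 1):
            toward_empty = (control_ease - i) / (1 + control_ease)
            weights[i] = 1.0 - toward_empty
    return weights
```

So a slider value of 0.5 actually blends in only (1.0)^2 / 8 = 0.125 of the control frames, which is why values in the 0.1-0.5 range behave far more gently than the raw number suggests.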
Custom-Advanced-VACE-Node/nodes_utility.py
ADDED
@@ -0,0 +1,807 @@
import torch
import numpy as np
from comfy.utils import common_upscale
from comfy import model_management
from tqdm import tqdm
from .utils import log
from einops import rearrange

try:
    from server import PromptServer
except:
    PromptServer = None

VAE_STRIDE = (4, 8, 8)
PATCH_SIZE = (1, 2, 2)

main_device = model_management.get_torch_device()
offload_device = model_management.unet_offload_device()

class WanVideoImageResizeToClosest:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "image": ("IMAGE", {"tooltip": "Image to resize"}),
            "generation_width": ("INT", {"default": 832, "min": 64, "max": 8096, "step": 8, "tooltip": "Width of the image to encode"}),
            "generation_height": ("INT", {"default": 480, "min": 64, "max": 8096, "step": 8, "tooltip": "Height of the image to encode"}),
            "aspect_ratio_preservation": (["keep_input", "stretch_to_new", "crop_to_new"],),
            },
        }

    RETURN_TYPES = ("IMAGE", "INT", "INT", )
    RETURN_NAMES = ("image", "width", "height",)
    FUNCTION = "process"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Resizes image to the closest supported resolution based on aspect ratio and max pixels, according to the original code"

    def process(self, image, generation_width, generation_height, aspect_ratio_preservation):

        H, W = image.shape[1], image.shape[2]
        max_area = generation_width * generation_height

        crop = "disabled"

        if aspect_ratio_preservation == "keep_input":
            aspect_ratio = H / W
        elif aspect_ratio_preservation == "stretch_to_new" or aspect_ratio_preservation == "crop_to_new":
            aspect_ratio = generation_height / generation_width
            if aspect_ratio_preservation == "crop_to_new":
                crop = "center"

        # Snap latent dimensions to the VAE stride and patch size
        lat_h = round(
            np.sqrt(max_area * aspect_ratio) // VAE_STRIDE[1] //
            PATCH_SIZE[1] * PATCH_SIZE[1])
        lat_w = round(
            np.sqrt(max_area / aspect_ratio) // VAE_STRIDE[2] //
            PATCH_SIZE[2] * PATCH_SIZE[2])
        h = lat_h * VAE_STRIDE[1]
        w = lat_w * VAE_STRIDE[2]

        resized_image = common_upscale(image.movedim(-1, 1), w, h, "lanczos", crop).movedim(1, -1)

        return (resized_image, w, h)

class ExtractStartFramesForContinuations:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "input_video_frames": ("IMAGE", {"tooltip": "Input video frames to extract the start frames from."}),
                "num_frames": ("INT", {"default": 10, "min": 1, "max": 1024, "step": 1, "tooltip": "Number of frames to get from the start of the video."}),
            },
        }

    RETURN_TYPES = ("IMAGE",)
    RETURN_NAMES = ("start_frames",)
    FUNCTION = "get_start_frames"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Extracts the first N frames from a video sequence for continuations."

    def get_start_frames(self, input_video_frames, num_frames):
        if input_video_frames is None or input_video_frames.shape[0] == 0:
            log.warning("Input video frames are empty. Returning an empty tensor.")
            if input_video_frames is not None:
                return (torch.empty((0,) + input_video_frames.shape[1:], dtype=input_video_frames.dtype),)
            else:
                # Return a tensor with 4 dimensions, as expected for an IMAGE type.
                return (torch.empty((0, 64, 64, 3), dtype=torch.float32),)

        total_frames = input_video_frames.shape[0]
        num_to_get = min(num_frames, total_frames)

        if num_to_get < num_frames:
            log.warning(f"Requested {num_frames} frames, but input video only has {total_frames} frames. Returning first {num_to_get} frames.")

        start_frames = input_video_frames[:num_to_get]

        return (start_frames.cpu().float(),)

class WanVideoVACEStartToEndFrame:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "num_frames": ("INT", {"default": 81, "min": 1, "max": 10000, "step": 4, "tooltip": "Number of frames to encode"}),
            "empty_frame_level": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01, "tooltip": "White level of empty frame to use"}),
            },
            "optional": {
                "start_image": ("IMAGE",),
                "end_image": ("IMAGE",),
                "control_images": ("IMAGE",),
                "inpaint_mask": ("MASK", {"tooltip": "Inpaint mask to use for the empty frames"}),
                "start_index": ("INT", {"default": 0, "min": 0, "max": 10000, "step": 1, "tooltip": "Index to start from"}),
                "end_index": ("INT", {"default": -1, "min": -10000, "max": 10000, "step": 1, "tooltip": "Index to end at"}),
                "control_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01, "round": 0.01, "tooltip": "How strongly do the control images apply?"}),
                "control_ease": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1, "tooltip": "How many frames to ease in the control video?"}),
            },
        }

    RETURN_TYPES = ("IMAGE", "MASK", )
    RETURN_NAMES = ("images", "masks",)
    FUNCTION = "process"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Helper node to create start/end frame batch and masks for VACE"

    def process(self, num_frames, empty_frame_level, start_image=None, end_image=None, control_images=None, inpaint_mask=None, start_index=0, end_index=-1, control_strength=1.0, control_ease=0):

        if control_images is not None:
            # Weaken the control images?
            if control_strength < 1.0:
                # Useful strength lives at much smaller numbers, so remap quadratically
                control_strength *= 2.0
                control_strength = control_strength * control_strength / 8.0
                control_images = torch.lerp(torch.ones((control_images.shape[0], control_images.shape[1], control_images.shape[2], control_images.shape[3])) * empty_frame_level, control_images, control_strength)

            # Ease in the control motion?
            if num_frames > control_ease and control_ease > 0:
                empty_frame = torch.ones((1, control_images.shape[1], control_images.shape[2], control_images.shape[3])) * empty_frame_level
                if start_image is not None:
                    for i in range(1, control_ease + 1):
                        control_images[i] = torch.lerp(control_images[i], empty_frame, (control_ease - i) / (1 + control_ease))
                else:
                    for i in range(num_frames - control_ease - 1, num_frames - 1):
                        control_images[i] = torch.lerp(control_images[i], empty_frame, i / (1 + control_ease))

        # Control-images-only case: handle it before touching start/end image
        # attributes, since neither start_image nor end_image exists here
        if start_image is None and end_image is None and control_images is not None:
            if control_images.shape[0] >= num_frames:
                control_images = control_images[:num_frames]
            else:
                # Pad with empty_frame_level frames
                padding = torch.ones((num_frames - control_images.shape[0], control_images.shape[1], control_images.shape[2], control_images.shape[3]), device=control_images.device) * empty_frame_level
                control_images = torch.cat([control_images, padding], dim=0)
            return (control_images.cpu().float(), torch.zeros_like(control_images[:, :, :, 0]).cpu().float())

        device = start_image.device if start_image is not None else end_image.device
        B, H, W, C = start_image.shape if start_image is not None else end_image.shape

        # Convert negative end_index to positive
        if end_index < 0:
            end_index = num_frames + end_index

        # Create output batch with empty frames
        out_batch = torch.ones((num_frames, H, W, 3), device=device) * empty_frame_level

        # Create mask tensor with proper dimensions
        masks = torch.ones((num_frames, H, W), device=device)

        # Pre-process all images at once to avoid redundant work
        if end_image is not None and (end_image.shape[1] != H or end_image.shape[2] != W):
            end_image = common_upscale(end_image.movedim(-1, 1), W, H, "lanczos", "disabled").movedim(1, -1)

        if control_images is not None and (control_images.shape[1] != H or control_images.shape[2] != W):
            control_images = common_upscale(control_images.movedim(-1, 1), W, H, "lanczos", "disabled").movedim(1, -1)

        # Place start image at start_index
        if start_image is not None:
            frames_to_copy = min(start_image.shape[0], num_frames - start_index)
            if frames_to_copy > 0:
                out_batch[start_index:start_index + frames_to_copy] = start_image[:frames_to_copy]
                masks[start_index:start_index + frames_to_copy] = 0

        # Place end image at end_index
        if end_image is not None:
            # Calculate where to start placing end images
            end_start = end_index - end_image.shape[0] + 1
            if end_start < 0:  # Handle case where end images won't all fit
                end_image = end_image[abs(end_start):]
                end_start = 0

            frames_to_copy = min(end_image.shape[0], num_frames - end_start)
            if frames_to_copy > 0:
                out_batch[end_start:end_start + frames_to_copy] = end_image[:frames_to_copy]
                masks[end_start:end_start + frames_to_copy] = 0

        # Apply control images to remaining frames that don't have start or end images
        if control_images is not None:
            # Create a mask of frames that are still empty (mask == 1)
            empty_frames = masks.sum(dim=(1, 2)) > 0.5 * H * W

            if empty_frames.any():
                # Only apply control images where they exist
                control_length = control_images.shape[0]
                for frame_idx in range(num_frames):
                    if empty_frames[frame_idx] and frame_idx < control_length:
                        out_batch[frame_idx] = control_images[frame_idx]

        # Apply inpaint mask if provided
        if inpaint_mask is not None:
            inpaint_mask = common_upscale(inpaint_mask.unsqueeze(1), W, H, "nearest-exact", "disabled").squeeze(1).to(device)

            # Handle different mask lengths efficiently
            if inpaint_mask.shape[0] > num_frames:
                inpaint_mask = inpaint_mask[:num_frames]
            elif inpaint_mask.shape[0] < num_frames:
                repeat_factor = (num_frames + inpaint_mask.shape[0] - 1) // inpaint_mask.shape[0]  # Ceiling division
                inpaint_mask = inpaint_mask.repeat(repeat_factor, 1, 1)[:num_frames]

            # Apply mask in one operation
            masks = inpaint_mask * masks

        return (out_batch.cpu().float(), masks.cpu().float())

class CreateCFGScheduleFloatList:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "steps": ("INT", {"default": 30, "min": 2, "max": 1000, "step": 1, "tooltip": "Number of steps to schedule cfg for"}),
            "cfg_scale_start": ("FLOAT", {"default": 5.0, "min": 0.0, "max": 30.0, "step": 0.01, "round": 0.01, "tooltip": "CFG scale at the start of the range"}),
            "cfg_scale_end": ("FLOAT", {"default": 5.0, "min": 0.0, "max": 30.0, "step": 0.01, "round": 0.01, "tooltip": "CFG scale at the end of the range"}),
            "interpolation": (["linear", "ease_in", "ease_out"], {"default": "linear", "tooltip": "Interpolation method to use for the cfg scale"}),
            "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01, "round": 0.01, "tooltip": "Start percent of the steps to apply cfg"}),
            "end_percent": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01, "round": 0.01, "tooltip": "End percent of the steps to apply cfg"}),
            },
            "hidden": {
                "unique_id": "UNIQUE_ID",
            },
        }

    RETURN_TYPES = ("FLOAT", )
    RETURN_NAMES = ("float_list",)
    FUNCTION = "process"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Helper node to generate a list of floats that can be used to schedule cfg scale for the steps, outside the set range cfg is set to 1.0"

    def process(self, steps, cfg_scale_start, cfg_scale_end, interpolation, start_percent, end_percent, unique_id):

        # Create a list of floats for the cfg schedule; steps outside the range stay at 1.0
        cfg_list = [1.0] * steps
        start_idx = min(int(steps * start_percent), steps - 1)
        end_idx = min(int(steps * end_percent), steps - 1)

        for i in range(start_idx, end_idx + 1):
            if i >= steps:
                break

            if end_idx == start_idx:
                t = 0
            else:
                t = (i - start_idx) / (end_idx - start_idx)

            if interpolation == "linear":
                factor = t
            elif interpolation == "ease_in":
                factor = t * t
            elif interpolation == "ease_out":
                factor = t * (2 - t)

            cfg_list[i] = round(cfg_scale_start + factor * (cfg_scale_end - cfg_scale_start), 2)

        # If start_percent > 0, keep the first step at cfg 1.0
        if start_percent > 0:
            cfg_list[0] = 1.0

        if unique_id and PromptServer is not None:
            try:
                PromptServer.instance.send_progress_text(
                    f"{cfg_list}",
                    unique_id
                )
            except:
                pass

        return (cfg_list,)

class CreateScheduleFloatList:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "steps": ("INT", {"default": 30, "min": 2, "max": 1000, "step": 1, "tooltip": "Number of steps to schedule values for"}),
            "start_value": ("FLOAT", {"default": 5.0, "min": 0.0, "max": 100.0, "step": 0.01, "round": 0.01, "tooltip": "Value at the start of the range"}),
            "end_value": ("FLOAT", {"default": 5.0, "min": 0.0, "max": 100.0, "step": 0.01, "round": 0.01, "tooltip": "Value at the end of the range"}),
            "default_value": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 0.01, "round": 0.01, "tooltip": "Default value to use for the steps outside the range"}),
            "interpolation": (["linear", "ease_in", "ease_out"], {"default": "linear", "tooltip": "Interpolation method to use for the values"}),
            "start_percent": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01, "round": 0.01, "tooltip": "Start percent of the steps to apply the schedule"}),
            "end_percent": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01, "round": 0.01, "tooltip": "End percent of the steps to apply the schedule"}),
            },
            "hidden": {
                "unique_id": "UNIQUE_ID",
            },
        }

    RETURN_TYPES = ("FLOAT", )
    RETURN_NAMES = ("float_list",)
    FUNCTION = "process"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Helper node to generate a list of floats that can be used to schedule things like cfg and lora scale per step"

    def process(self, steps, start_value, end_value, default_value, interpolation, start_percent, end_percent, unique_id):

        # Create a list of floats for the schedule; steps outside the range keep default_value
        cfg_list = [default_value] * steps
        start_idx = min(int(steps * start_percent), steps - 1)
        end_idx = min(int(steps * end_percent), steps - 1)

        for i in range(start_idx, end_idx + 1):
            if i >= steps:
                break

            if end_idx == start_idx:
                t = 0
            else:
                t = (i - start_idx) / (end_idx - start_idx)

            if interpolation == "linear":
                factor = t
            elif interpolation == "ease_in":
                factor = t * t
            elif interpolation == "ease_out":
                factor = t * (2 - t)

            cfg_list[i] = round(start_value + factor * (end_value - start_value), 2)

        # If start_percent > 0, keep the first step at the default value
        if start_percent > 0:
            cfg_list[0] = default_value

        if unique_id and PromptServer is not None:
            try:
                PromptServer.instance.send_progress_text(
                    f"{cfg_list}",
                    unique_id
                )
            except:
                pass

        return (cfg_list,)

class DummyComfyWanModelObject:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "shift": ("FLOAT", {"default": 1.0, "min": -100.0, "max": 100.0, "step": 0.01, "tooltip": "Sigma shift value"}),
            }
        }

    RETURN_TYPES = ("MODEL", )
    RETURN_NAMES = ("model",)
    FUNCTION = "create"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Helper node to create an empty Wan model to use with the BasicScheduler node to get sigmas"

    def create(self, shift):
        from comfy.model_sampling import ModelSamplingDiscreteFlow

        class DummyModel:
            def get_model_object(self, name):
                if name == "model_sampling":
                    model_sampling = ModelSamplingDiscreteFlow()
                    model_sampling.set_parameters(shift=shift)
                    return model_sampling
                return None

        return (DummyModel(),)

class WanVideoLatentReScale:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "samples": ("LATENT",),
            "direction": (["comfy_to_wrapper", "wrapper_to_comfy"], {"tooltip": "Direction to rescale latents, from comfy to wrapper or vice versa"}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    RETURN_NAMES = ("samples",)
    FUNCTION = "encode"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Rescale latents to match the expected range for encoding or decoding between native ComfyUI VAE and the WanVideoWrapper VAE."

    def encode(self, samples, direction):
        samples = samples.copy()
        latents = samples["samples"]

        if latents.shape[1] == 48:
            # 48-channel VAE statistics
            mean = [
                -0.2289, -0.0052, -0.1323, -0.2339, -0.2799, 0.0174, 0.1838, 0.1557,
                -0.1382, 0.0542, 0.2813, 0.0891, 0.1570, -0.0098, 0.0375, -0.1825,
                -0.2246, -0.1207, -0.0698, 0.5109, 0.2665, -0.2108, -0.2158, 0.2502,
                -0.2055, -0.0322, 0.1109, 0.1567, -0.0729, 0.0899, -0.2799, -0.1230,
                -0.0313, -0.1649, 0.0117, 0.0723, -0.2839, -0.2083, -0.0520, 0.3748,
                0.0152, 0.1957, 0.1433, -0.2944, 0.3573, -0.0548, -0.1681, -0.0667,
            ]
            std = [
                0.4765, 1.0364, 0.4514, 1.1677, 0.5313, 0.4990, 0.4818, 0.5013,
                0.8158, 1.0344, 0.5894, 1.0901, 0.6885, 0.6165, 0.8454, 0.4978,
                0.5759, 0.3523, 0.7135, 0.6804, 0.5833, 1.4146, 0.8986, 0.5659,
                0.7069, 0.5338, 0.4889, 0.4917, 0.4069, 0.4999, 0.6866, 0.4093,
                0.5709, 0.6065, 0.6415, 0.4944, 0.5726, 1.2042, 0.5458, 1.6887,
                0.3971, 1.0600, 0.3943, 0.5537, 0.5444, 0.4089, 0.7468, 0.7744
            ]
        else:
            # 16-channel VAE statistics
            mean = [
                -0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508,
                0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921
            ]
            std = [
                2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743,
                3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.9160
            ]
        mean = torch.tensor(mean).view(1, latents.shape[1], 1, 1, 1)
        std = torch.tensor(std).view(1, latents.shape[1], 1, 1, 1)
        inv_std = (1.0 / std).view(1, latents.shape[1], 1, 1, 1)
        if direction == "comfy_to_wrapper":
            latents = (latents - mean.to(latents)) * inv_std.to(latents)
        elif direction == "wrapper_to_comfy":
            latents = latents / inv_std.to(latents) + mean.to(latents)

        samples["samples"] = latents

        return (samples,)

| 430 |
+
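The rescale above is a per-channel affine map, and the two directions are exact inverses of each other. A minimal sketch of the round-trip in plain Python (the two example channel statistics are taken from the 16-channel table above; the latent values are made up):

```python
# Per-channel rescale round-trip sketch (toy 2-channel "latent").
mean = [-0.7571, -0.7089]
std = [2.8184, 1.4541]

def comfy_to_wrapper(latent):
    # (x - mean) / std, applied per channel
    return [(x - m) / s for x, m, s in zip(latent, mean, std)]

def wrapper_to_comfy(latent):
    # inverse map: x * std + mean
    return [x * s + m for x, s, m in zip(latent, std, mean)]

x = [0.25, -1.5]
roundtrip = wrapper_to_comfy(comfy_to_wrapper(x))
# roundtrip recovers x up to floating-point error
```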
class WanVideoSigmaToStep:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "sigma": ("FLOAT", {"default": 0.9, "min": 0.0, "max": 1.0, "step": 0.001}),
            },
        }

    RETURN_TYPES = ("INT", )
    RETURN_NAMES = ("step",)
    FUNCTION = "convert"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Simply passes a float value as an integer, used to set start/end steps with sigma threshold"

    def convert(self, sigma):
        return (sigma,)

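The node itself just forwards the float; the consumer is what interprets it as a threshold against the sampler's sigma schedule. A sketch of that interpretation (a hypothetical helper, not code from this file):

```python
# Hypothetical: map a sigma threshold to a step index in a descending
# sigma schedule, the way a start/end-step consumer might use it.
def sigma_to_step(sigmas, threshold):
    # Return the first step whose sigma has dropped to or below the threshold.
    for step, s in enumerate(sigmas):
        if s <= threshold:
            return step
    return len(sigmas) - 1

sigmas = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
print(sigma_to_step(sigmas, 0.5))  # 3 (sigma 0.4 is the first <= 0.5)
```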
class NormalizeAudioLoudness:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "audio": ("AUDIO",),
            "lufs": ("FLOAT", {"default": -23.0, "min": -100.0, "max": 0.0, "step": 0.1, "tooltip": "Loudness Units relative to Full Scale, higher LUFS values (closer to 0) mean louder audio. Lower LUFS values (more negative) mean quieter audio."}),
            },
        }

    RETURN_TYPES = ("AUDIO", )
    RETURN_NAMES = ("audio", )
    FUNCTION = "normalize"
    CATEGORY = "WanVideoWrapper"

    def normalize(self, audio, lufs):
        audio_input = audio["waveform"]
        sample_rate = audio["sample_rate"]
        if audio_input.dim() == 3:
            audio_input = audio_input.squeeze(0)
        audio_input_np = audio_input.detach().transpose(0, 1).numpy().astype(np.float32)
        audio_input_np = np.ascontiguousarray(audio_input_np)
        normalized_audio = self.loudness_norm(audio_input_np, sr=sample_rate, lufs=lufs)

        out_audio = {"waveform": torch.from_numpy(normalized_audio).transpose(0, 1).unsqueeze(0).float(), "sample_rate": sample_rate}

        return (out_audio, )

    def loudness_norm(self, audio_array, sr=16000, lufs=-23):
        try:
            import pyloudnorm
        except ImportError:
            raise ImportError("pyloudnorm package is not installed")
        meter = pyloudnorm.Meter(sr)
        loudness = meter.integrated_loudness(audio_array)
        if abs(loudness) > 100:
            return audio_array
        normalized_audio = pyloudnorm.normalize.loudness(audio_array, loudness, lufs)
        return normalized_audio

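Under the hood, LUFS normalization of this kind amounts to applying one constant linear gain to every sample: the difference between target and measured loudness in dB, converted to an amplitude factor. A self-contained sketch without `pyloudnorm` (the sample values are made up):

```python
# Sketch of the gain a LUFS normalizer applies (no measurement, assumed values).
def loudness_gain(samples, measured_lufs, target_lufs):
    # A loudness normalizer scales every sample by 10 ** ((target - measured) / 20).
    gain = 10 ** ((target_lufs - measured_lufs) / 20.0)
    return [s * gain for s in samples]

quiet = [0.1, -0.05, 0.02]
louder = loudness_gain(quiet, measured_lufs=-30.0, target_lufs=-23.0)
# -30 -> -23 LUFS is +7 dB, i.e. a linear gain of about 2.2387x
```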
class WanVideoPassImagesFromSamples:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "samples": ("LATENT",),
            }
        }

    RETURN_TYPES = ("IMAGE", "STRING",)
    RETURN_NAMES = ("images", "output_path",)
    OUTPUT_TOOLTIPS = ("Decoded images from the samples dictionary", "Output path if provided in the samples dictionary",)
    FUNCTION = "decode"
    CATEGORY = "WanVideoWrapper"
    DESCRIPTION = "Gets possibly already decoded images from the samples dictionary, used with Multi/InfiniteTalk sampling"

    def decode(self, samples):
        video = samples.get("video", None)
        if video is None:
            raise ValueError("No decoded images found in the samples dictionary")
        video.clamp_(-1.0, 1.0)
        video.add_(1.0).div_(2.0)
        return video.cpu().float(), samples.get("output_path", "")


class FaceMaskFromPoseKeypoints:
    @classmethod
    def INPUT_TYPES(s):
        input_types = {
            "required": {
                "pose_kps": ("POSE_KEYPOINT",),
                "person_index": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1, "tooltip": "Index of the person to start with"}),
            }
        }
        return input_types

    RETURN_TYPES = ("MASK",)
    FUNCTION = "createmask"
    CATEGORY = "ControlNet Preprocessors/Pose Keypoint Postprocess"

    def createmask(self, pose_kps, person_index):
        pose_frames = pose_kps
        prev_center = None
        np_frames = []
        for i, pose_frame in enumerate(pose_frames):
            selected_idx, prev_center = self.select_closest_person(pose_frame, person_index if i == 0 else prev_center)
            np_frames.append(self.draw_kps(pose_frame, selected_idx))

        if not np_frames:
            # Handle case where no frames were processed
            log.warning("No valid pose frames found, returning empty mask")
            return (torch.zeros((1, 64, 64), dtype=torch.float32),)

        np_frames = np.stack(np_frames, axis=0)
        tensor = torch.from_numpy(np_frames).float() / 255.
        tensor = tensor[:, :, :, 0]
        return (tensor,)

    def select_closest_person(self, pose_frame, prev_center_or_index):
        people = pose_frame["people"]
        if not people:
            return -1, None

        centers = []
        valid_people_indices = []

        for idx, person in enumerate(people):
            # Check if face keypoints exist and are valid
            if "face_keypoints_2d" not in person or not person["face_keypoints_2d"]:
                continue

            kps = np.array(person["face_keypoints_2d"])
            if len(kps) == 0:
                continue

            n = len(kps) // 3
            if n == 0:
                continue

            facial_kps = rearrange(kps, "(n c) -> n c", n=n, c=3)[:, :2]

            # Check if we have valid coordinates (not all zeros)
            if np.all(facial_kps == 0):
                continue

            center = facial_kps.mean(axis=0)

            # Check if center is valid (not NaN or infinite)
            if np.isnan(center).any() or np.isinf(center).any():
                continue

            centers.append(center)
            valid_people_indices.append(idx)

        if not centers:
            return -1, None

        if isinstance(prev_center_or_index, (int, np.integer)):
            # First frame: use person_index, but map to valid people
            if 0 <= prev_center_or_index < len(valid_people_indices):
                idx = valid_people_indices[prev_center_or_index]
                return idx, centers[prev_center_or_index]
            elif valid_people_indices:
                # Fall back to the first valid person
                idx = valid_people_indices[0]
                return idx, centers[0]
            else:
                return -1, None
        elif prev_center_or_index is not None:
            # Find the person closest to the previous center
            prev_center = np.array(prev_center_or_index)
            dists = [np.linalg.norm(center - prev_center) for center in centers]
            min_idx = int(np.argmin(dists))
            actual_idx = valid_people_indices[min_idx]
            return actual_idx, centers[min_idx]
        else:
            # prev_center_or_index is None, fall back to the first valid person
            if valid_people_indices:
                idx = valid_people_indices[0]
                return idx, centers[0]
            else:
                return -1, None

    def draw_kps(self, pose_frame, person_index):
        import cv2
        width, height = pose_frame["canvas_width"], pose_frame["canvas_height"]
        canvas = np.zeros((height, width, 3), dtype=np.uint8)
        people = pose_frame["people"]

        if person_index < 0 or person_index >= len(people):
            return canvas  # Out of bounds, return blank

        person = people[person_index]

        # Check if face keypoints exist and are valid
        if "face_keypoints_2d" not in person or not person["face_keypoints_2d"]:
            return canvas  # No face keypoints, return blank

        face_kps_data = person["face_keypoints_2d"]
        if len(face_kps_data) == 0:
            return canvas  # Empty keypoints, return blank

        n = len(face_kps_data) // 3
        if n < 17:  # Need at least 17 points for the outer contour
            return canvas  # Not enough keypoints, return blank

        facial_kps = rearrange(np.array(face_kps_data), "(n c) -> n c", n=n, c=3)[:, :2]

        # Check if we have valid coordinates (not all zeros)
        if np.all(facial_kps == 0):
            return canvas  # All keypoints are zero, return blank

        # Check for NaN or infinite values
        if np.isnan(facial_kps).any() or np.isinf(facial_kps).any():
            return canvas  # Invalid coordinates, return blank

        # Check for negative coordinates, which would create streaks
        if np.any(facial_kps < 0):
            return canvas  # Negative coordinates, likely bad detection

        # Check if coordinates are reasonable (points hugging the edges often indicate bad detection)
        min_margin = 5  # Minimum distance from edges
        if (np.any(facial_kps[:, 0] < min_margin) or
            np.any(facial_kps[:, 1] < min_margin) or
            np.any(facial_kps[:, 0] > width - min_margin) or
            np.any(facial_kps[:, 1] > height - min_margin)):
            # Check if this looks like a streak to the corner (many points near 0,0)
            corner_points = np.sum((facial_kps[:, 0] < min_margin) & (facial_kps[:, 1] < min_margin))
            if corner_points > 3:  # Too many points near the corner, likely bad detection
                return canvas

        facial_kps = facial_kps.astype(np.int32)

        # Ensure coordinates are within canvas bounds
        facial_kps[:, 0] = np.clip(facial_kps[:, 0], 0, width - 1)
        facial_kps[:, 1] = np.clip(facial_kps[:, 1], 0, height - 1)

        part_color = (255, 255, 255)
        outer_contour = facial_kps[:17]

        # Additional validation before drawing:
        # reject contours whose points are too spread out (indicating bad detection)
        if len(outer_contour) >= 3:
            # Calculate the bounding box of the contour
            min_x, min_y = np.min(outer_contour, axis=0)
            max_x, max_y = np.max(outer_contour, axis=0)
            contour_width = max_x - min_x
            contour_height = max_y - min_y

            # If the contour spans more than 80% of the canvas, likely bad detection
            if (contour_width > 0.8 * width or contour_height > 0.8 * height):
                return canvas

        # Check that we have a valid contour (at least 3 unique points)
        unique_points = np.unique(outer_contour, axis=0)
        if len(unique_points) >= 3:
            # Final check: ensure the contour area is reasonable
            contour_area = cv2.contourArea(outer_contour)
            canvas_area = width * height

            # If the contour is less than 0.1% or more than 50% of the canvas, skip it
            if 0.001 * canvas_area <= contour_area <= 0.5 * canvas_area:
                cv2.fillPoly(canvas, pts=[outer_contour], color=part_color)

        return canvas

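The frame-to-frame tracking in `select_closest_person` reduces to picking the candidate face center nearest to the previous frame's center. A toy sketch of that core step in plain Python (the coordinates are made up):

```python
# Sketch of nearest-center tracking across frames (toy data).
import math

def closest_center(centers, prev_center):
    # Index of the candidate center nearest to the previous frame's center.
    dists = [math.dist(c, prev_center) for c in centers]
    return dists.index(min(dists))

centers = [(50.0, 60.0), (200.0, 80.0), (120.0, 300.0)]
print(closest_center(centers, (195.0, 90.0)))  # 1: the second face is nearest
```

Keeping the returned center as the query for the next frame is what lets the node follow one person even when detection order changes between frames.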
class DrawGaussianNoiseOnImage:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {
            "image": ("IMAGE", ),
            "mask": ("MASK", ),
            },
            "optional": {
                "device": (["cpu", "gpu"], {"default": "cpu", "tooltip": "Device to use for processing"}),
                "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
            }
        }

    RETURN_TYPES = ("IMAGE", )
    RETURN_NAMES = ("images",)
    FUNCTION = "apply"
    CATEGORY = "KJNodes/masking"
    DESCRIPTION = "Fills the background (masked area) with Gaussian noise sampled using the mean and variance of the subject (unmasked) region."

    def apply(self, image, mask, device="cpu", seed=0):
        B, H, W, C = image.shape
        BM, HM, WM = mask.shape

        processing_device = main_device if device == "gpu" else torch.device("cpu")

        in_masks = mask.clone().to(processing_device)
        in_images = image.clone().to(processing_device)

        # Resize mask to match image dimensions
        if HM != H or WM != W:
            in_masks = F.interpolate(in_masks.unsqueeze(1), size=(H, W), mode='nearest-exact').squeeze(1)

        # Match batch sizes
        if B > BM:
            in_masks = in_masks.repeat((B + BM - 1) // BM, 1, 1)[:B]
        elif BM > B:
            in_masks = in_masks[:B]

        output_images = []

        # Set random seed for reproducibility
        generator = torch.Generator(device=processing_device).manual_seed(seed)

        for i in tqdm(range(B), desc="DrawGaussianNoiseOnImage batch"):
            curr_mask = in_masks[i]
            img_idx = min(i, B - 1)
            curr_image = in_images[img_idx]

            # Expand mask to 3 channels
            mask_expanded = curr_mask.unsqueeze(-1).expand(-1, -1, 3)

            # Calculate mean and std per channel from the subject region (where mask is 1)
            subject_mask = mask_expanded > 0.5

            # Initialize noise tensor
            noise = torch.zeros_like(curr_image)

            for c in range(C):
                channel = curr_image[:, :, c]
                channel_mask = subject_mask[:, :, c]

                if channel_mask.sum() > 0:
                    # Get subject pixels
                    subject_pixels = channel[channel_mask]

                    # Calculate statistics
                    mean = subject_pixels.mean()
                    std = subject_pixels.std()

                    # Generate Gaussian noise for this channel
                    noise[:, :, c] = torch.normal(mean=mean.item(), std=std.item(),
                                                  size=(H, W), generator=generator,
                                                  device=processing_device)

            # Clamp noise to valid range
            noise = torch.clamp(noise, 0.0, 1.0)

            # Apply: keep subject, fill background with noise
            masked_image = curr_image * mask_expanded + noise * (1 - mask_expanded)
            output_images.append(masked_image)

        # If no masks were processed, return an empty tensor
        if not output_images:
            return (torch.zeros((0, H, W, 3), dtype=image.dtype),)

        out_rgb = torch.stack(output_images, dim=0).cpu()

        return (out_rgb, )

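The core idea — estimate mean and standard deviation from the subject pixels, then draw background pixels from a Gaussian with those statistics — can be sketched without torch on a toy one-dimensional "image" (values are made up):

```python
# Sketch of the statistics-matched noise fill (pure Python, toy 1-D "image").
import random
import statistics

def fill_background_with_noise(pixels, mask, seed=0):
    # Estimate mean/std from subject (mask == 1) pixels, then replace
    # background (mask == 0) pixels with clamped Gaussian noise from those stats.
    rng = random.Random(seed)
    subject = [p for p, m in zip(pixels, mask) if m == 1]
    mu, sigma = statistics.mean(subject), statistics.pstdev(subject)
    return [p if m == 1 else min(max(rng.gauss(mu, sigma), 0.0), 1.0)
            for p, m in zip(pixels, mask)]

pixels = [0.8, 0.7, 0.75, 0.0, 0.0]
mask = [1, 1, 1, 0, 0]
out = fill_background_with_noise(pixels, mask)
# subject pixels pass through unchanged; background pixels become clamped noise
```

Matching the background noise to the subject's statistics keeps the filled region from standing out as an obvious flat border.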
NODE_CLASS_MAPPINGS = {
    "WanVideoImageResizeToClosest": WanVideoImageResizeToClosest,
    "WanVideoVACEStartToEndFrame": WanVideoVACEStartToEndFrame,
    "ExtractStartFramesForContinuations": ExtractStartFramesForContinuations,
    "CreateCFGScheduleFloatList": CreateCFGScheduleFloatList,
    "DummyComfyWanModelObject": DummyComfyWanModelObject,
    "WanVideoLatentReScale": WanVideoLatentReScale,
    "CreateScheduleFloatList": CreateScheduleFloatList,
    "WanVideoSigmaToStep": WanVideoSigmaToStep,
    "NormalizeAudioLoudness": NormalizeAudioLoudness,
    "WanVideoPassImagesFromSamples": WanVideoPassImagesFromSamples,
    "FaceMaskFromPoseKeypoints": FaceMaskFromPoseKeypoints,
    "DrawGaussianNoiseOnImage": DrawGaussianNoiseOnImage,
}
NODE_DISPLAY_NAME_MAPPINGS = {
    "WanVideoImageResizeToClosest": "WanVideo Image Resize To Closest",
    "WanVideoVACEStartToEndFrame": "WanVideo VACE Start To End Frame",
    "ExtractStartFramesForContinuations": "Extract Start Frames For Continuations",
    "CreateCFGScheduleFloatList": "Create CFG Schedule Float List",
    "DummyComfyWanModelObject": "Dummy Comfy Wan Model Object",
    "WanVideoLatentReScale": "WanVideo Latent ReScale",
    "CreateScheduleFloatList": "Create Schedule Float List",
    "WanVideoSigmaToStep": "WanVideo Sigma To Step",
    "NormalizeAudioLoudness": "Normalize Audio Loudness",
    "WanVideoPassImagesFromSamples": "WanVideo Pass Images From Samples",
    "FaceMaskFromPoseKeypoints": "Face Mask From Pose Keypoints",
    "DrawGaussianNoiseOnImage": "Draw Gaussian Noise On Image",
}
Mega-v1/Rapid-AIO-Mega.json
ADDED
@@ -0,0 +1,768 @@
{
  "id": "e6c78bba-ef40-40c6-8b95-20bd0020ddbb",
  "revision": 0,
  "last_node_id": 38,
  "last_link_id": 158,
  "nodes": [
    {
      "id": 8,
      "type": "KSampler",
      "pos": [1601.7471923828125, 985.068603515625],
      "size": [270, 262],
      "flags": {},
      "order": 10,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 123},
        {"name": "positive", "type": "CONDITIONING", "link": 144},
        {"name": "negative", "type": "CONDITIONING", "link": 145},
        {"name": "latent_image", "type": "LATENT", "link": 143}
      ],
      "outputs": [
        {"name": "LATENT", "type": "LATENT", "links": [149]}
      ],
      "properties": {"Node name for S&R": "KSampler"},
      "widgets_values": [7567358653673, "fixed", 4, 1, "ipndm", "sgm_uniform", 1]
    },
    {
      "id": 34,
      "type": "WanVideoVACEStartToEndFrame",
      "pos": [798.5907592773438, 735.5933837890625],
      "size": [329.9634704589844, 190],
      "flags": {},
      "order": 5,
      "mode": 0,
      "inputs": [
        {"name": "start_image", "shape": 7, "type": "IMAGE", "link": 139},
        {"name": "end_image", "shape": 7, "type": "IMAGE", "link": 156},
        {"name": "control_images", "shape": 7, "type": "IMAGE", "link": null},
        {"name": "inpaint_mask", "shape": 7, "type": "MASK", "link": null},
        {"name": "num_frames", "type": "INT", "widget": {"name": "num_frames"}, "link": 157}
      ],
      "outputs": [
        {"name": "images", "type": "IMAGE", "links": [141]},
        {"name": "masks", "type": "MASK", "links": [148]}
      ],
      "properties": {"Node name for S&R": "WanVideoVACEStartToEndFrame"},
      "widgets_values": [65, 0.5, 0, -1]
    },
    {
      "id": 11,
      "type": "VAEDecode",
      "pos": [1184.343017578125, 1307.923583984375],
      "size": [140, 46],
      "flags": {},
      "order": 11,
      "mode": 0,
      "inputs": [
        {"name": "samples", "type": "LATENT", "link": 149},
        {"name": "vae", "type": "VAE", "link": 155}
      ],
      "outputs": [
        {"name": "IMAGE", "type": "IMAGE", "links": [16]}
      ],
      "properties": {"Node name for S&R": "VAEDecode"},
      "widgets_values": []
    },
    {
      "id": 32,
      "type": "ModelSamplingSD3",
      "pos": [1258.5234375, 864.0829467773438],
      "size": [270, 58],
      "flags": {},
      "order": 6,
      "mode": 0,
      "inputs": [
        {"name": "model", "type": "MODEL", "link": 150}
      ],
      "outputs": [
        {"name": "MODEL", "type": "MODEL", "links": [123]}
      ],
      "properties": {"Node name for S&R": "ModelSamplingSD3"},
      "widgets_values": [8]
    },
    {
      "id": 10,
      "type": "CLIPTextEncode",
      "pos": [742.6182250976562, 1414.9295654296875],
      "size": [391.07098388671875, 88],
      "flags": {},
      "order": 8,
      "mode": 0,
      "inputs": [
        {"name": "clip", "type": "CLIP", "link": 153}
      ],
      "outputs": [
        {"name": "CONDITIONING", "type": "CONDITIONING", "links": [137]}
      ],
      "title": "Negative Prompt (leave blank cuz 1 CFG)",
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": [""]
    },
    {
      "id": 37,
      "type": "LoadImage",
      "pos": [320.8406066894531, 992.9769287109375],
      "size": [375.5744934082031, 326],
      "flags": {},
      "order": 0,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "IMAGE", "type": "IMAGE", "links": [156]},
        {"name": "MASK", "type": "MASK", "links": null}
      ],
      "title": "End Frame (Optional)",
      "properties": {"Node name for S&R": "LoadImage"},
      "widgets_values": ["ComfyUI_temp_zmuag_00002_.png", "image"]
    },
    {
      "id": 16,
      "type": "LoadImage",
      "pos": [335.1852722167969, 609.4923095703125],
      "size": [375.5744934082031, 326],
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "IMAGE", "type": "IMAGE", "links": [139]},
        {"name": "MASK", "type": "MASK", "links": null}
      ],
      "title": "Start Frame (Optional)",
      "properties": {"Node name for S&R": "LoadImage"},
      "widgets_values": ["example.png", "image"]
    },
    {
      "id": 38,
      "type": "INTConstant",
      "pos": [1258.918701171875, 722.7567138671875],
      "size": [260.55755615234375, 84.0733871459961],
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "value", "type": "INT", "links": [157, 158]}
      ],
      "title": "Number of Frames",
      "properties": {"Node name for S&R": "INTConstant"},
      "widgets_values": [81],
      "color": "#1b4669",
      "bgcolor": "#29699c"
    },
    {
      "id": 9,
      "type": "CLIPTextEncode",
      "pos": [735.2056274414062, 1159.9134521484375],
      "size": [400, 200],
      "flags": {},
      "order": 7,
      "mode": 0,
      "inputs": [
        {"name": "clip", "type": "CLIP", "link": 152}
      ],
      "outputs": [
        {"name": "CONDITIONING", "type": "CONDITIONING", "links": [136]}
      ],
      "properties": {"Node name for S&R": "CLIPTextEncode"},
      "widgets_values": ["A silly cartoon of a girl in a pink shirt transforms into a ninja sneaking down a hallway."]
    },
    {
      "id": 28,
      "type": "WanVaceToVideo",
      "pos": [1265.000244140625, 992.2266845703125],
      "size": [270, 254],
      "flags": {},
      "order": 9,
      "mode": 0,
      "inputs": [
        {"name": "positive", "type": "CONDITIONING", "link": 136},
        {"name": "negative", "type": "CONDITIONING", "link": 137},
        {"name": "vae", "type": "VAE", "link": 154},
        {"name": "control_video", "shape": 7, "type": "IMAGE", "link": 141},
        {"name": "control_masks", "shape": 7, "type": "MASK", "link": 148},
        {"name": "reference_image", "shape": 7, "type": "IMAGE", "link": null},
        {"name": "length", "type": "INT", "widget": {"name": "length"}, "link": 158}
      ],
      "outputs": [
        {"name": "positive", "type": "CONDITIONING", "links": [144]},
        {"name": "negative", "type": "CONDITIONING", "links": [145]},
        {"name": "latent", "type": "LATENT", "links": [143]},
        {"name": "trim_latent", "type": "INT", "links": []}
      ],
      "properties": {"Node name for S&R": "WanVaceToVideo"},
      "widgets_values": [
|
| 491 |
+
512,
|
| 492 |
+
512,
|
| 493 |
+
65,
|
| 494 |
+
1,
|
| 495 |
+
1
|
| 496 |
+
]
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"id": 26,
|
| 500 |
+
"type": "CheckpointLoaderSimple",
|
| 501 |
+
"pos": [
|
| 502 |
+
717.7600708007812,
|
| 503 |
+
992.8963623046875
|
| 504 |
+
],
|
| 505 |
+
"size": [
|
| 506 |
+
411.2327880859375,
|
| 507 |
+
98
|
| 508 |
+
],
|
| 509 |
+
"flags": {},
|
| 510 |
+
"order": 3,
|
| 511 |
+
"mode": 0,
|
| 512 |
+
"inputs": [],
|
| 513 |
+
"outputs": [
|
| 514 |
+
{
|
| 515 |
+
"name": "MODEL",
|
| 516 |
+
"type": "MODEL",
|
| 517 |
+
"links": [
|
| 518 |
+
150
|
| 519 |
+
]
|
| 520 |
+
},
|
| 521 |
+
{
|
| 522 |
+
"name": "CLIP",
|
| 523 |
+
"type": "CLIP",
|
| 524 |
+
"links": [
|
| 525 |
+
152,
|
| 526 |
+
153
|
| 527 |
+
]
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"name": "VAE",
|
| 531 |
+
"type": "VAE",
|
| 532 |
+
"links": [
|
| 533 |
+
154,
|
| 534 |
+
155
|
| 535 |
+
]
|
| 536 |
+
}
|
| 537 |
+
],
|
| 538 |
+
"properties": {
|
| 539 |
+
"Node name for S&R": "CheckpointLoaderSimple"
|
| 540 |
+
},
|
| 541 |
+
"widgets_values": [
|
| 542 |
+
"WAN\\wan2.2-rapid-mega-aio-v1.safetensors"
|
| 543 |
+
]
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"id": 12,
|
| 547 |
+
"type": "PreviewImage",
|
| 548 |
+
"pos": [
|
| 549 |
+
1357.249755859375,
|
| 550 |
+
1303.7225341796875
|
| 551 |
+
],
|
| 552 |
+
"size": [
|
| 553 |
+
430.8839416503906,
|
| 554 |
+
367.1125183105469
|
| 555 |
+
],
|
| 556 |
+
"flags": {},
|
| 557 |
+
"order": 12,
|
| 558 |
+
"mode": 0,
|
| 559 |
+
"inputs": [
|
| 560 |
+
{
|
| 561 |
+
"name": "images",
|
| 562 |
+
"type": "IMAGE",
|
| 563 |
+
"link": 16
|
| 564 |
+
}
|
| 565 |
+
],
|
| 566 |
+
"outputs": [],
|
| 567 |
+
"properties": {
|
| 568 |
+
"Node name for S&R": "PreviewImage"
|
| 569 |
+
},
|
| 570 |
+
"widgets_values": []
|
| 571 |
+
},
|
| 572 |
+
{
|
| 573 |
+
"id": 36,
|
| 574 |
+
"type": "Note",
|
| 575 |
+
"pos": [
|
| 576 |
+
1602.0777587890625,
|
| 577 |
+
788.3606567382812
|
| 578 |
+
],
|
| 579 |
+
"size": [
|
| 580 |
+
261.9561462402344,
|
| 581 |
+
140.74337768554688
|
| 582 |
+
],
|
| 583 |
+
"flags": {},
|
| 584 |
+
"order": 4,
|
| 585 |
+
"mode": 0,
|
| 586 |
+
"inputs": [],
|
| 587 |
+
"outputs": [],
|
| 588 |
+
"properties": {},
|
| 589 |
+
"widgets_values": [
|
| 590 |
+
"T2V? WanVaceToVideo strength = 0.0\nI2V? WanVaceToVideo strength = 1.0\n(Can experiment with values 0-1)\n\n\n\n"
|
| 591 |
+
],
|
| 592 |
+
"color": "#432",
|
| 593 |
+
"bgcolor": "#653"
|
| 594 |
+
}
|
| 595 |
+
],
|
| 596 |
+
"links": [
|
| 597 |
+
[
|
| 598 |
+
16,
|
| 599 |
+
11,
|
| 600 |
+
0,
|
| 601 |
+
12,
|
| 602 |
+
0,
|
| 603 |
+
"IMAGE"
|
| 604 |
+
],
|
| 605 |
+
[
|
| 606 |
+
123,
|
| 607 |
+
32,
|
| 608 |
+
0,
|
| 609 |
+
8,
|
| 610 |
+
0,
|
| 611 |
+
"MODEL"
|
| 612 |
+
],
|
| 613 |
+
[
|
| 614 |
+
136,
|
| 615 |
+
9,
|
| 616 |
+
0,
|
| 617 |
+
28,
|
| 618 |
+
0,
|
| 619 |
+
"CONDITIONING"
|
| 620 |
+
],
|
| 621 |
+
[
|
| 622 |
+
137,
|
| 623 |
+
10,
|
| 624 |
+
0,
|
| 625 |
+
28,
|
| 626 |
+
1,
|
| 627 |
+
"CONDITIONING"
|
| 628 |
+
],
|
| 629 |
+
[
|
| 630 |
+
139,
|
| 631 |
+
16,
|
| 632 |
+
0,
|
| 633 |
+
34,
|
| 634 |
+
0,
|
| 635 |
+
"IMAGE"
|
| 636 |
+
],
|
| 637 |
+
[
|
| 638 |
+
141,
|
| 639 |
+
34,
|
| 640 |
+
0,
|
| 641 |
+
28,
|
| 642 |
+
3,
|
| 643 |
+
"IMAGE"
|
| 644 |
+
],
|
| 645 |
+
[
|
| 646 |
+
143,
|
| 647 |
+
28,
|
| 648 |
+
2,
|
| 649 |
+
8,
|
| 650 |
+
3,
|
| 651 |
+
"LATENT"
|
| 652 |
+
],
|
| 653 |
+
[
|
| 654 |
+
144,
|
| 655 |
+
28,
|
| 656 |
+
0,
|
| 657 |
+
8,
|
| 658 |
+
1,
|
| 659 |
+
"CONDITIONING"
|
| 660 |
+
],
|
| 661 |
+
[
|
| 662 |
+
145,
|
| 663 |
+
28,
|
| 664 |
+
1,
|
| 665 |
+
8,
|
| 666 |
+
2,
|
| 667 |
+
"CONDITIONING"
|
| 668 |
+
],
|
| 669 |
+
[
|
| 670 |
+
148,
|
| 671 |
+
34,
|
| 672 |
+
1,
|
| 673 |
+
28,
|
| 674 |
+
4,
|
| 675 |
+
"MASK"
|
| 676 |
+
],
|
| 677 |
+
[
|
| 678 |
+
149,
|
| 679 |
+
8,
|
| 680 |
+
0,
|
| 681 |
+
11,
|
| 682 |
+
0,
|
| 683 |
+
"LATENT"
|
| 684 |
+
],
|
| 685 |
+
[
|
| 686 |
+
150,
|
| 687 |
+
26,
|
| 688 |
+
0,
|
| 689 |
+
32,
|
| 690 |
+
0,
|
| 691 |
+
"MODEL"
|
| 692 |
+
],
|
| 693 |
+
[
|
| 694 |
+
152,
|
| 695 |
+
26,
|
| 696 |
+
1,
|
| 697 |
+
9,
|
| 698 |
+
0,
|
| 699 |
+
"CLIP"
|
| 700 |
+
],
|
| 701 |
+
[
|
| 702 |
+
153,
|
| 703 |
+
26,
|
| 704 |
+
1,
|
| 705 |
+
10,
|
| 706 |
+
0,
|
| 707 |
+
"CLIP"
|
| 708 |
+
],
|
| 709 |
+
[
|
| 710 |
+
154,
|
| 711 |
+
26,
|
| 712 |
+
2,
|
| 713 |
+
28,
|
| 714 |
+
2,
|
| 715 |
+
"VAE"
|
| 716 |
+
],
|
| 717 |
+
[
|
| 718 |
+
155,
|
| 719 |
+
26,
|
| 720 |
+
2,
|
| 721 |
+
11,
|
| 722 |
+
1,
|
| 723 |
+
"VAE"
|
| 724 |
+
],
|
| 725 |
+
[
|
| 726 |
+
156,
|
| 727 |
+
37,
|
| 728 |
+
0,
|
| 729 |
+
34,
|
| 730 |
+
1,
|
| 731 |
+
"IMAGE"
|
| 732 |
+
],
|
| 733 |
+
[
|
| 734 |
+
157,
|
| 735 |
+
38,
|
| 736 |
+
0,
|
| 737 |
+
34,
|
| 738 |
+
4,
|
| 739 |
+
"INT"
|
| 740 |
+
],
|
| 741 |
+
[
|
| 742 |
+
158,
|
| 743 |
+
38,
|
| 744 |
+
0,
|
| 745 |
+
28,
|
| 746 |
+
6,
|
| 747 |
+
"INT"
|
| 748 |
+
]
|
| 749 |
+
],
|
| 750 |
+
"groups": [],
|
| 751 |
+
"config": {},
|
| 752 |
+
"extra": {
|
| 753 |
+
"ds": {
|
| 754 |
+
"scale": 0.8643177759582871,
|
| 755 |
+
"offset": [
|
| 756 |
+
-279.1406868949361,
|
| 757 |
+
-575.7935361426574
|
| 758 |
+
]
|
| 759 |
+
},
|
| 760 |
+
"ue_links": [],
|
| 761 |
+
"frontendVersion": "1.25.11",
|
| 762 |
+
"VHS_latentpreview": false,
|
| 763 |
+
"VHS_latentpreviewrate": 0,
|
| 764 |
+
"VHS_MetadataImage": true,
|
| 765 |
+
"VHS_KeepIntermediate": true
|
| 766 |
+
},
|
| 767 |
+
"version": 0.4
|
| 768 |
+
}
|
Mega-v1/wan2.2-rapid-mega-aio-nsfw-v1.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de5bebfc749a2662968b3c6a8c43e738cb7af2384036d36f074fb6af46a6a124
size 24334485143

Mega-v1/wan2.2-rapid-mega-aio-v1.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9be8c6ea36b59e2d2c16aa8359f7b40dacd4b127f5578a7722d3b9286852c169
size 24334481079

Mega-v10/.use_mega_v3_workflow
ADDED
File without changes

Mega-v10/wan2.2-rapid-mega-aio-nsfw-v10.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:101907e2eff6f4b704deab04419524c4efcd6fb3010864952e2a5afd97589780
size 24334500071

Mega-v10/wan2.2-rapid-mega-aio-v10.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9f38cd990c25d83967a89d0bbf3c33b2a13185d830513bf9852cfb889827505
size 24334500031

Mega-v11/.use_mega_v3_workflow
ADDED
File without changes

Mega-v11/wan2.2-rapid-mega-aio-nsfw-v11.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:814367648ffb98a32a4252d5fc217902b71166a17f1808bd42741cf3fad41219
size 24334498751

Mega-v11/wan2.2-rapid-mega-aio-v11.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69aa25d77b7053f1e1718bdf868122c8f96e7398668079374c0c9eaf4fee5259
size 24334498767

Mega-v12/.use_mega_v3_workflow
ADDED
File without changes

Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.1.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:77bed3d08166803a6085afbb8a1e2fb749f9710da4e94255022df89a86f6ac41
size 23284018488

Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.2.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97c9f3f993c463e22497a6f85bc428dda047bd2d381383a8e77e26ded6312b1e
size 23284018448

Mega-v12/wan2.2-rapid-mega-aio-nsfw-v12.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a926232c49af7bafeabc2bcbe8650ef5d9857fa980280f8ebdbddadbe260e2a
size 23284017872

Mega-v12/wan2.2-rapid-mega-aio-v12.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7a5ac03b86bd8fc935fd7fa74b778b2842a80ab3228e5993fbeeab64239a5b8
size 23284017800

Mega-v2/Rapid-AIO-Mega.json
ADDED
@@ -0,0 +1,815 @@
| 1 |
+
{
|
| 2 |
+
"id": "e6c78bba-ef40-40c6-8b95-20bd0020ddbb",
|
| 3 |
+
"revision": 0,
|
| 4 |
+
"last_node_id": 48,
|
| 5 |
+
"last_link_id": 163,
|
| 6 |
+
"nodes": [
|
| 7 |
+
{
|
| 8 |
+
"id": 34,
|
| 9 |
+
"type": "WanVideoVACEStartToEndFrame",
|
| 10 |
+
"pos": [
|
| 11 |
+
798.5907592773438,
|
| 12 |
+
735.5933837890625
|
| 13 |
+
],
|
| 14 |
+
"size": [
|
| 15 |
+
329.9634704589844,
|
| 16 |
+
190
|
| 17 |
+
],
|
| 18 |
+
"flags": {},
|
| 19 |
+
"order": 5,
|
| 20 |
+
"mode": 4,
|
| 21 |
+
"inputs": [
|
| 22 |
+
{
|
| 23 |
+
"name": "start_image",
|
| 24 |
+
"shape": 7,
|
| 25 |
+
"type": "IMAGE",
|
| 26 |
+
"link": 161
|
| 27 |
+
},
|
| 28 |
+
{
|
| 29 |
+
"name": "end_image",
|
| 30 |
+
"shape": 7,
|
| 31 |
+
"type": "IMAGE",
|
| 32 |
+
"link": 156
|
| 33 |
+
},
|
| 34 |
+
{
|
| 35 |
+
"name": "control_images",
|
| 36 |
+
"shape": 7,
|
| 37 |
+
"type": "IMAGE",
|
| 38 |
+
"link": null
|
| 39 |
+
},
|
| 40 |
+
{
|
| 41 |
+
"name": "inpaint_mask",
|
| 42 |
+
"shape": 7,
|
| 43 |
+
"type": "MASK",
|
| 44 |
+
"link": null
|
| 45 |
+
},
|
| 46 |
+
{
|
| 47 |
+
"name": "num_frames",
|
| 48 |
+
"type": "INT",
|
| 49 |
+
"widget": {
|
| 50 |
+
"name": "num_frames"
|
| 51 |
+
},
|
| 52 |
+
"link": 163
|
| 53 |
+
}
|
| 54 |
+
],
|
| 55 |
+
"outputs": [
|
| 56 |
+
{
|
| 57 |
+
"name": "images",
|
| 58 |
+
"type": "IMAGE",
|
| 59 |
+
"links": [
|
| 60 |
+
141
|
| 61 |
+
]
|
| 62 |
+
},
|
| 63 |
+
{
|
| 64 |
+
"name": "masks",
|
| 65 |
+
"type": "MASK",
|
| 66 |
+
"links": [
|
| 67 |
+
148
|
| 68 |
+
]
|
| 69 |
+
}
|
| 70 |
+
],
|
| 71 |
+
"properties": {
|
| 72 |
+
"Node name for S&R": "WanVideoVACEStartToEndFrame"
|
| 73 |
+
},
|
| 74 |
+
"widgets_values": [
|
| 75 |
+
65,
|
| 76 |
+
0.5,
|
| 77 |
+
0,
|
| 78 |
+
-1
|
| 79 |
+
]
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"id": 32,
|
| 83 |
+
"type": "ModelSamplingSD3",
|
| 84 |
+
"pos": [
|
| 85 |
+
1258.5234375,
|
| 86 |
+
864.0829467773438
|
| 87 |
+
],
|
| 88 |
+
"size": [
|
| 89 |
+
270,
|
| 90 |
+
58
|
| 91 |
+
],
|
| 92 |
+
"flags": {},
|
| 93 |
+
"order": 6,
|
| 94 |
+
"mode": 0,
|
| 95 |
+
"inputs": [
|
| 96 |
+
{
|
| 97 |
+
"name": "model",
|
| 98 |
+
"type": "MODEL",
|
| 99 |
+
"link": 150
|
| 100 |
+
}
|
| 101 |
+
],
|
| 102 |
+
"outputs": [
|
| 103 |
+
{
|
| 104 |
+
"name": "MODEL",
|
| 105 |
+
"type": "MODEL",
|
| 106 |
+
"links": [
|
| 107 |
+
123
|
| 108 |
+
]
|
| 109 |
+
}
|
| 110 |
+
],
|
| 111 |
+
"properties": {
|
| 112 |
+
"Node name for S&R": "ModelSamplingSD3"
|
| 113 |
+
},
|
| 114 |
+
"widgets_values": [
|
| 115 |
+
8
|
| 116 |
+
]
|
| 117 |
+
},
|
| 118 |
+
{
|
| 119 |
+
"id": 10,
|
| 120 |
+
"type": "CLIPTextEncode",
|
| 121 |
+
"pos": [
|
| 122 |
+
742.6182250976562,
|
| 123 |
+
1414.9295654296875
|
| 124 |
+
],
|
| 125 |
+
"size": [
|
| 126 |
+
391.07098388671875,
|
| 127 |
+
88
|
| 128 |
+
],
|
| 129 |
+
"flags": {},
|
| 130 |
+
"order": 8,
|
| 131 |
+
"mode": 0,
|
| 132 |
+
"inputs": [
|
| 133 |
+
{
|
| 134 |
+
"name": "clip",
|
| 135 |
+
"type": "CLIP",
|
| 136 |
+
"link": 153
|
| 137 |
+
}
|
| 138 |
+
],
|
| 139 |
+
"outputs": [
|
| 140 |
+
{
|
| 141 |
+
"name": "CONDITIONING",
|
| 142 |
+
"type": "CONDITIONING",
|
| 143 |
+
"links": [
|
| 144 |
+
137
|
| 145 |
+
]
|
| 146 |
+
}
|
| 147 |
+
],
|
| 148 |
+
"title": "Negative Prompt (leave blank cuz 1 CFG)",
|
| 149 |
+
"properties": {
|
| 150 |
+
"Node name for S&R": "CLIPTextEncode"
|
| 151 |
+
},
|
| 152 |
+
"widgets_values": [
|
| 153 |
+
""
|
| 154 |
+
]
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"id": 11,
|
| 158 |
+
"type": "VAEDecode",
|
| 159 |
+
"pos": [
|
| 160 |
+
1184.343017578125,
|
| 161 |
+
1307.923583984375
|
| 162 |
+
],
|
| 163 |
+
"size": [
|
| 164 |
+
140,
|
| 165 |
+
46
|
| 166 |
+
],
|
| 167 |
+
"flags": {},
|
| 168 |
+
"order": 11,
|
| 169 |
+
"mode": 0,
|
| 170 |
+
"inputs": [
|
| 171 |
+
{
|
| 172 |
+
"name": "samples",
|
| 173 |
+
"type": "LATENT",
|
| 174 |
+
"link": 149
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"name": "vae",
|
| 178 |
+
"type": "VAE",
|
| 179 |
+
"link": 155
|
| 180 |
+
}
|
| 181 |
+
],
|
| 182 |
+
"outputs": [
|
| 183 |
+
{
|
| 184 |
+
"name": "IMAGE",
|
| 185 |
+
"type": "IMAGE",
|
| 186 |
+
"links": [
|
| 187 |
+
159
|
| 188 |
+
]
|
| 189 |
+
}
|
| 190 |
+
],
|
| 191 |
+
"properties": {
|
| 192 |
+
"Node name for S&R": "VAEDecode"
|
| 193 |
+
},
|
| 194 |
+
"widgets_values": []
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"id": 48,
|
| 198 |
+
"type": "PrimitiveInt",
|
| 199 |
+
"pos": [
|
| 200 |
+
1256.8662109375,
|
| 201 |
+
726.9954833984375
|
| 202 |
+
],
|
| 203 |
+
"size": [
|
| 204 |
+
270,
|
| 205 |
+
82
|
| 206 |
+
],
|
| 207 |
+
"flags": {},
|
| 208 |
+
"order": 0,
|
| 209 |
+
"mode": 0,
|
| 210 |
+
"inputs": [],
|
| 211 |
+
"outputs": [
|
| 212 |
+
{
|
| 213 |
+
"name": "INT",
|
| 214 |
+
"type": "INT",
|
| 215 |
+
"links": [
|
| 216 |
+
162,
|
| 217 |
+
163
|
| 218 |
+
]
|
| 219 |
+
}
|
| 220 |
+
],
|
| 221 |
+
"title": "Number of Frames",
|
| 222 |
+
"properties": {
|
| 223 |
+
"Node name for S&R": "PrimitiveInt"
|
| 224 |
+
},
|
| 225 |
+
"widgets_values": [
|
| 226 |
+
81,
|
| 227 |
+
"fixed"
|
| 228 |
+
]
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"id": 39,
|
| 232 |
+
"type": "VHS_VideoCombine",
|
| 233 |
+
"pos": [
|
| 234 |
+
1345.189453125,
|
| 235 |
+
1296.6009521484375
|
| 236 |
+
],
|
| 237 |
+
"size": [
|
| 238 |
+
315.8479309082031,
|
| 239 |
+
563.162109375
|
| 240 |
+
],
|
| 241 |
+
"flags": {},
|
| 242 |
+
"order": 12,
|
| 243 |
+
"mode": 0,
|
| 244 |
+
"inputs": [
|
| 245 |
+
{
|
| 246 |
+
"name": "images",
|
| 247 |
+
"type": "IMAGE",
|
| 248 |
+
"link": 159
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"name": "audio",
|
| 252 |
+
"shape": 7,
|
| 253 |
+
"type": "AUDIO",
|
| 254 |
+
"link": null
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"name": "meta_batch",
|
| 258 |
+
"shape": 7,
|
| 259 |
+
"type": "VHS_BatchManager",
|
| 260 |
+
"link": null
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"name": "vae",
|
| 264 |
+
"shape": 7,
|
| 265 |
+
"type": "VAE",
|
| 266 |
+
"link": null
|
| 267 |
+
}
|
| 268 |
+
],
|
| 269 |
+
"outputs": [
|
| 270 |
+
{
|
| 271 |
+
"name": "Filenames",
|
| 272 |
+
"type": "VHS_FILENAMES",
|
| 273 |
+
"links": null
|
| 274 |
+
}
|
| 275 |
+
],
|
| 276 |
+
"properties": {
|
| 277 |
+
"Node name for S&R": "VHS_VideoCombine"
|
| 278 |
+
},
|
| 279 |
+
"widgets_values": {
|
| 280 |
+
"frame_rate": 16,
|
| 281 |
+
"loop_count": 0,
|
| 282 |
+
"filename_prefix": "rapid-mega-out/vid",
|
| 283 |
+
"format": "video/h264-mp4",
|
| 284 |
+
"pix_fmt": "yuv420p",
|
| 285 |
+
"crf": 19,
|
| 286 |
+
"save_metadata": true,
|
| 287 |
+
"trim_to_audio": false,
|
| 288 |
+
"pingpong": false,
|
| 289 |
+
"save_output": true,
|
| 290 |
+
"videopreview": {
|
| 291 |
+
"hidden": false,
|
| 292 |
+
"paused": false,
|
| 293 |
+
"params": {
|
| 294 |
+
"filename": "vid_00070.mp4",
|
| 295 |
+
"subfolder": "rapid-mega-out",
|
| 296 |
+
"type": "output",
|
| 297 |
+
"format": "video/h264-mp4",
|
| 298 |
+
"frame_rate": 16,
|
| 299 |
+
"workflow": "vid_00070.png",
|
| 300 |
+
"fullpath": "D:\\ComfyUI2\\ComfyUI\\output\\rapid-mega-out\\vid_00070.mp4"
|
| 301 |
+
}
|
| 302 |
+
}
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"id": 16,
|
| 307 |
+
"type": "LoadImage",
|
| 308 |
+
"pos": [
|
| 309 |
+
335.1852722167969,
|
| 310 |
+
609.4923095703125
|
| 311 |
+
],
|
| 312 |
+
"size": [
|
| 313 |
+
375.5744934082031,
|
| 314 |
+
326
|
| 315 |
+
],
|
| 316 |
+
"flags": {},
|
| 317 |
+
"order": 1,
|
| 318 |
+
"mode": 4,
|
| 319 |
+
"inputs": [],
|
| 320 |
+
"outputs": [
|
| 321 |
+
{
|
| 322 |
+
"name": "IMAGE",
|
| 323 |
+
"type": "IMAGE",
|
| 324 |
+
"links": [
|
| 325 |
+
161
|
| 326 |
+
]
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"name": "MASK",
|
| 330 |
+
"type": "MASK",
|
| 331 |
+
"links": null
|
| 332 |
+
}
|
| 333 |
+
],
|
| 334 |
+
"title": "Start Frame (Optional)",
|
| 335 |
+
"properties": {
|
| 336 |
+
"Node name for S&R": "LoadImage"
|
| 337 |
+
},
|
| 338 |
+
"widgets_values": [
|
| 339 |
+
"ComfyUI_temp_zmuag_00006_.png",
|
| 340 |
+
"image"
|
| 341 |
+
]
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"id": 37,
|
| 345 |
+
"type": "LoadImage",
|
| 346 |
+
"pos": [
|
| 347 |
+
320.8406066894531,
|
| 348 |
+
992.9769287109375
|
| 349 |
+
],
|
| 350 |
+
"size": [
|
| 351 |
+
375.5744934082031,
|
| 352 |
+
326
|
| 353 |
+
],
|
| 354 |
+
"flags": {},
|
| 355 |
+
"order": 2,
|
| 356 |
+
"mode": 4,
|
| 357 |
+
"inputs": [],
|
| 358 |
+
"outputs": [
|
| 359 |
+
{
|
| 360 |
+
"name": "IMAGE",
|
| 361 |
+
"type": "IMAGE",
|
| 362 |
+
"links": [
|
| 363 |
+
156
|
| 364 |
+
]
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"name": "MASK",
|
| 368 |
+
"type": "MASK",
|
| 369 |
+
"links": null
|
| 370 |
+
}
|
| 371 |
+
],
|
| 372 |
+
"title": "End Frame (Optional)",
|
| 373 |
+
"properties": {
|
| 374 |
+
"Node name for S&R": "LoadImage"
|
| 375 |
+
},
|
| 376 |
+
"widgets_values": [
|
| 377 |
+
"ComfyUI_temp_zmuag_00002_.png",
|
| 378 |
+
"image"
|
| 379 |
+
]
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"id": 36,
|
| 383 |
+
"type": "Note",
|
| 384 |
+
"pos": [
|
| 385 |
+
1602.0777587890625,
|
| 386 |
+
788.3606567382812
|
| 387 |
+
],
|
| 388 |
+
"size": [
|
| 389 |
+
261.9561462402344,
|
| 390 |
+
140.74337768554688
|
| 391 |
+
],
|
| 392 |
+
"flags": {},
|
| 393 |
+
"order": 3,
|
| 394 |
+
"mode": 0,
|
| 395 |
+
"inputs": [],
|
| 396 |
+
"outputs": [],
|
| 397 |
+
"properties": {},
|
| 398 |
+
"widgets_values": [
|
| 399 |
+
"T2V? WanVaceToVideo strength = 0.0\n (and bypass images/VACEStarToEnd)\n\nI2V? WanVaceToVideo strength = 1.0\n (include first/last/or both images)"
|
| 400 |
+
],
|
| 401 |
+
"color": "#432",
|
| 402 |
+
"bgcolor": "#653"
|
| 403 |
+
},
|
| 404 |
+
{
|
| 405 |
+
"id": 8,
|
| 406 |
+
"type": "KSampler",
|
| 407 |
+
"pos": [
|
| 408 |
+
1601.7471923828125,
|
| 409 |
+
985.068603515625
|
| 410 |
+
],
|
| 411 |
+
"size": [
|
| 412 |
+
270,
|
| 413 |
+
262
|
| 414 |
+
],
|
| 415 |
+
"flags": {},
|
| 416 |
+
"order": 10,
|
| 417 |
+
"mode": 0,
|
| 418 |
+
"inputs": [
|
| 419 |
+
{
|
| 420 |
+
"name": "model",
|
| 421 |
+
"type": "MODEL",
|
| 422 |
+
"link": 123
|
| 423 |
+
},
|
| 424 |
+
{
|
| 425 |
+
"name": "positive",
|
| 426 |
+
"type": "CONDITIONING",
|
| 427 |
+
"link": 144
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"name": "negative",
|
| 431 |
+
"type": "CONDITIONING",
|
| 432 |
+
"link": 145
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"name": "latent_image",
|
| 436 |
+
"type": "LATENT",
|
| 437 |
+
"link": 143
|
| 438 |
+
}
|
| 439 |
+
],
|
| 440 |
+
"outputs": [
|
| 441 |
+
{
|
| 442 |
+
"name": "LATENT",
|
| 443 |
+
"type": "LATENT",
|
| 444 |
+
"links": [
|
| 445 |
+
149
|
| 446 |
+
]
|
| 447 |
+
}
|
| 448 |
+
],
|
| 449 |
+
"properties": {
|
| 450 |
+
"Node name for S&R": "KSampler"
|
| 451 |
+
},
|
| 452 |
+
"widgets_values": [
|
| 453 |
+
3564573457345,
|
| 454 |
+
"fixed",
|
| 455 |
+
4,
|
| 456 |
+
1,
|
| 457 |
+
"ipndm",
|
| 458 |
+
"beta",
|
| 459 |
+
1
|
| 460 |
+
]
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"id": 9,
|
| 464 |
+
"type": "CLIPTextEncode",
|
| 465 |
+
"pos": [
|
| 466 |
+
735.2056274414062,
|
| 467 |
+
1159.9134521484375
|
| 468 |
+
],
|
| 469 |
+
"size": [
|
| 470 |
+
400,
|
| 471 |
+
200
|
| 472 |
+
],
|
| 473 |
+
"flags": {},
|
| 474 |
+
"order": 7,
|
| 475 |
+
"mode": 0,
|
| 476 |
+
"inputs": [
|
| 477 |
+
{
|
| 478 |
+
"name": "clip",
|
| 479 |
+
"type": "CLIP",
|
| 480 |
+
"link": 152
|
| 481 |
+
}
|
| 482 |
+
],
|
| 483 |
+
"outputs": [
|
| 484 |
+
{
|
| 485 |
+
"name": "CONDITIONING",
|
| 486 |
+
"type": "CONDITIONING",
|
| 487 |
+
"links": [
|
| 488 |
+
136
|
| 489 |
+
]
|
| 490 |
+
}
|
| 491 |
+
],
|
| 492 |
+
"properties": {
|
| 493 |
+
"Node name for S&R": "CLIPTextEncode"
|
| 494 |
+
},
|
| 495 |
+
"widgets_values": [
|
| 496 |
+
"A cat chasing a dog."
|
| 497 |
+
]
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"id": 28,
|
| 501 |
+
"type": "WanVaceToVideo",
|
| 502 |
+
"pos": [
|
| 503 |
+
1265.000244140625,
|
| 504 |
+
992.2266845703125
|
| 505 |
+
],
|
| 506 |
+
"size": [
|
| 507 |
+
270,
|
| 508 |
+
254
|
| 509 |
+
],
|
| 510 |
+
"flags": {},
|
| 511 |
+
"order": 9,
|
| 512 |
+
"mode": 0,
|
| 513 |
+
"inputs": [
|
| 514 |
+
{
|
| 515 |
+
"name": "positive",
|
| 516 |
+
"type": "CONDITIONING",
|
| 517 |
+
"link": 136
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"name": "negative",
|
| 521 |
+
"type": "CONDITIONING",
|
| 522 |
+
"link": 137
|
| 523 |
+
},
|
| 524 |
+
{
|
| 525 |
+
"name": "vae",
|
| 526 |
+
"type": "VAE",
|
| 527 |
+
"link": 154
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"name": "control_video",
|
| 531 |
+
"shape": 7,
|
| 532 |
+
"type": "IMAGE",
|
| 533 |
+
"link": 141
|
| 534 |
+
},
|
| 535 |
+
{
|
| 536 |
+
"name": "control_masks",
|
| 537 |
+
"shape": 7,
|
| 538 |
+
"type": "MASK",
|
| 539 |
+
"link": 148
|
| 540 |
+
},
|
| 541 |
+
{
|
| 542 |
+
"name": "reference_image",
|
| 543 |
+
"shape": 7,
|
| 544 |
+
"type": "IMAGE",
|
| 545 |
+
"link": null
|
| 546 |
+
},
|
| 547 |
+
{
|
| 548 |
+
"name": "length",
|
| 549 |
+
"type": "INT",
|
| 550 |
+
"widget": {
|
| 551 |
+
"name": "length"
|
| 552 |
+
},
|
| 553 |
+
"link": 162
|
| 554 |
+
}
|
| 555 |
+
],
|
| 556 |
+
"outputs": [
|
| 557 |
+
{
|
| 558 |
+
"name": "positive",
|
| 559 |
+
"type": "CONDITIONING",
|
| 560 |
+
"links": [
|
| 561 |
+
144
|
| 562 |
+
]
|
| 563 |
+
},
|
| 564 |
+
{
|
| 565 |
+
"name": "negative",
|
| 566 |
+
"type": "CONDITIONING",
|
| 567 |
+
"links": [
|
| 568 |
+
145
|
| 569 |
+
]
|
| 570 |
+
},
|
| 571 |
+
{
|
| 572 |
+
"name": "latent",
|
| 573 |
+
"type": "LATENT",
|
| 574 |
+
"links": [
|
| 575 |
+
143
|
| 576 |
+
]
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"name": "trim_latent",
|
| 580 |
+
"type": "INT",
|
| 581 |
+
"links": []
|
| 582 |
+
}
|
| 583 |
+
],
|
| 584 |
+
"properties": {
|
| 585 |
+
"Node name for S&R": "WanVaceToVideo"
|
| 586 |
+
},
|
| 587 |
+
"widgets_values": [
|
| 588 |
+
704,
|
| 589 |
+
512,
|
| 590 |
+
65,
|
| 591 |
+
1,
|
| 592 |
+
0
|
| 593 |
+
]
|
| 594 |
+
},
|
| 595 |
+
{
|
| 596 |
+
"id": 26,
|
| 597 |
+
"type": "CheckpointLoaderSimple",
|
| 598 |
+
"pos": [
|
| 599 |
+
717.7600708007812,
|
| 600 |
+
992.8963623046875
|
| 601 |
+
],
|
| 602 |
+
"size": [
|
| 603 |
+
411.2327880859375,
|
| 604 |
+
98
|
| 605 |
+
],
|
| 606 |
+
"flags": {},
|
| 607 |
+
"order": 4,
|
| 608 |
+
"mode": 0,
|
| 609 |
+
"inputs": [],
|
| 610 |
+
"outputs": [
|
| 611 |
+
{
|
| 612 |
+
"name": "MODEL",
|
| 613 |
+
"type": "MODEL",
|
| 614 |
+
"links": [
|
| 615 |
+
150
|
| 616 |
+
]
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"name": "CLIP",
|
| 620 |
+
"type": "CLIP",
|
| 621 |
+
"links": [
|
| 622 |
+
152,
|
| 623 |
+
153
|
| 624 |
+
]
|
| 625 |
+
},
|
| 626 |
+
{
|
| 627 |
+
"name": "VAE",
|
| 628 |
+
"type": "VAE",
|
| 629 |
+
"links": [
|
| 630 |
+
154,
|
| 631 |
+
155
|
| 632 |
+
]
|
| 633 |
+
}
|
| 634 |
+
],
|
| 635 |
+
"properties": {
|
| 636 |
+
"Node name for S&R": "CheckpointLoaderSimple"
|
| 637 |
+
},
|
| 638 |
+
"widgets_values": [
|
| 639 |
+
"WAN\\wan2.2-rapid-mega-aio-v2.safetensors"
|
| 640 |
+
]
|
| 641 |
+
}
|
| 642 |
+
],
|
| 643 |
+
"links": [
|
| 644 |
+
[
|
| 645 |
+
123,
|
| 646 |
+
32,
|
| 647 |
+
0,
|
| 648 |
+
8,
|
| 649 |
+
0,
|
| 650 |
+
"MODEL"
|
| 651 |
+
],
|
| 652 |
+
[
|
| 653 |
+
136,
|
| 654 |
+
9,
|
| 655 |
+
0,
|
| 656 |
+
28,
|
| 657 |
+
0,
|
| 658 |
+
"CONDITIONING"
|
| 659 |
+
],
|
| 660 |
+
[
|
| 661 |
+
137,
|
| 662 |
+
10,
|
| 663 |
+
0,
|
| 664 |
+
28,
|
| 665 |
+
1,
|
| 666 |
+
"CONDITIONING"
|
| 667 |
+
],
|
| 668 |
+
[
|
| 669 |
+
141,
|
| 670 |
+
34,
|
| 671 |
+
0,
|
| 672 |
+
28,
|
| 673 |
+
3,
|
| 674 |
+
"IMAGE"
|
| 675 |
+
],
|
| 676 |
+
[
|
| 677 |
+
143,
|
| 678 |
+
28,
|
| 679 |
+
2,
|
| 680 |
+
8,
|
| 681 |
+
3,
|
| 682 |
+
"LATENT"
|
| 683 |
+
],
|
| 684 |
+
[
|
| 685 |
+
144,
|
| 686 |
+
28,
|
| 687 |
+
0,
|
| 688 |
+
8,
|
| 689 |
+
1,
|
| 690 |
+
"CONDITIONING"
|
| 691 |
+
],
|
| 692 |
+
[
|
| 693 |
+
145,
|
| 694 |
+
28,
|
| 695 |
+
1,
|
| 696 |
+
8,
|
| 697 |
+
2,
|
| 698 |
+
"CONDITIONING"
|
| 699 |
+
],
|
| 700 |
+
[
|
| 701 |
+
148,
|
| 702 |
+
34,
|
| 703 |
+
1,
|
| 704 |
+
28,
|
| 705 |
+
4,
|
| 706 |
+
"MASK"
|
| 707 |
+
],
|
| 708 |
+
[
|
| 709 |
+
149,
|
| 710 |
+
8,
|
| 711 |
+
0,
|
| 712 |
+
11,
|
| 713 |
+
0,
|
| 714 |
+
"LATENT"
|
| 715 |
+
],
|
| 716 |
+
[
|
| 717 |
+
150,
|
| 718 |
+
26,
|
| 719 |
+
0,
|
| 720 |
+
32,
|
| 721 |
+
0,
|
| 722 |
+
"MODEL"
|
| 723 |
+
],
|
| 724 |
+
[
|
| 725 |
+
152,
|
| 726 |
+
26,
|
| 727 |
+
1,
|
| 728 |
+
9,
|
| 729 |
+
0,
|
| 730 |
+
"CLIP"
|
| 731 |
+
],
|
| 732 |
+
[
|
| 733 |
+
153,
|
| 734 |
+
26,
|
| 735 |
+
1,
|
| 736 |
+
10,
|
| 737 |
+
0,
|
| 738 |
+
"CLIP"
|
| 739 |
+
],
|
| 740 |
+
[
|
| 741 |
+
154,
|
| 742 |
+
26,
|
| 743 |
+
2,
|
| 744 |
+
28,
|
| 745 |
+
2,
|
| 746 |
+
"VAE"
|
| 747 |
+
],
|
| 748 |
+
[
|
| 749 |
+
155,
|
| 750 |
+
26,
|
| 751 |
+
2,
|
| 752 |
+
11,
|
| 753 |
+
1,
|
| 754 |
+
"VAE"
|
| 755 |
+
],
|
| 756 |
+
[
|
| 757 |
+
156,
|
| 758 |
+
37,
|
| 759 |
+
0,
|
| 760 |
+
34,
|
| 761 |
+
1,
|
| 762 |
+
"IMAGE"
|
| 763 |
+
],
|
| 764 |
+
[
|
| 765 |
+
159,
|
| 766 |
+
11,
|
| 767 |
+
0,
|
| 768 |
+
39,
|
| 769 |
+
0,
|
| 770 |
+
"IMAGE"
|
| 771 |
+
],
|
| 772 |
+
[
|
| 773 |
+
161,
|
| 774 |
+
16,
|
| 775 |
+
0,
|
| 776 |
+
34,
|
| 777 |
+
0,
|
| 778 |
+
"IMAGE"
|
| 779 |
+
],
|
| 780 |
+
[
|
| 781 |
+
162,
|
| 782 |
+
48,
|
| 783 |
+
0,
|
| 784 |
+
28,
|
| 785 |
+
6,
|
| 786 |
+
"INT"
|
| 787 |
+
],
|
| 788 |
+
[
|
| 789 |
+
163,
|
| 790 |
+
48,
|
| 791 |
+
0,
|
| 792 |
+
34,
|
| 793 |
+
4,
|
| 794 |
+
"INT"
|
| 795 |
+
]
|
| 796 |
+
],
|
| 797 |
+
"groups": [],
|
| 798 |
+
"config": {},
|
| 799 |
+
"extra": {
|
| 800 |
+
"ds": {
|
| 801 |
+
"scale": 0.9460125269296099,
|
| 802 |
+
"offset": [
|
| 803 |
+
-347.0172147278162,
|
| 804 |
+
-673.2084774757514
|
| 805 |
+
]
|
| 806 |
+
},
|
| 807 |
+
"ue_links": [],
|
| 808 |
+
"frontendVersion": "1.26.11",
|
| 809 |
+
"VHS_latentpreview": false,
|
| 810 |
+
"VHS_latentpreviewrate": 0,
|
| 811 |
+
"VHS_MetadataImage": true,
|
| 812 |
+
"VHS_KeepIntermediate": true
|
| 813 |
+
},
|
| 814 |
+
"version": 0.4
|
| 815 |
+
}
|
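The workflow JSON above wires its nodes through a flat `links` array, where each entry reads `[link_id, source_node_id, source_slot, target_node_id, target_slot, type]`. For anyone post-processing these graphs outside ComfyUI, a minimal sketch of reading that array (the helper name is hypothetical, for illustration only):

```python
import json

def links_by_type(workflow: dict) -> dict:
    """Group a ComfyUI workflow's edges by connection type."""
    grouped = {}
    for link_id, src, _src_slot, dst, _dst_slot, ltype in workflow["links"]:
        grouped.setdefault(ltype, []).append((link_id, src, dst))
    return grouped

# Two real entries from the workflow above: MODEL from node 32 into the
# KSampler (node 8), and positive CONDITIONING from node 9 into node 28.
wf = json.loads('{"links": [[123, 32, 0, 8, 0, "MODEL"], [136, 9, 0, 28, 0, "CONDITIONING"]]}')
print(links_by_type(wf))
# {'MODEL': [(123, 32, 8)], 'CONDITIONING': [(136, 9, 28)]}
```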
Mega-v2/wan2.2-rapid-mega-aio-nsfw-v2.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6917a018f2334f2811235fbac12ac26b77055559cea88fa208f54e028f5ed5c
+size 24334484191
Mega-v2/wan2.2-rapid-mega-aio-v2.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f55c364c7b17c9d786b8b7d14b544083d36f60f07620d4fa467e183b8f51185e
+size 24334482407
Mega-v3/Rapid-AIO-Mega.json
ADDED
|
@@ -0,0 +1,794 @@
| 1 |
+
{
|
| 2 |
+
"id": "e6c78bba-ef40-40c6-8b95-20bd0020ddbb",
|
| 3 |
+
"revision": 0,
|
| 4 |
+
"last_node_id": 50,
|
| 5 |
+
"last_link_id": 168,
|
| 6 |
+
"nodes": [
|
| 7 |
+
{
|
| 8 |
+
"id": 37,
|
| 9 |
+
"type": "LoadImage",
|
| 10 |
+
"pos": [
|
| 11 |
+
320.8406066894531,
|
| 12 |
+
992.9769287109375
|
| 13 |
+
],
|
| 14 |
+
"size": [
|
| 15 |
+
375.5744934082031,
|
| 16 |
+
326
|
| 17 |
+
],
|
| 18 |
+
"flags": {},
|
| 19 |
+
"order": 0,
|
| 20 |
+
"mode": 4,
|
| 21 |
+
"inputs": [],
|
| 22 |
+
"outputs": [
|
| 23 |
+
{
|
| 24 |
+
"name": "IMAGE",
|
| 25 |
+
"type": "IMAGE",
|
| 26 |
+
"links": [
|
| 27 |
+
156
|
| 28 |
+
]
|
| 29 |
+
},
|
| 30 |
+
{
|
| 31 |
+
"name": "MASK",
|
| 32 |
+
"type": "MASK",
|
| 33 |
+
"links": null
|
| 34 |
+
}
|
| 35 |
+
],
|
| 36 |
+
"title": "End Frame (Optional)",
|
| 37 |
+
"properties": {
|
| 38 |
+
"Node name for S&R": "LoadImage"
|
| 39 |
+
},
|
| 40 |
+
"widgets_values": [
|
| 41 |
+
"ComfyUI_temp_zmuag_00002_.png",
|
| 42 |
+
"image"
|
| 43 |
+
]
|
| 44 |
+
},
|
| 45 |
+
{
|
| 46 |
+
"id": 10,
|
| 47 |
+
"type": "CLIPTextEncode",
|
| 48 |
+
"pos": [
|
| 49 |
+
742.6182250976562,
|
| 50 |
+
1414.9295654296875
|
| 51 |
+
],
|
| 52 |
+
"size": [
|
| 53 |
+
396.2566833496094,
|
| 54 |
+
88
|
| 55 |
+
],
|
| 56 |
+
"flags": {},
|
| 57 |
+
"order": 7,
|
| 58 |
+
"mode": 0,
|
| 59 |
+
"inputs": [
|
| 60 |
+
{
|
| 61 |
+
"name": "clip",
|
| 62 |
+
"type": "CLIP",
|
| 63 |
+
"link": 168
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"outputs": [
|
| 67 |
+
{
|
| 68 |
+
"name": "CONDITIONING",
|
| 69 |
+
"type": "CONDITIONING",
|
| 70 |
+
"links": [
|
| 71 |
+
137
|
| 72 |
+
]
|
| 73 |
+
}
|
| 74 |
+
],
|
| 75 |
+
"title": "Negative Prompt (leave blank cuz 1 CFG)",
|
| 76 |
+
"properties": {
|
| 77 |
+
"Node name for S&R": "CLIPTextEncode"
|
| 78 |
+
},
|
| 79 |
+
"widgets_values": [
|
| 80 |
+
""
|
| 81 |
+
]
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"id": 16,
|
| 85 |
+
"type": "LoadImage",
|
| 86 |
+
"pos": [
|
| 87 |
+
335.1852722167969,
|
| 88 |
+
609.4923095703125
|
| 89 |
+
],
|
| 90 |
+
"size": [
|
| 91 |
+
375.5744934082031,
|
| 92 |
+
326
|
| 93 |
+
],
|
| 94 |
+
"flags": {},
|
| 95 |
+
"order": 1,
|
| 96 |
+
"mode": 4,
|
| 97 |
+
"inputs": [],
|
| 98 |
+
"outputs": [
|
| 99 |
+
{
|
| 100 |
+
"name": "IMAGE",
|
| 101 |
+
"type": "IMAGE",
|
| 102 |
+
"links": [
|
| 103 |
+
161
|
| 104 |
+
]
|
| 105 |
+
},
|
| 106 |
+
{
|
| 107 |
+
"name": "MASK",
|
| 108 |
+
"type": "MASK",
|
| 109 |
+
"links": null
|
| 110 |
+
}
|
| 111 |
+
],
|
| 112 |
+
"title": "Start Frame (Optional)",
|
| 113 |
+
"properties": {
|
| 114 |
+
"Node name for S&R": "LoadImage"
|
| 115 |
+
},
|
| 116 |
+
"widgets_values": [
|
| 117 |
+
"Untitled.png",
|
| 118 |
+
"image"
|
| 119 |
+
]
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"id": 9,
|
| 123 |
+
"type": "CLIPTextEncode",
|
| 124 |
+
"pos": [
|
| 125 |
+
735.2056274414062,
|
| 126 |
+
1159.9134521484375
|
| 127 |
+
],
|
| 128 |
+
"size": [
|
| 129 |
+
400,
|
| 130 |
+
200
|
| 131 |
+
],
|
| 132 |
+
"flags": {},
|
| 133 |
+
"order": 6,
|
| 134 |
+
"mode": 0,
|
| 135 |
+
"inputs": [
|
| 136 |
+
{
|
| 137 |
+
"name": "clip",
|
| 138 |
+
"type": "CLIP",
|
| 139 |
+
"link": 167
|
| 140 |
+
}
|
| 141 |
+
],
|
| 142 |
+
"outputs": [
|
| 143 |
+
{
|
| 144 |
+
"name": "CONDITIONING",
|
| 145 |
+
"type": "CONDITIONING",
|
| 146 |
+
"links": [
|
| 147 |
+
136
|
| 148 |
+
]
|
| 149 |
+
}
|
| 150 |
+
],
|
| 151 |
+
"properties": {
|
| 152 |
+
"Node name for S&R": "CLIPTextEncode"
|
| 153 |
+
},
|
| 154 |
+
"widgets_values": [
|
| 155 |
+
"A cat chasing a dog. Outside. High quality, sharp details."
|
| 156 |
+
]
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"id": 34,
|
| 160 |
+
"type": "WanVideoVACEStartToEndFrame",
|
| 161 |
+
"pos": [
|
| 162 |
+
798.5907592773438,
|
| 163 |
+
735.5933837890625
|
| 164 |
+
],
|
| 165 |
+
"size": [
|
| 166 |
+
329.9634704589844,
|
| 167 |
+
190
|
| 168 |
+
],
|
| 169 |
+
"flags": {},
|
| 170 |
+
"order": 4,
|
| 171 |
+
"mode": 4,
|
| 172 |
+
"inputs": [
|
| 173 |
+
{
|
| 174 |
+
"name": "start_image",
|
| 175 |
+
"shape": 7,
|
| 176 |
+
"type": "IMAGE",
|
| 177 |
+
"link": 161
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"name": "end_image",
|
| 181 |
+
"shape": 7,
|
| 182 |
+
"type": "IMAGE",
|
| 183 |
+
"link": 156
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"name": "control_images",
|
| 187 |
+
"shape": 7,
|
| 188 |
+
"type": "IMAGE",
|
| 189 |
+
"link": null
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"name": "inpaint_mask",
|
| 193 |
+
"shape": 7,
|
| 194 |
+
"type": "MASK",
|
| 195 |
+
"link": null
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"name": "num_frames",
|
| 199 |
+
"type": "INT",
|
| 200 |
+
"widget": {
|
| 201 |
+
"name": "num_frames"
|
| 202 |
+
},
|
| 203 |
+
"link": 163
|
| 204 |
+
}
|
| 205 |
+
],
|
| 206 |
+
"outputs": [
|
| 207 |
+
{
|
| 208 |
+
"name": "images",
|
| 209 |
+
"type": "IMAGE",
|
| 210 |
+
"links": [
|
| 211 |
+
141
|
| 212 |
+
]
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"name": "masks",
|
| 216 |
+
"type": "MASK",
|
| 217 |
+
"links": [
|
| 218 |
+
148
|
| 219 |
+
]
|
| 220 |
+
}
|
| 221 |
+
],
|
| 222 |
+
"title": "Bypass for T2V, use for I2V",
|
| 223 |
+
"properties": {
|
| 224 |
+
"Node name for S&R": "WanVideoVACEStartToEndFrame"
|
| 225 |
+
},
|
| 226 |
+
"widgets_values": [
|
| 227 |
+
65,
|
| 228 |
+
0.5,
|
| 229 |
+
0,
|
| 230 |
+
-1
|
| 231 |
+
]
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"id": 32,
|
| 235 |
+
"type": "ModelSamplingSD3",
|
| 236 |
+
"pos": [
|
| 237 |
+
1229.26806640625,
|
| 238 |
+
765.2486572265625
|
| 239 |
+
],
|
| 240 |
+
"size": [
|
| 241 |
+
270,
|
| 242 |
+
58
|
| 243 |
+
],
|
| 244 |
+
"flags": {},
|
| 245 |
+
"order": 5,
|
| 246 |
+
"mode": 0,
|
| 247 |
+
"inputs": [
|
| 248 |
+
{
|
| 249 |
+
"name": "model",
|
| 250 |
+
"type": "MODEL",
|
| 251 |
+
"link": 150
|
| 252 |
+
}
|
| 253 |
+
],
|
| 254 |
+
"outputs": [
|
| 255 |
+
{
|
| 256 |
+
"name": "MODEL",
|
| 257 |
+
"type": "MODEL",
|
| 258 |
+
"links": [
|
| 259 |
+
123
|
| 260 |
+
]
|
| 261 |
+
}
|
| 262 |
+
],
|
| 263 |
+
"properties": {
|
| 264 |
+
"Node name for S&R": "ModelSamplingSD3"
|
| 265 |
+
},
|
| 266 |
+
"widgets_values": [
|
| 267 |
+
8
|
| 268 |
+
]
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"id": 48,
|
| 272 |
+
"type": "PrimitiveInt",
|
| 273 |
+
"pos": [
|
| 274 |
+
1224.4476318359375,
|
| 275 |
+
885.9221801757812
|
| 276 |
+
],
|
| 277 |
+
"size": [
|
| 278 |
+
270,
|
| 279 |
+
82
|
| 280 |
+
],
|
| 281 |
+
"flags": {},
|
| 282 |
+
"order": 2,
|
| 283 |
+
"mode": 0,
|
| 284 |
+
"inputs": [],
|
| 285 |
+
"outputs": [
|
| 286 |
+
{
|
| 287 |
+
"name": "INT",
|
| 288 |
+
"type": "INT",
|
| 289 |
+
"links": [
|
| 290 |
+
162,
|
| 291 |
+
163
|
| 292 |
+
]
|
| 293 |
+
}
|
| 294 |
+
],
|
| 295 |
+
"title": "Number of Frames",
|
| 296 |
+
"properties": {
|
| 297 |
+
"Node name for S&R": "PrimitiveInt"
|
| 298 |
+
},
|
| 299 |
+
"widgets_values": [
|
| 300 |
+
81,
|
| 301 |
+
"fixed"
|
| 302 |
+
]
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"id": 11,
|
| 306 |
+
"type": "VAEDecode",
|
| 307 |
+
"pos": [
|
| 308 |
+
1285.5494384765625,
|
| 309 |
+
1329.27197265625
|
| 310 |
+
],
|
| 311 |
+
"size": [
|
| 312 |
+
140,
|
| 313 |
+
46
|
| 314 |
+
],
|
| 315 |
+
"flags": {},
|
| 316 |
+
"order": 10,
|
| 317 |
+
"mode": 0,
|
| 318 |
+
"inputs": [
|
| 319 |
+
{
|
| 320 |
+
"name": "samples",
|
| 321 |
+
"type": "LATENT",
|
| 322 |
+
"link": 149
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"name": "vae",
|
| 326 |
+
"type": "VAE",
|
| 327 |
+
"link": 155
|
| 328 |
+
}
|
| 329 |
+
],
|
| 330 |
+
"outputs": [
|
| 331 |
+
{
|
| 332 |
+
"name": "IMAGE",
|
| 333 |
+
"type": "IMAGE",
|
| 334 |
+
"links": [
|
| 335 |
+
159
|
| 336 |
+
]
|
| 337 |
+
}
|
| 338 |
+
],
|
| 339 |
+
"properties": {
|
| 340 |
+
"Node name for S&R": "VAEDecode"
|
| 341 |
+
},
|
| 342 |
+
"widgets_values": []
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"id": 28,
|
| 346 |
+
"type": "WanVaceToVideo",
|
| 347 |
+
"pos": [
|
| 348 |
+
1231.000732421875,
|
| 349 |
+
1019.9004516601562
|
| 350 |
+
],
|
| 351 |
+
"size": [
|
| 352 |
+
270,
|
| 353 |
+
254
|
| 354 |
+
],
|
| 355 |
+
"flags": {},
|
| 356 |
+
"order": 8,
|
| 357 |
+
"mode": 0,
|
| 358 |
+
"inputs": [
|
| 359 |
+
{
|
| 360 |
+
"name": "positive",
|
| 361 |
+
"type": "CONDITIONING",
|
| 362 |
+
"link": 136
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"name": "negative",
|
| 366 |
+
"type": "CONDITIONING",
|
| 367 |
+
"link": 137
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"name": "vae",
|
| 371 |
+
"type": "VAE",
|
| 372 |
+
"link": 154
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"name": "control_video",
|
| 376 |
+
"shape": 7,
|
| 377 |
+
"type": "IMAGE",
|
| 378 |
+
"link": 141
|
| 379 |
+
},
|
| 380 |
+
{
|
| 381 |
+
"name": "control_masks",
|
| 382 |
+
"shape": 7,
|
| 383 |
+
"type": "MASK",
|
| 384 |
+
"link": 148
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"name": "reference_image",
|
| 388 |
+
"shape": 7,
|
| 389 |
+
"type": "IMAGE",
|
| 390 |
+
"link": null
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"name": "length",
|
| 394 |
+
"type": "INT",
|
| 395 |
+
"widget": {
|
| 396 |
+
"name": "length"
|
| 397 |
+
},
|
| 398 |
+
"link": 162
|
| 399 |
+
}
|
| 400 |
+
],
|
| 401 |
+
"outputs": [
|
| 402 |
+
{
|
| 403 |
+
"name": "positive",
|
| 404 |
+
"type": "CONDITIONING",
|
| 405 |
+
"links": [
|
| 406 |
+
144
|
| 407 |
+
]
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"name": "negative",
|
| 411 |
+
"type": "CONDITIONING",
|
| 412 |
+
"links": [
|
| 413 |
+
145
|
| 414 |
+
]
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"name": "latent",
|
| 418 |
+
"type": "LATENT",
|
| 419 |
+
"links": [
|
| 420 |
+
143
|
| 421 |
+
]
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"name": "trim_latent",
|
| 425 |
+
"type": "INT",
|
| 426 |
+
"links": []
|
| 427 |
+
}
|
| 428 |
+
],
|
| 429 |
+
"title": "T2V=Strength 0, I2V=Strength 1",
|
| 430 |
+
"properties": {
|
| 431 |
+
"Node name for S&R": "WanVaceToVideo"
|
| 432 |
+
},
|
| 433 |
+
"widgets_values": [
|
| 434 |
+
768,
|
| 435 |
+
768,
|
| 436 |
+
65,
|
| 437 |
+
1,
|
| 438 |
+
1
|
| 439 |
+
]
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"id": 8,
|
| 443 |
+
"type": "KSampler",
|
| 444 |
+
"pos": [
|
| 445 |
+
1545.6077880859375,
|
| 446 |
+
660.8905639648438
|
| 447 |
+
],
|
| 448 |
+
"size": [
|
| 449 |
+
270,
|
| 450 |
+
262
|
| 451 |
+
],
|
| 452 |
+
"flags": {},
|
| 453 |
+
"order": 9,
|
| 454 |
+
"mode": 0,
|
| 455 |
+
"inputs": [
|
| 456 |
+
{
|
| 457 |
+
"name": "model",
|
| 458 |
+
"type": "MODEL",
|
| 459 |
+
"link": 123
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"name": "positive",
|
| 463 |
+
"type": "CONDITIONING",
|
| 464 |
+
"link": 144
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"name": "negative",
|
| 468 |
+
"type": "CONDITIONING",
|
| 469 |
+
"link": 145
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"name": "latent_image",
|
| 473 |
+
"type": "LATENT",
|
| 474 |
+
"link": 143
|
| 475 |
+
}
|
| 476 |
+
],
|
| 477 |
+
"outputs": [
|
| 478 |
+
{
|
| 479 |
+
"name": "LATENT",
|
| 480 |
+
"type": "LATENT",
|
| 481 |
+
"links": [
|
| 482 |
+
149
|
| 483 |
+
]
|
| 484 |
+
}
|
| 485 |
+
],
|
| 486 |
+
"properties": {
|
| 487 |
+
"Node name for S&R": "KSampler"
|
| 488 |
+
},
|
| 489 |
+
"widgets_values": [
|
| 490 |
+
6456545463455,
|
| 491 |
+
"fixed",
|
| 492 |
+
4,
|
| 493 |
+
1,
|
| 494 |
+
"ipndm",
|
| 495 |
+
"beta",
|
| 496 |
+
1
|
| 497 |
+
]
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"id": 39,
|
| 501 |
+
"type": "VHS_VideoCombine",
|
| 502 |
+
"pos": [
|
| 503 |
+
1546.0302734375,
|
| 504 |
+
982.9334106445312
|
| 505 |
+
],
|
| 506 |
+
"size": [
|
| 507 |
+
315.8479309082031,
|
| 508 |
+
643.847900390625
|
| 509 |
+
],
|
| 510 |
+
"flags": {},
|
| 511 |
+
"order": 11,
|
| 512 |
+
"mode": 0,
|
| 513 |
+
"inputs": [
|
| 514 |
+
{
|
| 515 |
+
"name": "images",
|
| 516 |
+
"type": "IMAGE",
|
| 517 |
+
"link": 159
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"name": "audio",
|
| 521 |
+
"shape": 7,
|
| 522 |
+
"type": "AUDIO",
|
| 523 |
+
"link": null
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"name": "meta_batch",
|
| 527 |
+
"shape": 7,
|
| 528 |
+
"type": "VHS_BatchManager",
|
| 529 |
+
"link": null
|
| 530 |
+
},
|
| 531 |
+
{
|
| 532 |
+
"name": "vae",
|
| 533 |
+
"shape": 7,
|
| 534 |
+
"type": "VAE",
|
| 535 |
+
"link": null
|
| 536 |
+
}
|
| 537 |
+
],
|
| 538 |
+
"outputs": [
|
| 539 |
+
{
|
| 540 |
+
"name": "Filenames",
|
| 541 |
+
"type": "VHS_FILENAMES",
|
| 542 |
+
"links": null
|
| 543 |
+
}
|
| 544 |
+
],
|
| 545 |
+
"properties": {
|
| 546 |
+
"Node name for S&R": "VHS_VideoCombine"
|
| 547 |
+
},
|
| 548 |
+
"widgets_values": {
|
| 549 |
+
"frame_rate": 16,
|
| 550 |
+
"loop_count": 0,
|
| 551 |
+
"filename_prefix": "rapid-mega-out/vid",
|
| 552 |
+
"format": "video/h264-mp4",
|
| 553 |
+
"pix_fmt": "yuv420p",
|
| 554 |
+
"crf": 19,
|
| 555 |
+
"save_metadata": true,
|
| 556 |
+
"trim_to_audio": false,
|
| 557 |
+
"pingpong": false,
|
| 558 |
+
"save_output": true,
|
| 559 |
+
"videopreview": {
|
| 560 |
+
"hidden": false,
|
| 561 |
+
"paused": false,
|
| 562 |
+
"params": {
|
| 563 |
+
"filename": "vid_00017.mp4",
|
| 564 |
+
"subfolder": "rapid-mega-out",
|
| 565 |
+
"type": "temp",
|
| 566 |
+
"format": "video/nvenc_hevc-mp4",
|
| 567 |
+
"frame_rate": 16,
|
| 568 |
+
"workflow": "vid_00017.png",
|
| 569 |
+
"fullpath": "D:\\ComfyUI2\\ComfyUI\\temp\\rapid-mega-out\\vid_00017.mp4"
|
| 570 |
+
}
|
| 571 |
+
}
|
| 572 |
+
}
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"id": 26,
|
| 576 |
+
"type": "CheckpointLoaderSimple",
|
| 577 |
+
"pos": [
|
| 578 |
+
717.7600708007812,
|
| 579 |
+
992.8963623046875
|
| 580 |
+
],
|
| 581 |
+
"size": [
|
| 582 |
+
411.2327880859375,
|
| 583 |
+
98
|
| 584 |
+
],
|
| 585 |
+
"flags": {},
|
| 586 |
+
"order": 3,
|
| 587 |
+
"mode": 0,
|
| 588 |
+
"inputs": [],
|
| 589 |
+
"outputs": [
|
| 590 |
+
{
|
| 591 |
+
"name": "MODEL",
|
| 592 |
+
"type": "MODEL",
|
| 593 |
+
"links": [
|
| 594 |
+
150
|
| 595 |
+
]
|
| 596 |
+
},
|
| 597 |
+
{
|
| 598 |
+
"name": "CLIP",
|
| 599 |
+
"type": "CLIP",
|
| 600 |
+
"links": [
|
| 601 |
+
167,
|
| 602 |
+
168
|
| 603 |
+
]
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"name": "VAE",
|
| 607 |
+
"type": "VAE",
|
| 608 |
+
"links": [
|
| 609 |
+
154,
|
| 610 |
+
155
|
| 611 |
+
]
|
| 612 |
+
}
|
| 613 |
+
],
|
| 614 |
+
"properties": {
|
| 615 |
+
"Node name for S&R": "CheckpointLoaderSimple"
|
| 616 |
+
},
|
| 617 |
+
"widgets_values": [
|
| 618 |
+
"WAN\\wan2.2-rapid-mega-aio-v3.safetensors"
|
| 619 |
+
]
|
| 620 |
+
}
|
| 621 |
+
],
|
| 622 |
+
"links": [
|
| 623 |
+
[
|
| 624 |
+
123,
|
| 625 |
+
32,
|
| 626 |
+
0,
|
| 627 |
+
8,
|
| 628 |
+
0,
|
| 629 |
+
"MODEL"
|
| 630 |
+
],
|
| 631 |
+
[
|
| 632 |
+
136,
|
| 633 |
+
9,
|
| 634 |
+
0,
|
| 635 |
+
28,
|
| 636 |
+
0,
|
| 637 |
+
"CONDITIONING"
|
| 638 |
+
],
|
| 639 |
+
[
|
| 640 |
+
137,
|
| 641 |
+
10,
|
| 642 |
+
0,
|
| 643 |
+
28,
|
| 644 |
+
1,
|
| 645 |
+
"CONDITIONING"
|
| 646 |
+
],
|
| 647 |
+
[
|
| 648 |
+
141,
|
| 649 |
+
34,
|
| 650 |
+
0,
|
| 651 |
+
28,
|
| 652 |
+
3,
|
| 653 |
+
"IMAGE"
|
| 654 |
+
],
|
| 655 |
+
[
|
| 656 |
+
143,
|
| 657 |
+
28,
|
| 658 |
+
2,
|
| 659 |
+
8,
|
| 660 |
+
3,
|
| 661 |
+
"LATENT"
|
| 662 |
+
],
|
| 663 |
+
[
|
| 664 |
+
144,
|
| 665 |
+
28,
|
| 666 |
+
0,
|
| 667 |
+
8,
|
| 668 |
+
1,
|
| 669 |
+
"CONDITIONING"
|
| 670 |
+
],
|
| 671 |
+
[
|
| 672 |
+
145,
|
| 673 |
+
28,
|
| 674 |
+
1,
|
| 675 |
+
8,
|
| 676 |
+
2,
|
| 677 |
+
"CONDITIONING"
|
| 678 |
+
],
|
| 679 |
+
[
|
| 680 |
+
148,
|
| 681 |
+
34,
|
| 682 |
+
1,
|
| 683 |
+
28,
|
| 684 |
+
4,
|
| 685 |
+
"MASK"
|
| 686 |
+
],
|
| 687 |
+
[
|
| 688 |
+
149,
|
| 689 |
+
8,
|
| 690 |
+
0,
|
| 691 |
+
11,
|
| 692 |
+
0,
|
| 693 |
+
"LATENT"
|
| 694 |
+
],
|
| 695 |
+
[
|
| 696 |
+
150,
|
| 697 |
+
26,
|
| 698 |
+
0,
|
| 699 |
+
32,
|
| 700 |
+
0,
|
| 701 |
+
"MODEL"
|
| 702 |
+
],
|
| 703 |
+
[
|
| 704 |
+
154,
|
| 705 |
+
26,
|
| 706 |
+
2,
|
| 707 |
+
28,
|
| 708 |
+
2,
|
| 709 |
+
"VAE"
|
| 710 |
+
],
|
| 711 |
+
[
|
| 712 |
+
155,
|
| 713 |
+
26,
|
| 714 |
+
2,
|
| 715 |
+
11,
|
| 716 |
+
1,
|
| 717 |
+
"VAE"
|
| 718 |
+
],
|
| 719 |
+
[
|
| 720 |
+
156,
|
| 721 |
+
37,
|
| 722 |
+
0,
|
| 723 |
+
34,
|
| 724 |
+
1,
|
| 725 |
+
"IMAGE"
|
| 726 |
+
],
|
| 727 |
+
[
|
| 728 |
+
159,
|
| 729 |
+
11,
|
| 730 |
+
0,
|
| 731 |
+
39,
|
| 732 |
+
0,
|
| 733 |
+
"IMAGE"
|
| 734 |
+
],
|
| 735 |
+
[
|
| 736 |
+
161,
|
| 737 |
+
16,
|
| 738 |
+
0,
|
| 739 |
+
34,
|
| 740 |
+
0,
|
| 741 |
+
"IMAGE"
|
| 742 |
+
],
|
| 743 |
+
[
|
| 744 |
+
162,
|
| 745 |
+
48,
|
| 746 |
+
0,
|
| 747 |
+
28,
|
| 748 |
+
6,
|
| 749 |
+
"INT"
|
| 750 |
+
],
|
| 751 |
+
[
|
| 752 |
+
163,
|
| 753 |
+
48,
|
| 754 |
+
0,
|
| 755 |
+
34,
|
| 756 |
+
4,
|
| 757 |
+
"INT"
|
| 758 |
+
],
|
| 759 |
+
[
|
| 760 |
+
167,
|
| 761 |
+
26,
|
| 762 |
+
1,
|
| 763 |
+
9,
|
| 764 |
+
0,
|
| 765 |
+
"CLIP"
|
| 766 |
+
],
|
| 767 |
+
[
|
| 768 |
+
168,
|
| 769 |
+
26,
|
| 770 |
+
1,
|
| 771 |
+
10,
|
| 772 |
+
0,
|
| 773 |
+
"CLIP"
|
| 774 |
+
]
|
| 775 |
+
],
|
| 776 |
+
"groups": [],
|
| 777 |
+
"config": {},
|
| 778 |
+
"extra": {
|
| 779 |
+
"ds": {
|
| 780 |
+
"scale": 0.9033839039317166,
|
| 781 |
+
"offset": [
|
| 782 |
+
-30.956710659442255,
|
| 783 |
+
-631.2294646665323
|
| 784 |
+
]
|
| 785 |
+
},
|
| 786 |
+
"frontendVersion": "1.26.11",
|
| 787 |
+
"ue_links": [],
|
| 788 |
+
"VHS_latentpreview": false,
|
| 789 |
+
"VHS_latentpreviewrate": 0,
|
| 790 |
+
"VHS_MetadataImage": true,
|
| 791 |
+
"VHS_KeepIntermediate": true
|
| 792 |
+
},
|
| 793 |
+
"version": 0.4
|
| 794 |
+
}
|
Mega-v3/wan2.2-rapid-mega-aio-v3.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37f4326e2996d3d16b7be584f82b5f6f31fdbf3653f08fd0b5416d9eb03201a4
+size 24334430563
Mega-v3/wan2.2-rapid-mega-nsfw-aio-v3.1.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:086f08ec0f9bb094982d06d532488423987a052f592f5a3ee3f625018889b440
+size 24334431379
Mega-v3/wan2.2-rapid-mega-nsfw-aio-v3.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:733cb34fabbac8ceb0fbcaa7725e1ae7d4ae0ee8e6ab9e9cecf4049ce30a1c7b
+size 24334430691
Mega-v4/.use_mega_v3_workflow
ADDED
|
File without changes
|
Mega-v4/wan2.2-rapid-mega-aio-v4.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4915233aa73b65bf973fb1e96b483269a8293d35bc4c6c6d51de3bc9a2911e37
+size 24334432035
Mega-v5/.use_mega_v3_workflow
ADDED
|
File without changes
|
Mega-v5/wan2.2-rapid-mega-aio-nsfw-v5.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7391a4d1a79367ecbe45813b545b0e7e3157e23ab9d619118184ac2d0f05e952
+size 24334440659
Mega-v5/wan2.2-rapid-mega-aio-v5.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6980bd85601d6a70885b9dc28b081a7b36aaf4178e2f044a95ba11b19a64084
+size 24334440659
Mega-v6/wan2.2-rapid-mega-aio-nsfw-v6.1.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f988d46cf8a1c8be6328f0cd837f482fa96a0f97c9fe8e9fece1ece18b631d4
+size 24334442115
Mega-v6/wan2.2-rapid-mega-aio-nsfw-v6.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c27ef1011355479e187f3de0da27a2603ba5c86ba6d28ed79132d47208842cfc
+size 24334443019
Mega-v6/wan2.2-rapid-mega-aio-v6.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfe7ab6ecac7f5c76c9dcbadab6520689a050f3679a5ebb7f2453a096c429a75
+size 24334443307
Mega-v7/.use_mega_v3_workflow
ADDED
|
File without changes
|
Mega-v7/wan2.2-rapid-mega-aio-nsfw-v7.1.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:83ee14bc26a6604d63b58d264bbba8713902a4c4960439a73b58b6f3a6d63dcd
|
| 3 |
+
size 24334441939
|
Mega-v7/wan2.2-rapid-mega-aio-nsfw-v7.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69ad419ba602675eab70d5be11edf692434ef12ec422c0abffec568df10df088
+size 24334442323
Mega-v7/wan2.2-rapid-mega-aio-v7.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c53d30edd48aaf95db3c48c1d1f0c7fec0ec9d87389a2105b302f06294fd4d4
+size 24334442203
Mega-v8/.use_mega_v3_workflow
ADDED
|
File without changes
|
Mega-v8/wan2.2-rapid-mega-aio-nsfw-v8.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68ba84224a5fc9dcef8ee886987f8589bfdae25892eba7f51c3c94ee6566e499
+size 24334442051
Mega-v8/wan2.2-rapid-mega-aio-v8.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:732bf3b784008a4cb24ab5373bab7e492f700be83e1814b7b7871099d1c7a0d9
+size 24334442043
Mega-v9/.use_mega_v3_workflow
ADDED
|
File without changes
|
Mega-v9/wan2.2-rapid-mega-aio-nsfw-v9.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b75ee40dc7f9b5cef1398c8985ac213794876f786e678a75b2ad89ca54d4e982
+size 24334499567
Mega-v9/wan2.2-rapid-mega-aio-v9.safetensors
ADDED
|
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9695189a8ab520a367f3772bc7d06590caefa836f067f75dd99a5b22128f1b16
+size 24334499815
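Each model file above is stored as a git-lfs pointer carrying an `oid sha256:...` and a `size`. If a 24 GB download seems suspect, those two fields are enough to verify it locally; a sketch (function name is illustrative):

```python
import hashlib
import os
import tempfile

def verify_lfs_pointer(path: str, oid_hex: str, size: int) -> bool:
    """Check a downloaded file against its git-lfs pointer fields."""
    if os.path.getsize(path) != size:  # cheap size check first
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == oid_hex

# Demo on a throwaway file rather than a multi-gigabyte checkpoint:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    tmp_path = f.name
print(verify_lfs_pointer(tmp_path, hashlib.sha256(b"hello").hexdigest(), 5))  # True
os.remove(tmp_path)
```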
README.md
ADDED
|
@@ -0,0 +1,102 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
---
|
| 2 |
+
base_model:
|
| 3 |
+
- Wan-AI/Wan2.2-I2V-A14B
|
| 4 |
+
- Wan-AI/Wan2.2-T2V-A14B
|
| 5 |
+
tags:
|
| 6 |
+
- wan
|
| 7 |
+
- wan2.2
|
| 8 |
+
- accelerator
|
| 9 |
+
pipeline_tag: image-to-video
|
| 10 |
+
license: apache-2.0
|
| 11 |
+
---
|
| 12 |
+
**I do not maintain this anymore. I've moved on to LTX2 which I find faster, more versatile and with far better quality than this ever had. Head over to https://huggingface.co/Phr00t/LTX2-Rapid-Merges for that.**
|
| 13 |
+
|
| 14 |
+
These are mixtures of WAN 2.2 and other WAN-like models and accelerators (with CLIP and VAE also included) to provide a fast, "all in one" solution for making videos as easily and quickly as possible. FP8 precision. Generally the latest version available for each type of model (image to video or text to video) is recommended.
|
| 15 |
+
|
| 16 |
+
**MEGA Merge:** This is the "one model to rule them all" version which pretty much does everything. It can handle text to video, image to video, and first frame to last frame and last frame only (because it includes VACE). There is a specific workflow to use these merges included in the mega-v3/ folder, as it is slightly more complicated (but shouldn't be slower) due to its flexibility. See below for a screenshot of "mega" being used.
|
| 17 |
+
|
| 18 |
+
**NSFW Merges:** Degenerates should steer clear of these merges, as they are only for the most civilized people of culture or scientific researchers. These merge various spicy WAN 2.1+2.2 LORAs at generally low strengths to provide a "jack of all trades, master of none" all in one despicable solution. If you are not getting the results you want, add more LORAs or just use the non-NSFW versions with hand-picked LORAs.
|
| 19 |
+
|
| 20 |
+
You just need to use the basic ComfyUI "Load Checkpoint" node with these, as you can take the VAE, CLIP and Model all from one AIO safetensors (saved in your 'checkpoints' folder). All models are intended to use 1 CFG and 4 steps. See sampler recommendations for each version below.
|
| 21 |
+
|
| 22 |
+
WAN 2.1 LORA compatibility is generally still good, along with "low noise" WAN 2.2 LORA compatibility (do not use "high noise" LORAs). You might need to adjust LORA strengths (up or down) to get results you want, though.
|
| 23 |
+
|
| 24 |
+
**MEGA version workflow screenshot (you can use VideoCombine instead of Preview Image):**
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
|
| 28 |
+
**MEGA I2V:** Just bypass the "end frame" so the "start frame" will be your I2V starting frame. Keep everything else the same.
|
| 29 |
+
|
| 30 |
+
**MEGA T2V:** Bypass "end frame", "start frame" and the "VACEFirstToLastFrame" node. Set strength to 0 for WanVaceToVideo.
|
| 31 |
+
|
| 32 |
+
**MEGA Last Frame:** Just bypass the "start frame" and keep "end frame". Keep everything else the same as in the picture.
**MEGA First->Last Frame:** Use it like shown in the picture above.
**Older non-MEGA workflows (v10 and below):**




Seems to work even on 8GB VRAM:

**CHANGELOG/VERSIONS:**
**base:** This is the first attempt and very "stable", but mostly WAN 2.1 with few WAN 2.2 features. sa_solver recommended.
**V2:** This is a more dynamic mixture with more WAN 2.2 features. sa_solver OR euler_a sampler recommended. Suffers from minor color shifts and noise in I2V, typically just at the start.
**V3:** This is a mixture of SkyReels and WAN 2.2, which should improve prompt adherence and quality. euler_a sampler recommended, beta scheduler. Suffers from minor color shifts and noise in I2V, typically just at the start.
**V4:** WAN 2.2 Lightning in the mix! euler_a/beta recommended. I2V noise and color shifting generally improved, but motion is a bit overexaggerated.
**V5:** Reduced the I2V model's overexaggerated motion. euler_a/beta recommended.
**V6:** New merging structure and overall significantly improved quality. I2V noise for the first 1-2 frames still exists, but it clears up much better than in previous versions. Some WAN 2.1 LORAs at heavy strengths may cause up to 5 poor early frames with T2V, where discarding those frames (or lowering strengths) may help. sa_solver/beta recommended. I2V occasionally suffers from dramatic scene shifts.
**V7:** I2V scene shifting should be fixed, but some I2V noise persists (generally for just the first 1-2 frames). No changes needed for the T2V model, so that remains at V6. sa_solver/beta recommended.
**V8:** T2V is now based entirely off of WAN 2.2 "low" (with PUSA, SkyReels and Lightning accelerators mixed in), which should resolve noise problems with it (8.1 adds more SkyReels). I2V scaled back some of the WAN 2.2 mix, which was contributing to noise problems. There still is some minor I2V noise, but more of a delicate balance of WAN 2.2 + SkyReels to keep decent motion and flexibility. Euler_a/beta recommended.
**V9:** Removed PUSA and SkyReels from the WAN 2.2 side of I2V (and completely from T2V), as I think PUSA/SkyReels weren't consistently helping (and were sometimes hurting) when applied to WAN 2.2. This should provide a more reliable base to work from. **euler_a/beta** recommended, but feel free to experiment with sa_solver/beta or others!
**V10:** Fixes the wrong accelerators being used (now WAN 2.2 Lightning in I2V, and an adaptive-rank Lightx2v along with WAN 2.2 Lightning in T2V). I2V now has a tendency to zoom into whatever is going on in your prompt, which I believe comes from the increased camera movement of WAN 2.2 Lightning and from being less tied to your initial image as the video progresses (so, prompt accordingly). euler_a/beta still seems good.
**MEGA v1:** This is likely how I will continue making models, as I don't need separate I2V and T2V versions. No noise problems with I2V anymore! MEGA v1 is based off of WAN 2.2 "low T2V", then adds VACE Fun, SkyReels, FunReward and the usual accelerator/CLIP/VAE mix. Use the included workflow. ipndm/sgm_uniform sampler/scheduler recommended.
**MEGA v2:** Removed the FunReward LORA, which was causing faces to shift. I did notice some minor face shifting in the NSFW merge remaining, which I think is due to the LORA mixture, but it has been improved. Also reduced some of the SkyReels LORA a bit. ipndm/beta recommended.
**MEGA v3:** Very different merging method using a 33% SkyReels 2.1 base with 66% WAN 2.2 on top. I now also match accelerators to each version (2.1 and 2.2) before merging. I think this gets a better result by basing "mega" on models designed for one sampler (2.1) while bringing in most of WAN 2.2 on top. Camera control and prompt following are better, but keeping facial features still struggles compared to v10 I2V (might be a VACE limitation). ipndm/beta recommended. euler_a/beta seems to work better with the NSFW v3.1 merge, though.
**MEGA v4:** Uses the WAN 2.2 finetune from https://huggingface.co/eddy1111111/WAN22.XX_Palingenesis, with slight tweaks to accelerator strengths.
**MEGA v5:** New merging method with a very experimental accelerator mix! I include small amounts of many I2V and T2V accelerators on top of WAN22.XX_Palingenesis and SkyReels 720p, plus VACE. The goal is to improve I2V consistency without hurting T2V. I think quality, detail and consistency have improved, but I do wish camera control was better. euler_a/beta recommended.
**MEGA v6:** Adjusted accelerators, bringing in more of the older Lightx2v as relying too much on the newest WAN 2.2 Lightning was hurting motion. I'm seeing better camera movement and prompt adherence in my testing than v5. NSFW v6.1 version has newer LORAs included and tweaked parameters. sa_solver/beta recommended.
**MEGA v7:** Now uses 3 different accelerators mixed together: lightx2v, WAN 2.2 Lightning (250928) and rCM. Motion seems to be improved further. euler_a/beta seems to work pretty well.
**MEGA v8:** Updated rCM 720p accelerator, which is now the biggest accelerator in the mix, reducing lightx2v and WAN 2.2 Lightning. Updated NSFW LORAs a bit. euler_a/beta still recommended.
**MEGA v9:** Removed SkyReels 2.1 720p completely. This is now based completely on WAN22.XX_Palingenesis T2V + VACE, using mostly rCM 720p for acceleration. Updated MysticXXX v2 for the NSFW merge among other tweaks. Motion should be better, hopefully. euler_a/beta recommended.
**MEGA v10:** Packed the models a bit differently, and tweaked accelerators and NSFW LORAs some more. I tried to test this version a bit more and was getting better results. **euler_a/beta recommended**.
**MEGA v11:** Mostly the same as v10, but pulled in the latest WAN 2.1 distill from lightx2v. **euler_a/beta recommended**.
**MEGA v12:** Big update! Using bf16 Fun VACE WAN 2.2 as a base now, getting rid of "fp8 scaled" issues. Significantly tweaked NSFW LORAs, even putting a dash of "high noise" Dreamlay into the mix. Only uses rCM and Lightx2V accelerators now for better motion. v12.1 improves cumshots. **dpmpp_sde/beta recommended**.
Looking for GGUFs? Check the sidebar for quants.
Looking for FP16 precision? TekeshiX has been helping me build variants in FP16 format (but they are kinda outdated):
https://huggingface.co/TekeshiX/RAPID-AIO-FP16/tree/main
**DISCLAIMER:** As you may expect, some compromises had to be made to reach this level of speed and simplicity. If you want the higher-quality results of running the full WAN 2.2 pair of models (with more complex workflows and longer generation times), or control over the accelerator LORAs included in this merge, there are many resources elsewhere for that.
v10/wan2.2-i2v-rapid-aio-v10-nsfw.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ead1d9bfd24cc650052dcd310be32bcfd0dd6175418323c41a9208bbc027081
+size 23387046339
v10/wan2.2-i2v-rapid-aio-v10.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe9cb6ce4abfa2beed7a954d33550828d68dd48ff2d17cbb5e583b962dea7809
+size 23387046451
v10/wan2.2-t2v-rapid-aio-v10-nsfw.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0c372b0d45fb4888aaebb7ac534b7106dd93e5bff26b53def4c2f8bac0af994
+size 21279448811
v10/wan2.2-t2v-rapid-aio-v10.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3759940a8a3591defbad23950269ff3120a7fdc71c22695161a449418e82c193
+size 21279448923
v2/wan2.2-i2v-aio-v2.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37d0bd15108718ff702bcaa216d0dd5f64a3ae068871718f6f30505de1363307
+size 23387033219
v2/wan2.2-t2v-aio-v2.safetensors ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:258eb720a5ee2b0632917792f79b01168df017a60e7f775174fba42aed27bc32
+size 21279389643
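The entries above are git-lfs pointer files, not the weights themselves. As a sketch, a downloaded safetensors file can be checked against its pointer's oid/size (the sample values below are taken verbatim from the first v10 entry above):

```python
import hashlib
import re

def parse_lfs_pointer(text: str) -> tuple[str, int]:
    """Extract the sha256 hex digest and expected byte size from a pointer."""
    oid = re.search(r"oid sha256:([0-9a-f]{64})", text).group(1)
    size = int(re.search(r"size (\d+)", text).group(1))
    return oid, size

def verify_download(path: str, oid: str, size: int) -> bool:
    """Stream-hash a local file and compare against the pointer's oid/size."""
    h, total = hashlib.sha256(), 0
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
            total += len(chunk)
    return total == size and h.hexdigest() == oid

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3ead1d9bfd24cc650052dcd310be32bcfd0dd6175418323c41a9208bbc027081
size 23387046339
"""
print(parse_lfs_pointer(pointer))
```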