Upload folder using huggingface_hub

Files changed:
- README.md (+8, -0)
- workflows/workflow_ltx2_head_swap_drag_and_drop_v3.0.json (+317, -317)

README.md CHANGED
@@ -268,6 +268,14 @@ Unlike previous versions, which relied primarily on the identity being establish

 This results in a much stronger and more persistent identity signal during inference.

+# π Acknowledgements
+
+Special thanks to **facy.ai** for sponsoring the GPU used to train this model.
+
+If you want to check their platform, you can use my referral link:
+
+[https://facy.ai/a/headswap](https://facy.ai/a/headswap)
+
 ---

 ## πΉ How V3 Works
workflows/workflow_ltx2_head_swap_drag_and_drop_v3.0.json CHANGED
@@ -4116,13 +4116,13 @@
         "hidden": false,
         "paused": false,
         "params": {
-          "filename": "…",
           "subfolder": "",
           "type": "output",
           "format": "video/h264-mp4",
           "frame_rate": 24,
-          "workflow": "…",
-          "fullpath": "/home/alissonerdx/tools/ComfyUI/output/…"
         }
       }
     }
@@ -4552,36 +4552,6 @@
         "automatic_prompt"
       ]
     },
-    {
-      "id": 103,
-      "type": "MarkdownNote",
-      "pos": [
-        -2068.514636368842,
-        1723.668729434968
-      ],
-      "size": [
-        734.4661458333334,
-        1067.265625
-      ],
-      "flags": {},
-      "order": 46,
-      "mode": 0,
-      "inputs": [],
-      "outputs": [],
-      "title": "Model Links",
-      "properties": {
-        "ue_properties": {
-          "widget_ue_connectable": {},
-          "version": "7.5.1",
-          "input_ue_unconnectable": {}
-        }
-      },
-      "widgets_values": [
"# LTX-2.3\n\n* Hugging Face: [Lightricks/LTX-2.3](https://huggingface.co/Lightricks/LTX-2.3/)\n* GitHub: [LTX-2](https://github.com/Lightricks/LTX-2)\n\n## LTX-2.3 Prompting Tips\n\n1. **Core Actions**: describe events and actions as they happen over time\n2. **Visual Details**: describe all visual details you want to appear in the video\n3. **Audio**: describe any sounds and dialogue needed for the scene\n\n## Report LTX-2.3 Issues\n\nTo report issues when running this workflow, go here:\n[https://github.com/Lightricks/ComfyUI-LTXVideo/issues](https://github.com/Lightricks/ComfyUI-LTXVideo/issues)\n\n---\n\n## Required Models and Files\n\n### diffusion_models\n\n**Option 1**\n\n* [ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/diffusion_models/ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors)\n\n> This model requires the **distilled LoRA** if you want to generate videos in **8 steps**.\n\n**Option 2**\n\n* [ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/diffusion_models/ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors)\n\n> This model **does not require** the distilled LoRA.\n\n---\n\n### vaes\n\n* [LTX23_audio_vae_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_audio_vae_bf16.safetensors)\n* [LTX23_video_vae_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_video_vae_bf16.safetensors)\n\n**For preview**\n\n* [taeltx2_3.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/taeltx2_3.safetensors)\n\n---\n\n### projection text encoder\n\n* [ltx-2.3_text_projection_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/text_encoders/ltx-2.3_text_projection_bf16.safetensors)\n\n---\n\n### text encoder\n\nYou can download the text encoder here:\n\n* 
[gemma_3_12B_it_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/ltx-2/blob/main/split_files/text_encoders/gemma_3_12B_it_fp8_scaled.safetensors)\n\n---\n\n### loras\n\n* [ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/loras/ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors)\n\n> If you download\n> **ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors**,\n> use the LoRA above.\n\n---\n\n### upscalers\n\n**Spatial upscaler**\n\n* [ltx-2.3-spatial-upscaler-x2-1.1.safetensors](https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-spatial-upscaler-x2-1.1.safetensors)\n\n**Temporal upscaler**\n\n* [ltx-2.3-temporal-upscaler-x2-1.0.safetensors](https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-temporal-upscaler-x2-1.0.safetensors)\n\n---\n\n## Model Folder Structure\n\n```text\nπ ComfyUI/\nβββ π models/\nβ βββ π diffusion_models/\nβ β βββ ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors\nβ β βββ ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors\nβ βββ π vae/\nβ β βββ LTX23_audio_vae_bf16.safetensors\nβ β βββ LTX23_video_vae_bf16.safetensors\nβ β βββ taeltx2_3.safetensors\nβ βββ π text_encoders/\nβ β βββ ltx-2.3_text_projection_bf16.safetensors\nβ β βββ gemma_3_12B_it_fp8_scaled.safetensors\nβ βββ π loras/\nβ β βββ ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors\nβ βββ π latent_upscale_models/\nβ βββ ltx-2.3-spatial-upscaler-x2-1.1.safetensors\nβ βββ ltx-2.3-temporal-upscaler-x2-1.0.safetensors\n```\n\n---\n\n## Important Notes\n\n* If you use **`ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors`**, you will also need the **distilled LoRA** to generate videos in **8 steps**.\n* If you use **`ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors`**, the **LoRA is not required**.\n* **`taeltx2_3.safetensors`** is used for **preview**.\n* Both **spatial** and 
**temporal** upscalers are optional.\n\n---\n\n## Report Issues\n\nBefore reporting any issue, make sure to update ComfyUI first:\n[ComfyUI update guide](https://docs.comfy.org/installation/update_comfyui)\n\n> Note: Desktop and Cloud releases follow stable builds, so some models supported in nightly versions may not be available yet.\n\n### Where to report each issue type\n\n* Cannot run / runtime errors: [ComfyUI/issues](https://github.com/Comfy-Org/ComfyUI/issues)\n* UI / frontend issues: [ComfyUI_frontend/issues](https://github.com/Comfy-Org/ComfyUI_frontend/issues)\n* Workflow issues: [workflow_templates/issues](https://github.com/Comfy-Org/workflow_templates/issues)\n\n"
-      ],
-      "color": "#222",
-      "bgcolor": "#000"
-    },
     {
       "id": 485,
       "type": "SetNode",
@@ -4707,13 +4677,13 @@
         "hidden": false,
         "paused": false,
         "params": {
-          "filename": "…",
           "subfolder": "",
           "type": "output",
           "format": "video/h264-mp4",
           "frame_rate": 24,
-          "workflow": "…",
-          "fullpath": "/home/alissonerdx/tools/ComfyUI/output/…"
         }
       }
     }
@@ -4853,7 +4823,7 @@
         99.54722764810117
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -4894,7 +4864,7 @@
         88
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [],
@@ -4923,7 +4893,7 @@
         234.34097313493703
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -4965,7 +4935,7 @@
         115.40288225207814
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5007,7 +4977,7 @@
         58
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5048,7 +5018,7 @@
         67.06517454371169
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5142,7 +5112,7 @@
         82
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5186,7 +5156,7 @@
       "flags": {
         "collapsed": true
       },
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5307,7 +5277,7 @@
       "flags": {
         "collapsed": true
       },
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5346,7 +5316,7 @@
         70
       ],
       "flags": {},
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5605,7 +5575,7 @@
       "flags": {
         "collapsed": false
       },
-      "order": …,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5702,421 +5672,451 @@
        "bgcolor": "#653"
      },
[Removed side of this hunk; most removed values are truncated in this rendering. Removed here: a video-loader node titled "Body Reference" (optional "vae" VAE input; "force_rate" FLOAT input, link 756; "frame_load_cap" INT input, link 757; outputs including "frame_count" INT, "audio" AUDIO, "video_info" VHS_VIDEOINFO, with one output wired to link 762; widget values including "custom_height": 0, "frame_load_cap": 121, "skip_first_frames": 28, "select_every_nth": 1, "format": "AnimateDiff"; videopreview params: "filename": "antes de pensar em matar pense na educaΓ§Γ£o oruam kkkk #oruam #viral #cantor #naoflopa #trend (1024p_29fps_H264-128kbit_AAC).mp4", "type": "input", "format": "video/mp4", "force_rate": 24, "custom_width": 0, "custom_height": 0, "frame_load_cap": 121, "skip_first_frames": 28, "select_every_nth": 1), followed by a video-describer node (optional "structured_output_format" STRING input with "link": null; widget values 0.7, 40, 0.9, 1.1, 42, 4096, 4096, 0, 5, 16, then the two prompt strings below:]
"You are a helpful AI assistant specialized in analyzing a sequence of video frames and generating a detailed and accurate textual description of the events. Describe the actions, people, objects, and how the scene evolves across the frames.",
"Analyze this composite video.\n\nThe video contains:\n1. a side chroma-key panel with a reference face image\n2. a main performance video showing the body, clothing, movement, hand actions, objects, framing, and environment\n\nYour task is to extract:\n- the target face identity from the side panel\n- the performance/action from the main video\n\nCritical rules:\n- The side-panel face is the only valid source for identity traits and head-level accessories.\n- Ignore the visible face and head appearance in the main video completely.\n- Do not describe any face, hair, hairstyle, hair color, eye color, makeup, facial features, facial expression, attractiveness, headwear, hood, hat, or accessories from the main video.\n- In the ACTION section, describe the performer only as \"a person\" and focus only on body movement, clothing, hand actions, objects, framing, and environment.\n- Do not mention the chroma panel, green background, split layout, or editing structure.\n- Be factual and non-creative.\n- Do not guess uncertain details. If a detail is not clearly visible, omit it.\n\nReturn exactly in this format:\nhead_swap:\n\nFACE:\nA brief but detailed objective identity description from the side-panel face only. Include, when clearly visible: apparent gender, apparent ethnicity, skin tone or complexion, approximate age range, head shape, hair or baldness pattern, hair color, eye color, facial hair, visible skin details, headwear or head covering, visible facial accessories, and any especially distinctive facial trait. Prioritize the eyes when they are a strong defining feature.\n\nACTION:\nA concise performance description from the main video. Include only: visible clothing, body position, movement, hand actions, objects being shown or handled, camera-facing behavior, framing, and environment. 
Do not include any face or head appearance from the main video.\n\nGood example:\nFACE:\nFemale, fair skin, approximately 20-30 years old, oval head shape, long wavy vivid blue-violet hair, bright golden-amber eyes with dark defined pupils, no facial hair, smooth skin, and pink flower hair accessories as a distinctive head adornment.\n\nACTION:\nA person in a dark top faces the camera indoors, holds a package of false eyelashes close to the lens, peels one lash from the backing, brings it near the eye area, and examines it while making small hand movements.\n\nBad example:\nACTION:\nA person with long curly blonde braids holds a pair of false eyelashes..."
-      ]
     },
[Also removed in this hunk, with most values truncated in this rendering: an image-loader node (comfy-core, "ver": "0.16.3", "Node name for S&R": "LoadImage", tab properties, "image": "clipspace/clipspace-painted-masked-1773855419084.png [input]"); a LoRA-loader node (comfy-core, "ver": "0.3.75", "Node name for S&R": "LoraLoaderModelOnly", "model" MODEL input with link 924, MODEL output with link 994, "models" entry "ltx-2.3-22b-distilled-lora-384.safetensors" from https://huggingface.co/Lightricks/LTX-2.3/resolve/main/ltx-2.3-22b-distilled-lora-384.safetensors into directory "loras", widget strength 1); and several further nodes (one with "order": 80, one with "order": 59, one output with "links": []) whose ids, types, positions, and remaining widget values are cut off.]
@@ -7099,10 +7099,10 @@
       "config": {},
       "extra": {
         "ds": {
-          "scale": 0.…,
           "offset": [
-            …,
-            -…
           ]
         },
         "frontendVersion": "1.41.20",
@@ -4116,13 +4116,13 @@
         "hidden": false,
         "paused": false,
         "params": {
+          "filename": "comparison_00467-audio.mp4",
           "subfolder": "",
           "type": "output",
           "format": "video/h264-mp4",
           "frame_rate": 24,
+          "workflow": "comparison_00467.png",
+          "fullpath": "/home/alissonerdx/tools/ComfyUI/output/comparison_00467-audio.mp4"
         }
       }
     }
|
| 4552 |
"automatic_prompt"
|
| 4553 |
]
|
| 4554 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 4555 |
{
|
| 4556 |
"id": 485,
|
| 4557 |
"type": "SetNode",
|
|
|
|
@@ -4707,13 +4677,13 @@
         "hidden": false,
         "paused": false,
         "params": {
+          "filename": "comparison_00468-audio.mp4",
           "subfolder": "",
           "type": "output",
           "format": "video/h264-mp4",
           "frame_rate": 24,
+          "workflow": "comparison_00468.png",
+          "fullpath": "/home/alissonerdx/tools/ComfyUI/output/comparison_00468-audio.mp4"
         }
       }
     }
@@ -4853,7 +4823,7 @@
         99.54722764810117
       ],
       "flags": {},
+      "order": 46,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -4894,7 +4864,7 @@
         88
       ],
       "flags": {},
+      "order": 47,
       "mode": 0,
       "inputs": [],
       "outputs": [],
@@ -4923,7 +4893,7 @@
         234.34097313493703
       ],
       "flags": {},
+      "order": 48,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -4965,7 +4935,7 @@
         115.40288225207814
       ],
       "flags": {},
+      "order": 49,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5007,7 +4977,7 @@
         58
       ],
       "flags": {},
+      "order": 50,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5048,7 +5018,7 @@
         67.06517454371169
       ],
       "flags": {},
+      "order": 51,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5142,7 +5112,7 @@
         82
       ],
       "flags": {},
+      "order": 52,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5186,7 +5156,7 @@
       "flags": {
         "collapsed": true
       },
+      "order": 53,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5307,7 +5277,7 @@
       "flags": {
         "collapsed": true
       },
+      "order": 54,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5346,7 +5316,7 @@
         70
       ],
       "flags": {},
+      "order": 55,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5605,7 +5575,7 @@
       "flags": {
         "collapsed": false
       },
+      "order": 56,
       "mode": 0,
       "inputs": [],
       "outputs": [
@@ -5702,421 +5672,451 @@
        "bgcolor": "#653"
      },
      {
+       "id": 419,
+       "type": "LoraLoaderModelOnly",
        "pos": [
+         -412.68540847850346,
+         1795.5732210770548
        ],
        "size": [
+         420,
+         95.546875
        ],
        "flags": {},
+       "order": 61,
+       "mode": 4,
        "inputs": [
          {
+           "name": "model",
+           "type": "MODEL",
+           "link": 924
          }
        ],
        "outputs": [
          {
+           "name": "MODEL",
+           "type": "MODEL",
            "links": [
+             994
            ]
          }
        ],
        "properties": {
+         "cnr_id": "comfy-core",
+         "ver": "0.3.75",
+         "Node name for S&R": "LoraLoaderModelOnly",
          "ue_properties": {
            "widget_ue_connectable": {},
            "version": "7.5.1",
            "input_ue_unconnectable": {}
+         },
+         "models": [
+           {
+             "name": "ltx-2.3-22b-distilled-lora-384.safetensors",
+             "url": "https://huggingface.co/Lightricks/LTX-2.3/resolve/main/ltx-2.3-22b-distilled-lora-384.safetensors",
+             "directory": "loras"
+           }
+         ],
+         "enableTabs": false,
+         "tabWidth": 65,
+         "tabXOffset": 10,
+         "hasSecondTab": false,
+         "secondTabText": "Send Back",
+         "secondTabOffset": 80,
+         "secondTabWidth": 65
        },
+       "widgets_values": [
+         "ltx-2/ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors",
+         1
+       ],
        "color": "#432",
        "bgcolor": "#653"
      },
      {
+       "id": 395,
+       "type": "CLIPTextEncode",
        "pos": [
+         105.06267988767536,
+         2345.8241948975415
        ],
        "size": [
+         442.98350960610696,
+         152.85336505029727
        ],
        "flags": {},
+       "order": 66,
        "mode": 0,
        "inputs": [
          {
+           "name": "clip",
+           "type": "CLIP",
+           "link": 976
          }
        ],
        "outputs": [
          {
+           "name": "CONDITIONING",
+           "type": "CONDITIONING",
            "links": [
+             977
            ]
          }
        ],
        "properties": {
+         "cnr_id": "comfy-core",
+         "ver": "0.3.56",
+         "Node name for S&R": "CLIPTextEncode",
          "ue_properties": {
            "widget_ue_connectable": {},
+           "version": "7.5.1",
+           "input_ue_unconnectable": {}
+         },
+         "enableTabs": false,
+         "tabWidth": 65,
+         "tabXOffset": 10,
+         "hasSecondTab": false,
+         "secondTabText": "Send Back",
+         "secondTabOffset": 80,
+         "secondTabWidth": 65
        },
        "widgets_values": [
+         "pc game, console game, video game, cartoon, childish, ugly, artifacts, low resolution, blurry, jagged edges"
+       ],
+       "color": "#322",
+       "bgcolor": "#533"
      },
      {
+       "id": 448,
+       "type": "SetNode",
        "pos": [
+         -732.6252550321485,
+         1824.779061263387
        ],
        "size": [
+         210,
+         60
        ],
+       "flags": {
+         "collapsed": true
+       },
+       "order": 80,
        "mode": 0,
+       "inputs": [
          {
+           "name": "MODEL",
+           "type": "MODEL",
+           "link": 940
+         }
+       ],
+       "outputs": [
          {
+           "name": "*",
+           "type": "*",
            "links": []
          }
        ],
+       "title": "Set_base_model",
        "properties": {
+         "previousName": "base_model",
          "ue_properties": {
            "widget_ue_connectable": {},
            "version": "7.5.1",
            "input_ue_unconnectable": {}
+         }
        },
        "widgets_values": [
+         "base_model"
        ],
+       "color": "#223",
+       "bgcolor": "#335"
      },
      {
+       "id": 617,
+       "type": "Note",
        "pos": [
+         -653.4013205940807,
+         4215.040495346636
        ],
        "size": [
+         334.65543028292257,
+         115.60227384836162
        ],
        "flags": {},
+       "order": 57,
+       "mode": 0,
+       "inputs": [],
+       "outputs": [],
        "properties": {
          "ue_properties": {
            "widget_ue_connectable": {},
            "version": "7.5.1",
            "input_ue_unconnectable": {}
+         }
        },
        "widgets_values": [
+         "Clone this repository https://github.com/alisson-anjos/ComfyUI-BFSNodes into your custom nodes, or try finding it in the manager; it contains the node mentioned above."
        ],
+       "color": "#322",
+       "bgcolor": "#533"
      },
      {
+       "id": 586,
+       "type": "OllamaVideoDescriber",
        "pos": [
+         618.4054441273289,
+         1992.375622227452
        ],
        "size": [
+         404.4671263117698,
+         578.4082736118526
        ],
        "flags": {},
+       "order": 79,
        "mode": 0,
        "inputs": [
          {
+           "name": "video_frames",
+           "type": "IMAGE",
+           "link": 1053
+         },
+         {
+           "name": "structured_output_format",
+           "shape": 7,
+           "type": "STRING",
+           "link": null
          }
        ],
        "outputs": [
          {
+           "name": "result",
+           "type": "STRING",
            "links": [
+             1054
            ]
          }
        ],
        "properties": {
+         "cnr_id": "comfyui-ollama-describer",
+         "ver": "ffb35632a89f1a10a8dd8d4489266a996778b471",
+         "Node name for S&R": "OllamaVideoDescriber",
+         "ue_properties": {
+           "widget_ue_connectable": {},
+           "input_ue_unconnectable": {},
+           "version": "7.5.1"
+         }
+       },
+       "widgets_values": [
+         "qwen3.5:9b (6.6GB)",
+         "",
+         "http://localhost:11434",
+         300,
+         0.7,
+         40,
+         0.9,
+         1.1,
+         42,
+         4096,
+         4096,
+         0,
+         5,
+         16,
+         "You are a helpful AI assistant specialized in analyzing a sequence of video frames and generating a detailed and accurate textual description of the events. Describe the actions, people, objects, and how the scene evolves across the frames.",
"Analyze this composite video.\n\nThe video contains:\n1. a side chroma-key panel with a reference face image\n2. a main performance video showing the body, clothing, movement, hand actions, objects, framing, and environment\n\nYour task is to extract:\n- the target face identity from the side panel\n- the performance/action from the main video\n\nCritical rules:\n- The side-panel face is the only valid source for identity traits and head-level accessories.\n- Ignore the visible face and head appearance in the main video completely.\n- Do not describe any face, hair, hairstyle, hair color, eye color, makeup, facial features, facial expression, attractiveness, headwear, hood, hat, or accessories from the main video.\n- In the ACTION section, describe the performer only as \"a person\" and focus only on body movement, clothing, hand actions, objects, framing, and environment.\n- Do not mention the chroma panel, green background, split layout, or editing structure.\n- Be factual and non-creative.\n- Do not guess uncertain details. If a detail is not clearly visible, omit it.\n\nReturn exactly in this format:\nhead_swap:\n\nFACE:\nA brief but detailed objective identity description from the side-panel face only. Include, when clearly visible: apparent gender, apparent ethnicity, skin tone or complexion, approximate age range, head shape, hair or baldness pattern, hair color, eye color, facial hair, visible skin details, headwear or head covering, visible facial accessories, and any especially distinctive facial trait. Prioritize the eyes when they are a strong defining feature.\n\nACTION:\nA concise performance description from the main video. Include only: visible clothing, body position, movement, hand actions, objects being shown or handled, camera-facing behavior, framing, and environment. 
Do not include any face or head appearance from the main video.\n\nGood example:\nFACE:\nFemale, fair skin, approximately 20-30 years old, oval head shape, long wavy vivid blue-violet hair, bright golden-amber eyes with dark defined pupils, no facial hair, smooth skin, and pink flower hair accessories as a distinctive head adornment.\n\nACTION:\nA person in a dark top faces the camera indoors, holds a package of false eyelashes close to the lens, peels one lash from the backing, brings it near the eye area, and examines it while making small hand movements.\n\nBad example:\nACTION:\nA person with long curly blonde braids holds a pair of false eyelashes..."
|
| 5925 |
+
]
|
| 5926 |
+
},
|
| 5927 |
+
{
|
| 5928 |
+
"id": 103,
|
| 5929 |
+
"type": "MarkdownNote",
|
| 5930 |
+
"pos": [
|
| 5931 |
+
-2068.514636368842,
|
| 5932 |
+
1723.668729434968
|
| 5933 |
+
],
|
| 5934 |
+
"size": [
|
| 5935 |
+
734.4661458333334,
|
| 5936 |
+
1067.265625
|
| 5937 |
+
],
|
| 5938 |
+
"flags": {},
|
| 5939 |
+
"order": 58,
|
| 5940 |
+
"mode": 0,
|
| 5941 |
+
"inputs": [],
|
| 5942 |
+
"outputs": [],
|
| 5943 |
+
"title": "Model Links",
|
| 5944 |
+
"properties": {
|
| 5945 |
"ue_properties": {
|
| 5946 |
"widget_ue_connectable": {},
|
| 5947 |
"version": "7.5.1",
|
| 5948 |
"input_ue_unconnectable": {}
|
| 5949 |
+
}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 5950 |
},
|
| 5951 |
"widgets_values": [
|
| 5952 |
+
"# LTX-2.3\n\n* Hugging Face: [Lightricks/LTX-2.3](https://huggingface.co/Lightricks/LTX-2.3/)\n* GitHub: [LTX-2](https://github.com/Lightricks/LTX-2)\n\n## LTX-2.3 Prompting Tips\n\n1. **Core Actions**: describe events and actions as they happen over time\n2. **Visual Details**: describe all visual details you want to appear in the video\n3. **Audio**: describe any sounds and dialogue needed for the scene\n\n## Report LTX-2.3 Issues\n\nTo report issues when running this workflow, go here:\n[https://github.com/Lightricks/ComfyUI-LTXVideo/issues](https://github.com/Lightricks/ComfyUI-LTXVideo/issues)\n\n---\n\n## Required Models and Files\n\n### diffusion_models\n\n**Option 1**\n\n* [ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/diffusion_models/ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors)\n\n> This model requires the **distilled LoRA** if you want to generate videos in **8 steps**.\n\n**Option 2**\n\n* [ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/diffusion_models/ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors)\n\n> This model **does not require** the distilled LoRA.\n\n---\n\n### vaes\n\n* [LTX23_audio_vae_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_audio_vae_bf16.safetensors)\n* [LTX23_video_vae_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/LTX23_video_vae_bf16.safetensors)\n\n**For preview**\n\n* [taeltx2_3.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/vae/taeltx2_3.safetensors)\n\n---\n\n### projection text encoder\n\n* [ltx-2.3_text_projection_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/text_encoders/ltx-2.3_text_projection_bf16.safetensors)\n\n---\n\n### text encoder\n\nYou can download the text encoder here:\n\n* 
[gemma_3_12B_it_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/ltx-2/blob/main/split_files/text_encoders/gemma_3_12B_it_fp8_scaled.safetensors)\n\n---\n\n### loras\n\n* [ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors](https://huggingface.co/Kijai/LTX2.3_comfy/blob/main/loras/ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors)\n\n* [head_swap_v3_rank_adaptive_fro_098.safetensors](https://huggingface.co/Alissonerdx/BFS-Best-Face-Swap-Video/blob/main/ltx-2.3/head_swap_v3_rank_adaptive_fro_098.safetensors)\n\n* [head_swap_v3_rank_64.safetensors](https://huggingface.co/Alissonerdx/BFS-Best-Face-Swap-Video/resolve/main/ltx-2.3/head_swap_v3_rank_64.safetensors)\n\n> If you download\n> **ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors**,\n> use the LoRA above.\n\n---\n\n### upscalers\n\n**Spatial upscaler**\n\n* [ltx-2.3-spatial-upscaler-x2-1.1.safetensors](https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-spatial-upscaler-x2-1.1.safetensors)\n\n**Temporal upscaler**\n\n* [ltx-2.3-temporal-upscaler-x2-1.0.safetensors](https://huggingface.co/Lightricks/LTX-2.3/blob/main/ltx-2.3-temporal-upscaler-x2-1.0.safetensors)\n\n---\n\n## Model Folder Structure\n\n```text\nπ ComfyUI/\nβββ π models/\nβ βββ π diffusion_models/\nβ β βββ ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors\nβ β βββ ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors\nβ βββ π vae/\nβ β βββ LTX23_audio_vae_bf16.safetensors\nβ β βββ LTX23_video_vae_bf16.safetensors\nβ β βββ taeltx2_3.safetensors\nβ βββ π text_encoders/\nβ β βββ ltx-2.3_text_projection_bf16.safetensors\nβ β βββ gemma_3_12B_it_fp8_scaled.safetensors\nβ βββ π loras/\nβ β βββ ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors\nβ βββ π latent_upscale_models/\nβ βββ ltx-2.3-spatial-upscaler-x2-1.1.safetensors\nβ βββ ltx-2.3-temporal-upscaler-x2-1.0.safetensors\n```\n\n---\n\n## Important Notes\n\n* If you use 
**`ltx-2-3-22b-dev_transformer_only_fp8_input_scaled.safetensors`**, you will also need the **distilled LoRA** to generate videos in **8 steps**.\n* If you use **`ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled_v3.safetensors`**, the **LoRA is not required**.\n* **`taeltx2_3.safetensors`** is used for **preview**.\n* Both **spatial** and **temporal** upscalers are optional.\n\n---\n\n## Report Issues\n\nBefore reporting any issue, make sure to update ComfyUI first:\n[ComfyUI update guide](https://docs.comfy.org/installation/update_comfyui)\n\n> Note: Desktop and Cloud releases follow stable builds, so some models supported in nightly versions may not be available yet.\n\n### Where to report each issue type\n\n* Cannot run / runtime errors: [ComfyUI/issues](https://github.com/Comfy-Org/ComfyUI/issues)\n* UI / frontend issues: [ComfyUI_frontend/issues](https://github.com/Comfy-Org/ComfyUI_frontend/issues)\n* Workflow issues: [workflow_templates/issues](https://github.com/Comfy-Org/workflow_templates/issues)\n\n"
|
| 5953 |
],
|
| 5954 |
+
"color": "#222",
|
| 5955 |
+
"bgcolor": "#000"
|
| 5956 |
},
|
| 5957 |
{
|
| 5958 |
+
"id": 345,
|
| 5959 |
+
"type": "VHS_LoadVideo",
|
| 5960 |
"pos": [
|
| 5961 |
+
-1257.865153821111,
|
| 5962 |
+
2962.074064582823
|
| 5963 |
],
|
| 5964 |
"size": [
|
| 5965 |
+
377.9932063472643,
|
| 5966 |
+
959.9879308292182
|
| 5967 |
],
|
| 5968 |
+
"flags": {},
|
| 5969 |
+
"order": 72,
|
|
|
|
|
|
|
| 5970 |
"mode": 0,
|
| 5971 |
"inputs": [
|
| 5972 |
{
|
| 5973 |
+
"name": "meta_batch",
|
| 5974 |
+
"shape": 7,
|
| 5975 |
+
"type": "VHS_BatchManager",
|
| 5976 |
+
"link": null
|
| 5977 |
+
},
|
| 5978 |
+
{
|
| 5979 |
+
"name": "vae",
|
| 5980 |
+
"shape": 7,
|
| 5981 |
+
"type": "VAE",
|
| 5982 |
+
"link": null
|
| 5983 |
+
},
|
| 5984 |
+
{
|
| 5985 |
+
"name": "force_rate",
|
| 5986 |
+
"type": "FLOAT",
|
| 5987 |
+
"widget": {
|
| 5988 |
+
"name": "force_rate"
|
| 5989 |
+
},
|
| 5990 |
+
"link": 756
|
| 5991 |
+
},
|
| 5992 |
+
{
|
| 5993 |
+
"name": "frame_load_cap",
|
| 5994 |
+
"type": "INT",
|
| 5995 |
+
"widget": {
|
| 5996 |
+
"name": "frame_load_cap"
|
| 5997 |
+
},
|
| 5998 |
+
"link": 757
|
| 5999 |
}
|
| 6000 |
],
|
| 6001 |
"outputs": [
|
| 6002 |
{
|
| 6003 |
+
"name": "IMAGE",
|
| 6004 |
+
"type": "IMAGE",
|
| 6005 |
+
"links": [
|
| 6006 |
+
762
|
| 6007 |
+
]
|
| 6008 |
+
},
|
| 6009 |
+
{
|
| 6010 |
+
"name": "frame_count",
|
| 6011 |
+
"type": "INT",
|
| 6012 |
+
"links": null
|
| 6013 |
+
},
|
| 6014 |
+
{
|
| 6015 |
+
"name": "audio",
|
| 6016 |
+
"type": "AUDIO",
|
| 6017 |
+
"links": [
|
| 6018 |
+
766
|
| 6019 |
+
]
|
| 6020 |
+
},
|
| 6021 |
+
{
|
| 6022 |
+
"name": "video_info",
|
| 6023 |
+
"type": "VHS_VIDEOINFO",
|
| 6024 |
+
"links": null
|
| 6025 |
}
|
| 6026 |
],
|
| 6027 |
+
"title": "Body Reference",
|
| 6028 |
"properties": {
|
| 6029 |
+
"cnr_id": "comfyui-videohelpersuite",
|
| 6030 |
+
"ver": "1.7.9",
|
| 6031 |
+
"Node name for S&R": "VHS_LoadVideo",
|
| 6032 |
"ue_properties": {
|
| 6033 |
"widget_ue_connectable": {},
|
| 6034 |
"version": "7.5.1",
|
| 6035 |
"input_ue_unconnectable": {}
|
| 6036 |
}
|
| 6037 |
},
|
| 6038 |
+
"widgets_values": {
|
| 6039 |
+
"video": "8936899257439066.mp4",
|
| 6040 |
+
"force_rate": 24,
|
| 6041 |
+
"custom_width": 0,
|
| 6042 |
+
"custom_height": 0,
|
| 6043 |
+
"frame_load_cap": 121,
|
| 6044 |
+
"skip_first_frames": 28,
|
| 6045 |
+
"select_every_nth": 1,
|
| 6046 |
+
"format": "AnimateDiff",
|
| 6047 |
+
"videopreview": {
|
| 6048 |
+
"hidden": false,
|
| 6049 |
+
"paused": false,
|
| 6050 |
+
"params": {
|
| 6051 |
+
"filename": "8936899257439066.mp4",
|
| 6052 |
+
"type": "input",
|
| 6053 |
+
"format": "video/mp4",
|
| 6054 |
+
"force_rate": 24,
|
| 6055 |
+
"custom_width": 0,
|
| 6056 |
+
"custom_height": 0,
|
| 6057 |
+
"frame_load_cap": 121,
|
| 6058 |
+
"skip_first_frames": 28,
|
| 6059 |
+
"select_every_nth": 1
|
| 6060 |
+
}
|
| 6061 |
+
}
|
| 6062 |
+
},
|
| 6063 |
+
"color": "#432",
|
| 6064 |
+
"bgcolor": "#653"
|
| 6065 |
},
|
| 6066 |
{
|
| 6067 |
+
"id": 269,
|
| 6068 |
+
"type": "LoadImage",
|
| 6069 |
"pos": [
|
| 6070 |
+
-731.0480815783342,
|
| 6071 |
+
3148.0437192024465
|
| 6072 |
],
|
| 6073 |
"size": [
|
| 6074 |
+
413.37792652217263,
|
| 6075 |
+
544.3087931787361
|
| 6076 |
],
|
| 6077 |
"flags": {},
|
| 6078 |
"order": 59,
|
| 6079 |
"mode": 0,
|
| 6080 |
"inputs": [],
|
| 6081 |
+
"outputs": [
|
| 6082 |
+
{
|
| 6083 |
+
"name": "IMAGE",
|
| 6084 |
+
"type": "IMAGE",
|
| 6085 |
+
"links": [
|
| 6086 |
+
1083
|
| 6087 |
+
]
|
| 6088 |
+
},
|
| 6089 |
+
{
|
| 6090 |
+
"name": "MASK",
|
| 6091 |
+
"type": "MASK",
|
| 6092 |
+
"links": []
|
| 6093 |
+
}
|
| 6094 |
+
],
|
| 6095 |
+
"title": "Face Reference",
|
| 6096 |
"properties": {
|
| 6097 |
+
"cnr_id": "comfy-core",
|
| 6098 |
+
"ver": "0.16.3",
|
| 6099 |
+
"Node name for S&R": "LoadImage",
|
| 6100 |
+
"enableTabs": false,
|
| 6101 |
+
"tabWidth": 65,
|
| 6102 |
+
"tabXOffset": 10,
|
| 6103 |
+
"hasSecondTab": false,
|
| 6104 |
+
"secondTabText": "Send Back",
|
| 6105 |
+
"secondTabOffset": 80,
|
| 6106 |
+
"secondTabWidth": 65,
|
| 6107 |
"ue_properties": {
|
| 6108 |
"widget_ue_connectable": {},
|
| 6109 |
"version": "7.5.1",
|
| 6110 |
"input_ue_unconnectable": {}
|
| 6111 |
+
},
|
| 6112 |
+
"image": "clipspace/clipspace-painted-masked-1773855419084.png [input]"
|
| 6113 |
},
|
| 6114 |
"widgets_values": [
|
| 6115 |
+
"03e55e2c-c63e-4bad-86b1-ac78d2557ee8.jpg",
|
| 6116 |
+
"image"
|
| 6117 |
],
|
| 6118 |
+
"color": "#432",
|
| 6119 |
+
"bgcolor": "#653"
|
| 6120 |
}
|
| 6121 |
],
|
| 6122 |
"links": [
|
|
|
|
| 7099 |
"config": {},
|
| 7100 |
"extra": {
|
| 7101 |
"ds": {
|
| 7102 |
+
"scale": 0.3138428376721013,
|
| 7103 |
"offset": [
|
| 7104 |
+
2893.07577969285,
|
| 7105 |
+
-599.2866900931124
|
| 7106 |
]
|
| 7107 |
},
|
| 7108 |
"frontendVersion": "1.41.20",
|