---
tags:
- comfyui
- workflow
- image-generation
- text-to-image
- image-to-video
- video-generation
- wan
- qwen
- z-image
- zit
- sdxl
- ai-art
- generative-ai
pretty_name: echosingularity workflows
license: creativeml-openrail-m
---

WAN 2.2 – I2V Workflow (Optimized for 12GB GPUs)

A fast, clean, VRAM-efficient Image-to-Video workflow built around WAN 2.2, with fast render times on mid-range GPUs. I kept it simple and easy to use while maintaining good results, relying on well-known nodes and minimizing node bloat. The workflow is commented throughout and has a clear flow.

Ver 1.0 - Base workflow; renders 5-second clips in one pass (very fast for 12GB).

Ver 1.1 - Improved stability; can run 100 generations consecutively in 8 hours.

Ver 1.2 - Renders 20-second videos. Cleaned up wires.

Ver 1.3 - Added MMAudio.

Ver 1.4 - 2x upscaling, color correction, and sharpening between passes for quality consistency.

Ver 1.5 - Fixed MMAudio, updated controls, and added easy 5-, 10-, 15-, and 20-second video options. Split RIFE between phases. Fixed prompts. Cleaned up the workflow.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

QWEN Image Edit workflow (Optimized for 12GB GPUs)

Designed to run large AIO QWEN checkpoints (≈28GB) while still generating high-resolution outputs on 12GB VRAM GPUs.

The focus here is:

    Image editing / guided edits

    Very low step counts

    Stable results at low CFG

    Aggressive memory management

    Clean upscale + post polish

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Z-Image Turbo workflow (Optimized multi-phase)

Designed to extract maximum detail, edge fidelity, and material realism on 12GB VRAM GPUs. This workflow also adds seed variance to the conditioning, so outputs from the same prompt show more variety, similar to SDXL, Pony, and IL models.
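As a conceptual sketch only (not the workflow's actual nodes), "seed variance on the conditioning" can be thought of as perturbing the prompt embedding with a small amount of seed-dependent noise, so identical prompts drift slightly from seed to seed; the function name, noise strength, and embedding shape below are illustrative assumptions:

```python
import numpy as np

def vary_conditioning(cond, seed, strength=0.02):
    """Add small seed-dependent noise to a conditioning embedding
    so that the same prompt produces more varied outputs per seed."""
    rng = np.random.default_rng(seed)
    return cond + strength * rng.standard_normal(cond.shape)

# Dummy embedding with a typical (tokens, channels) shape.
base = np.zeros((77, 768))
a = vary_conditioning(base, seed=1)
b = vary_conditioning(base, seed=2)
# Different seeds perturb the conditioning differently.
```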

This workflow uses controlled sigma shaping, Res-2 samplers, and phased refinement passes to stabilize detail while avoiding common ZIT artifacts such as:

    Over-etched hair

    Shimmering edges

    Checkerboard blockiness

    CFG-induced harshness

The result is clean, high-contrast outputs that scale well across portraits, fashion, cinematic scenes, and hard-surface material tests.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The Auto IMG Batch Caption workflow automatically generates clean, structured image captions by combining WD14 tagging, Florence-style natural-language descriptions, and a custom trigger token for training consistency. The goal is proven, one-click captioning of datasets for training. I have made many high-quality LoRAs from the datasets this workflow outputs.

    Uses WD14 to extract high-quality tag metadata

    Uses Florence to generate a natural-language image description

    Injects a custom trigger token at the start of every caption

    Outputs both tags + descriptive text in a single caption block

    Saves captions to a user-defined folder inside ComfyUI/output


Important Setup Note (read before first run)

You must create a folder inside:

ComfyUI/input/

Example:

ComfyUI/input/Captions

Then select that folder in the caption loader node. 
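If you prefer the command line, the example folder above can be created like this (run from the directory that contains `ComfyUI`; the `Captions` name is just the example and any folder name works):

```shell
# Create the input folder the caption loader node will read from.
mkdir -p ComfyUI/input/Captions
```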

Captions follow this format:

TRIGGER, wd14_tags_here,
florence_generated_description_here
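The format above can be sketched as a small helper; this is a hypothetical illustration of how the pieces combine (the function and argument names are my own, not nodes from the workflow):

```python
def build_caption(trigger, wd14_tags, florence_description):
    """Join the trigger token, WD14 tags, and Florence description
    into the two-line caption block the workflow saves."""
    tag_line = f"{trigger}, {', '.join(wd14_tags)},"
    return f"{tag_line}\n{florence_description}"

caption = build_caption(
    "MYTOKEN",                           # custom trigger token
    ["1girl", "outdoors", "smiling"],    # example WD14 tags
    "A woman smiling in a sunlit park."  # example Florence output
)
print(caption)
# MYTOKEN, 1girl, outdoors, smiling,
# A woman smiling in a sunlit park.
```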