Am I Insane? = Cannot Find Simple Workflow That Works

#26
by ScramboSplinergy - opened

I would hope for a day that it doesn't take a week of troubleshooting every time a new model comes out πŸ˜†

I've been all over the webs, can't find a super simple fp8 workflow that works. The simplest ones are 3x more complicated than my past wan workflows, and have one issue to fix after another with not a single generation yet. I'm diving back in right after I post this but please...

Someone....

Where are the workflows!

Here is a simple, basic workflow without too many bells and whistles, back to basics. It can be used with both fp8 and GGUF.
Kept it simple as a start, a bit "Wan ish". (but without the 2nd pass latent upscale, so the quality will be slightly lower)

All models are from this repo: https://huggingface.co/Kijai/LTXV2_comfy

Download workflow here : https://huggingface.co/RuneXX/LTX-2-Workflows/tree/main

(I can try to make a simple one with a 2nd pass upscale too. But at least this one should get you the very first generation... hopefully.)

The video doesn't have any workflow data attached to it.

Hmm, not sure if an update did something, or if the LTX-2 workflow is too complicated, but it's not saving the workflow into the video.
So for now here: https://huggingface.co/RuneXX/LTX-2-Workflows/tree/main


Made a fairly simple one with 2nd pass upscale as well for better output quality. Hopefully not too complicated.

Workflow example available here : https://huggingface.co/RuneXX/LTX-2-Workflows/tree/main

Can I run the GGUF model on 12 GB VRAM and 32 GB RAM?
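A rough way to reason about that is weight size. A minimal back-of-envelope sketch; the parameter count and bits-per-weight below are illustrative assumptions, not official LTX-2 numbers:

```python
# Back-of-envelope: approximate memory needed for quantized weights.
# Illustrative only; real usage also includes activations, the VAE,
# and the text encoder, so headroom matters.

def gguf_size_gb(params_billions, bits_per_weight):
    """Approximate size in GB of a model's quantized weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Hypothetical 13B model at ~4.5 effective bits/weight (Q4-style
# quants store scales alongside the 4-bit values).
print(round(gguf_size_gb(13, 4.5), 2))  # 7.31
```

Weights that don't fit in 12 GB VRAM can often still run via ComfyUI's automatic offloading to system RAM, just slower.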


Yeah, no. This one OOMs even at low res; unusable vs. default LTX.

Thanks for the assist, but I am looking for a T2V workflow. I've gotten an I2V workflow through every error, but the Gemma model loader doesn't want to load any of my Gemma files: neither the one-file fp8-quantized .safetensors nor the two-part fp8 files (0001/0002) recommended by AI Search (not going to bother getting the links; it's the standard Gemma loader that everyone drops in their I2V workflows). The full one is a bit resource-intense for my taste, and again, I'd really prefer a simple T2V. I've gotten better with the nodes, but building this one from scratch is beyond me.

These text encoders should work:
https://huggingface.co/Comfy-Org/ltx-2/tree/main/split_files/text_encoders

As for T2V, you just need to set an empty image (node) as the start.
(I'll upload a super simple T2V too.)
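Conceptually, that "empty image" start is just a solid black frame at the target resolution, so the I2V graph has no real conditioning image and behaves like T2V. A minimal sketch in plain Python (not ComfyUI node code):

```python
# What an "empty image" node effectively produces: a black RGB frame.
# Plain-Python illustration of the idea, not actual ComfyUI code.

def empty_start_frame(width=768, height=512):
    """One black RGB frame as rows of (0, 0, 0) pixel tuples."""
    return [[(0, 0, 0)] * width for _ in range(height)]

frame = empty_start_frame()
print(len(frame), len(frame[0]))  # 512 768
```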


Thanks for going out of your way to dig the link up (3 seconds is 3 seconds), but unfortunately I already have the fp8 from that list and it throws this error. But keep in mind:

I've gone through maybe 8 or 9 I2V LTXV2 workflows, and with each of them I always hit a point of no solution for the issue. This one is the furthest I've gotten, so it may just be this one not liking my Gemma files.

I'm just waiting for the day ComfyUI integrates "Interfaces": a layer that sits atop the workflow for uncomplicated use. Skinnable, fun. It still won't make things just work, but it would make a significant difference in the growth of this community. A better self-patching system, or perhaps a place in the workflow's .json data for an easy-fetch tool that grabs every listed model you're missing, as long as the workflow author drops all of its links in.

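That easy-fetch idea could look something like this. A sketch, assuming a hypothetical `model_urls` key in the workflow JSON (no such key exists in the real ComfyUI workflow format):

```python
import json
from urllib.parse import urlparse

# Sketch of the "easy-fetch" idea: the workflow JSON lists model
# download URLs, and a tool reports which files you're missing.
# The "model_urls" key is a made-up schema for illustration.

def missing_models(workflow_json, have):
    wf = json.loads(workflow_json)
    missing = []
    for url in wf.get("model_urls", []):
        filename = urlparse(url).path.rsplit("/", 1)[-1]
        if filename not in have:
            missing.append(url)
    return missing

wf = json.dumps({"model_urls": [
    "https://example.com/models/ltx2_fp8.safetensors",
    "https://example.com/models/gemma_fp8.safetensors",
]})
print(missing_models(wf, have={"ltx2_fp8.safetensors"}))
```

A real version would then download the missing files and drop them into the right ComfyUI model folders.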

Yeah, some days I miss Automatic1111 and other GUI-based ways ;-)
Took me a while to get comfy in Comfy... ;-)

This one comes close: https://github.com/deepbeepmeep/Wan2GP . It has its own GUI and interface (Gradio-based).
It's a bit more user-friendly, perhaps, but less powerful, since you can do a lot of tinkering and experimenting in Comfy.

And there are a few others, like SwarmUI, Stability Matrix, and InvokeAI.
But they are not as fast as ComfyUI at adding new things, so I doubt any of them has LTX-2 yet.
https://www.reddit.com/r/StableDiffusion/comments/1m89x9w/alternatives_for_automatic1111_in_2025/

I would stick with ComfyUI though; it's a bit of a learning curve, but worth it ;-)

not liking my gemma files

Try updating Comfy and all the nodes.
Inside ComfyUI, open the Manager and try Update All.


Skipped updating my dependencies. RIP. Also figured out the latent image thing. Never had to figure that out before; every workflow has one 🤣
Thanks again for your help 🫑

ScramboSplinergy changed discussion status to closed
ScramboSplinergy changed discussion status to open

So that didn't solve the issue, and I'm currently stumped.

'VAE' object has no attribute 'latent_frequency_bins' -


That means your ComfyUI is not up to date, and probably KJNodes isn't either.
Update everything ;-)
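For the curious, that AttributeError is just newer node code reading a field that older ComfyUI builds never define on their VAE object. A toy Python illustration (everything here except the attribute name is made up):

```python
# Old vs. new: the node code does vae.latent_frequency_bins, which
# only exists on a recent ComfyUI VAE. Toy classes; only the
# attribute name matches the real error.

class OldVAE:                       # what an outdated install provides
    pass

class NewVAE:                       # what current node code expects
    latent_frequency_bins = 64      # value is purely illustrative

def needs_update(vae):
    """True if this VAE predates the attribute the nodes rely on."""
    return not hasattr(vae, "latent_frequency_bins")

print(needs_update(OldVAE()))  # True  -> update ComfyUI
print(needs_update(NewVAE()))  # False
```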

https://huggingface.co/QuantStack/LTX-2-GGUF/discussions/6
https://huggingface.co/Kijai/LTXV2_comfy/discussions/10

I tried to change the "LTX-2 - I2V Basic (GGUF).json" workflow to make a vertical video.
This somehow completely messes up the quality.
Do you know what needs to be changed to make it better?
I assume LTX-2 is able to make vertical videos.

I heard LTX-2 is bad at vertical/portrait video πŸ€”

Seems fine to me (but I've only done a few).
I used the GGUF workflow with the Dev model + distill LoRA (10 steps, cfg 1, euler_a).
