Works great πŸ™Œ πŸŽ‰

#2
opened by RuneXX

First run worked ;-) ... yay, hehe ;-)
And super thanks for making the split models ;-)

Which workflow are you using?

I'm so envious lol, need to wait till GGUF is available; my poor machine can't handle FP8. LOL outside, CRY HARD inside; max at Q6. But it's indeed so beautiful!!

Which workflow are you using?

Just the same old workflows, but swapping out the models and the main model loaders as per the front-page model card:
https://huggingface.co/Kijai/LTX2.3_comfy

Uploaded one to test if you need one: https://huggingface.co/RuneXX/LTX-2.3-Workflows/

Constant error
VAELoaderKJ
ERROR: VAE is invalid: None

If the VAE is from a checkpoint loader node your checkpoint does not contain a valid VAE.

The VAE is not from the checkpoint loader. The models here are split into separate files:
https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/vae

So if you are using the models in this repo, it's based on separate files instead of one huge "all-in-one".

Look at the image on the front page: https://huggingface.co/Kijai/LTX2.3_comfy/
It explains how to use them.
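In case the front-page image doesn't load for someone, the split files typically land in ComfyUI's default model folders, roughly like this (a sketch only; the exact subfolders depend on which loader nodes the workflow uses):

```text
ComfyUI/models/
β”œβ”€ diffusion_models/   # the LTX-2.3 main model file
β”œβ”€ text_encoders/      # the LTX text encoder
└─ vae/                # the LTX-2.3 video VAE and the new audio VAE
```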

I use the separate files.
[screenshot of the node setup]

Looks correct at a quick glance.
Try updating KJNodes, it had some recent changes for LTX-2.3 (specifically adding support for the new LTX-2.3 audio VAE).
That must be it ;-)

Thank you. The latest release runs, but it didn't solve the problem; only the nightly build fixed the bug.

Yes, both the model and the update are fresh out of the oven.
So I bet KJNodes hasn't set a new version number "just" for this.

(I always do a git pull in the folder. The ComfyUI Manager is perhaps based on version numbers, so the nightly would do as you said.)
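For anyone unsure what "a git pull in the folder" means, something like this does it, assuming KJNodes lives in the default custom_nodes location (COMFY_DIR is an assumption; point it at your actual ComfyUI install):

```shell
# COMFY_DIR and the node folder name are assumptions; adjust to your setup.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
NODE_DIR="$COMFY_DIR/custom_nodes/ComfyUI-KJNodes"

if [ -d "$NODE_DIR/.git" ]; then
    # Pulls the latest commits, including fixes not yet in a tagged release.
    git -C "$NODE_DIR" pull
else
    echo "Not a git checkout: $NODE_DIR (if installed via Manager, update from the Manager UI)"
fi
```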

Constant error
VAELoaderKJ
ERROR: VAE is invalid: None

If the VAE is from a checkpoint loader node your checkpoint does not contain a valid VAE.

Update your KJNodes from git.
This is how I solved it.

5060 Ti, 16 GB VRAM, 64 GB RAM, 20 s video, 1600x896.
It works very well, but it still needs more testing. I couldn't get it up to 1920x1088; the process froze on upscale.

got prompt
Model LTXAVTEModel_ prepared for dynamic VRAM loading. 11200MB Staged. 0 patches attached. Force pre-loaded 290 weights: 1497 KB.
Generating tokens: 54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 139/256 [00:59<00:49, 2.35it/s]
Model LTXAVTEModel_ prepared for dynamic VRAM loading. 11200MB Staged. 0 patches attached. Force pre-loaded 290 weights: 1497 KB.
0 models unloaded.
Model LTXAV prepared for dynamic VRAM loading. 22364MB Staged. 0 patches attached.
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 8/8 [02:30<00:00, 18.81s/it]
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
0 models unloaded.
Model LTXAV prepared for dynamic VRAM loading. 22364MB Staged. 0 patches attached.
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [05:07<00:00, 102.40s/it]
Requested to load AudioVAE
loaded completely; 13722.38 MB usable, 693.46 MB loaded, full load: True
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 00:10:45

It works very well, but it still needs more testing. I couldn't get it up to 1920x1088; the process froze on upscale.

Single pass, or with a 2nd-pass sampler? It seems to be a little more forgiving on VRAM if you run the first pass at half size, then upscale for the 2nd pass (at least on my end).

Ah, never mind. Looks like it was a 2nd pass, judging from your copy-pasted log ;-)
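The half-size first pass helps because sampler activation memory scales roughly with the number of pixels being denoised. A rough back-of-the-envelope sketch (illustration only; real VRAM use also depends on frame count, attention implementation, and offloading):

```python
# Rough illustration: a half-resolution first pass touches ~4x fewer pixels
# than sampling at full resolution, so peak sampler VRAM drops accordingly.

def pass_pixels(width, height, first_pass_scale=0.5):
    """Pixel counts for a half-size first pass vs. the full-size second pass."""
    first = int(width * first_pass_scale) * int(height * first_pass_scale)
    second = width * height
    return first, second

first, second = pass_pixels(1920, 1088)
print(first, second, second / first)  # 522240 2088960 4.0
```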

Uploaded one to test if you need one: https://huggingface.co/RuneXX/LTX-2.3-Workflows/

Very nice wf, thx!

@BahamutRU
Most welcome ;-)

@MattHVisual
nice one ;-) definitely a bump up in quality with LTX-2.3

I'm so envious lol, need to wait till GGUF is available; my poor machine can't handle FP8. LOL outside, CRY HARD inside; max at Q6. But it's indeed so beautiful!!

@agus2312
Give it a try ;-)

GGUF main model: https://huggingface.co/QuantStack/LTX-2.3-GGUF
The other parts (text encoder, VAE, etc.) from: https://huggingface.co/Kijai/LTX2.3_comfy

Workflow if you need: https://huggingface.co/RuneXX/LTX-2.3-Workflows (the GGUF one)

Never had such issues with a VAE before. Size mismatch no matter what I try, on both GGUF and safetensors models. KJNodes is on the latest update.

Look closely at the image on the front page: https://huggingface.co/Kijai/LTX2.3_comfy
It should work if you have that setup.

Not a problem with the node setup. The VAE encoder throws a size mismatch:

size mismatch for decoder.up_blocks.7.conv.conv.weight: copying a param with shape torch.Size([512, 256, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3, 3])
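That particular mismatch (256 vs. 128 input channels on decoder.up_blocks.7) usually means the VAE file and the model definition don't match, e.g. an older LTX VAE loaded where the 2.3 one is expected, or vice versa. One way to diagnose it is to compare shapes before loading; a sketch with plain shape tuples standing in for the tensors:

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return (key, checkpoint_shape, model_shape) for every key whose
    shapes disagree between the instantiated model and the checkpoint."""
    return [
        (key, ckpt_shapes[key], model_shapes[key])
        for key in model_shapes
        if key in ckpt_shapes and ckpt_shapes[key] != model_shapes[key]
    ]

# Toy data mirroring the error above: the checkpoint carries 256 input
# channels where the current model definition expects 128.
model = {"decoder.up_blocks.7.conv.conv.weight": (512, 128, 3, 3, 3)}
ckpt  = {"decoder.up_blocks.7.conv.conv.weight": (512, 256, 3, 3, 3)}
print(find_shape_mismatches(model, ckpt))
```

If this reports mismatches, swap in the VAE file the model card points at rather than forcing the load.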

I've always called them "GUFF" models. Nice voice clarity and camera focus!

Yeah, I for sure don't say Gee-Gee-You-Eff ;-) "Guff" here too, hehe.
