22nd January - Tiny VAE and live previews test workflow

#41
by RuneXX - opened

[video: bandicam 2026-01-22 17-08-49-071]

If you want to test out live sampler previews ;-)
I2V and T2V: https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/LTX-2%20-%20I2V%20and%20T2V%20(beta%20test%20sampler%20previews).json
(just a test workflow; it also has NAG and performance-tweak nodes to test ;-)

You need to download the Tiny VAE from here:
https://huggingface.co/Kijai/LTXV2_comfy/tree/main/VAE
and put it in the comfyui/models/vae_approx folder

And you need an up-to-date ComfyUI as well as KJNodes

NB - support for the LTX Tiny VAE is not yet out in the desktop version, so be patient ;-)

Was that video taken from the preview? 🤔 It still looks good

First pass successful but errors out on the second stage at the SamplerCustomAdvanced:

File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-kjnodes\nodes\ltxv_nodes.py", line 782, in callback
    x0 = vae.first_stage_model.per_channel_statistics.un_normalize(x0)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1964, in __getattr__
    raise AttributeError(
AttributeError: 'TAEHV' object has no attribute 'per_channel_statistics'

Finally tracked it down to the extra "latent upscale model" piping on the "LTX2 Sampling Preview Override" node:
[image]
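For reference, the failure is just a plain Python attribute lookup: the tiny TAEHV preview VAE doesn't carry the per_channel_statistics module that the full LTX VAE has. A minimal illustration of the failure mode (the class here is a stand-in, not the real KJNodes code):

```python
# Minimal reproduction of the failure mode: the tiny preview VAE (TAEHV)
# has no per_channel_statistics attribute, unlike the full LTX VAE.
class TAEHV:  # stand-in for the tiny VAE class; the real one lives in KJNodes
    pass

vae = TAEHV()

try:
    vae.per_channel_statistics  # what the callback effectively does
except AttributeError as e:
    print(e)  # 'TAEHV' object has no attribute 'per_channel_statistics'

# A defensive lookup would skip the un-normalize step instead of crashing:
stats = getattr(vae, "per_channel_statistics", None)
print(stats is None)  # True for the tiny VAE
```

Disconnecting the upscale model input avoids the code path entirely, which is why the workaround above works.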

Was that video taken from the preview? 🤔 It still looks good

No, it's the final render, but the preview is watchable on the second pass, so you know whether to abort or not :)
(and previews have no sound)

Yeah, that's pretty much the use case for previews: to see if it's worth the wait, or to start over ;-)

Btw, does enabling the preview cause a noticeable increase in VRAM usage?

A little bit, but it seems to have a small footprint

If you want to test out live sampler previews ;-)
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/LTX-2%20-%20I2V%20Basic%20(beta%20test%20sampler%20previews).json
(just a quick test workflow; it also has NAG etc., so I'm still testing some things ;-)

I still can't get the live preview to show up using your workflow.
With other workflows like Qwen, Flux, and SDXL the live preview works, so it's not ComfyUI.
I believe it's related to the LTXV2 nodes. Any idea how to troubleshoot this?

You need to update ComfyUI, and more importantly KJNodes, to the very latest version.
This feature was added recently.

Also, on the latest ComfyUI, the preview-method setting in Manager is ignored. You will see this in the ComfyUI logs:

Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.

There is no live preview support for LTX2 in core ComfyUI yet; it's a bit complicated because of the audio.

The sampling preview override node used here overrides every preview setting and gives live previews with LTX2, as long as KJNodes is up to date and ComfyUI itself is updated to support loading the Tiny VAE.
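Conceptually, the override works by hooking the sampler's step callback and running the current latent through the cheap approximate VAE. A rough sketch of that idea (the names and structure here are my own, not KJNodes' actual implementation):

```python
# Rough sketch of step-callback previews (hypothetical names, NOT the real
# KJNodes code): every few steps, decode the denoised latent estimate with a
# cheap approximate VAE and hand the frame to the UI.
def make_preview_callback(tiny_decode, show, every_n=5):
    def callback(step, x0):
        # x0 is the sampler's current estimate of the clean latent
        if step % every_n == 0:
            show(tiny_decode(x0))  # approximate decode is cheap enough per step
    return callback

# Usage with dummy stand-ins for the decoder and the UI:
frames = []
cb = make_preview_callback(tiny_decode=lambda x: x * 2, show=frames.append, every_n=2)
for step in range(6):
    cb(step, step + 1.0)
print(frames)  # [2.0, 6.0, 10.0]
```

This is also why the Tiny VAE is required: decoding with the full VAE at every step would be far too slow for a live preview.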

Also, in the workflow provided by OP, the total frame count does not follow the "divisible by 8, plus 1" rule.

Simple math node works with: (round(a * b) // 8) * 8 + 1
or the Kijai simple calculator that's being used works with: floor( (a * b) / 8 ) * 8 + 1
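For concreteness, both expressions land on a divisible-by-8-plus-1 frame count; a quick check (assuming a is the length and b the frame rate, as I read the workflow):

```python
import math

# The two frame-count expressions from the thread (a = length, b = fps):
def frames_simple_math(a, b):
    return (round(a * b) // 8) * 8 + 1

def frames_kijai_calc(a, b):
    return math.floor((a * b) / 8) * 8 + 1

# Both satisfy the "divisible by 8, plus 1" rule:
for a, b in [(5, 24), (7.3, 24), (10, 30)]:
    n1, n2 = frames_simple_math(a, b), frames_kijai_calc(a, b)
    assert (n1 - 1) % 8 == 0 and (n2 - 1) % 8 == 0

print(frames_simple_math(5, 24), frames_kijai_calc(5, 24))  # 121 121
```

The two only differ in edge cases where round() and floor() disagree on the raw product; for typical length/fps combinations they give the same count.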

Made it in a hurry, so it might be ;-) I'll edit it

Any idea how to debug? Everything is up to date, software-wise. I even did a fresh install of ComfyUI portable. The only custom nodes are comfyui-kjnodes and comfyui-manager.
No errors and no problems running @RuneXX's workflow. It's just that there is no live preview on that SamplerCustom. Live preview on KSampler works, so it's not ComfyUI's preview options.
By the way, I'm still on my crappy 11GB 2080 Ti.

Just update ComfyUI and KJNodes and you'll be fine ;-)
Either via ComfyUI Manager, or a git pull in the folder

I'm sure you have, but just check that you've added the needed KJ custom node in the model path before the sampler (don't pipe the upscale model; leave the input empty):

[image]

[image]
I get this error with the Tiny VAE loader. I updated KJNodes (latest, not nightly) and ComfyUI (desktop version).

I had that error when Kijai's second-stage "LTX2 Sampling Preview Override" node had the latent upscale model input piped in. Leave it blank, like above.
My error was at the custom sampler stage, but TAEHV was the cause, and it went away when I disconnected the upscale model.

The desktop version lags behind the main version, so it's just not available there yet.

Added a couple of "experimental" nodes that can help performance, since this is a bit of a "test new features" workflow ;-)

  • LTX2 Mem Eff Sage Attention Patch: needs the latest version of Sage Attention
  • LTX2 Attention Tuner Patch
  • LTXV Chunk FeedForward

All are from KJNodes (and, as always, update to have the latest). Click the (?) at the top right corner of a node for a short description of what it does.

(All are disabled by default, but they're there if you want to test and experiment.)

https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/LTX-2%20-%20I2V%20and%20T2V%20(beta%20test%20sampler%20previews).json

I got it working at last. I needed to install the custom node comfyui-videohelpersuite to get the live preview on that SamplerCustom working.

A short report for my 5060 Ti 16GB, torch 2.9.1+cu130 config:

[LTXV Chunk FeedForward, LTX2 Attention Tuner Patch]
both seem to slow down sampling by 20%.

[LTX2 Mem Eff Sage Attention Patch]
has a slight performance boost effect.

[Model Memory Usage Factor Override]
this is a lifesaver! I'm tired of patching Comfy files for memory_usage_factor after each nightly update. :)
I experimentally found that the optimal value (to use ~15.4GB) for stage 1 sampling is 0.052,
and for stage 2 sampling 0.057.

Model Memory Usage Factor Override... haven't tried that one ;-) I'll give it a try as well

Does a different VRAM size need a different memory_usage_factor value? 🤔

I saw on GitHub that someone uses a formula to set it, so I wondered whether a formula can work better across various conditions than a fixed value. 🤔

Edit: the link to the github comment https://github.com/Comfy-Org/ComfyUI/issues/11726#issuecomment-3734410242

Generally no; it mostly only varies between different overall Comfy setups. For example, if your monitor is on the same GPU you run the model on, memory use may fluctuate simply based on what you are doing on that monitor, especially in Windows. Even just using ComfyUI itself through the browser can affect this, and currently it can't be caught "live", so to speak.

The --reserve-vram startup argument achieves the same thing, though it's clumsier to use since you have to restart.

The purpose of this node is more about being able to lower the value: the memory optimization nodes reduce the need for offloading, and otherwise you can easily end up not utilizing all your VRAM with them.
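As a toy illustration of why lowering the factor helps (entirely made-up numbers and logic, not ComfyUI's real memory accounting): a scheduler keeps weights on the GPU only while the estimated footprint fits, so an inflated estimate forces offloading even when real VRAM is still free.

```python
# Toy model of offload scheduling (hypothetical, NOT ComfyUI's actual code):
# each layer stays on-GPU while (layer size * usage factor) still fits the
# remaining VRAM budget; the rest gets offloaded to system RAM.
def layers_kept_on_gpu(free_vram_gb, layer_gb, n_layers, usage_factor):
    budget = free_vram_gb
    kept = 0
    for _ in range(n_layers):
        estimated = layer_gb * usage_factor
        if estimated <= budget:
            budget -= estimated
            kept += 1
    return kept

# Same hardware, different factor: a smaller factor keeps more layers resident.
print(layers_kept_on_gpu(15.4, 0.5, 48, 1.0))  # 30
print(layers_kept_on_gpu(15.4, 0.5, 48, 0.5))  # 48
```

In this toy picture, an overestimating factor leaves VRAM idle while layers sit in system RAM; an underestimating one would admit more layers than actually fit, which is the OOM case.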

Also, for everyone using the chunk FFN: it does affect speed; more chunks is slower. Initially 4 chunks were needed to bring the FFN VRAM peak below the attention peak; with the most recent optimizations only 2 chunks are necessary, and in some cases the node isn't needed at all. The other nodes should not have any negative effects on speed.

The problem is that the --reserve-vram CLI switch affects all workflows globally. It's much more convenient to use the Model Memory Usage Factor Override for specific LTX-2 workflows.
With 16GB of VRAM, I managed to get 18 seconds at 1536x864 (30fps) or over 25 seconds at 1280x704 (24fps) while maintaining performance. (I don't see the point in higher resolutions because of diminishing returns above 720p.)
Without the memory usage factor setting, I either get OOM errors or my VRAM is under-utilized, resulting in much slower performance.

The other nodes should not have any negative effects on speed.

By the way, regarding the LTX2 Attention Tuner Patch node: does adjusting video_to_audio_scale make any sense if I don't need audio generation? Would it improve video quality? I noticed that if I just enable the node with default settings (blocks: undefined, 1.0, 1.0, 1.0, 1.0), sampling slows down.
