The official LTX-2.3_T2V_I2V_Single_Stage_Distilled_Full.json workflow
Hello everyone, I am trying the official LTX-2.3_T2V_I2V_Single_Stage_Distilled_Full.json workflow, swapping in Kijai's LTX-2.3-22B-distilled_transformer_only_fp8_scaled and the matching models and loader nodes. Both routes run: the distilled branch takes about 3-4 minutes and the full branch about 20+ minutes.
The video it produces plays fine, and the movement, expressions, and lip sync are all good, but it's not the result I wanted: it changed the characters and background from my reference image.
So what parameters would you suggest adjusting? The LoRA strength? The LTXV Img To Video Condition strength? I don't know much about the LTX model's mechanics or the role of LTX-2.3-22B-Distilled-Lora-384; maybe combining these is wrong?
LTX-2.3_T2V_I2V_Single_Stage
So I guess this workflow from LTX is a bit like mine: it has both I2V and T2V in one and the same workflow.
So if your reference image has no impact, check whether you have T2V mode turned on. There's probably a switch somewhere.
The switch is connected to the LTX ImageInPlace node and turns I2V mode on/off (setting it to true switches it to T2V, i.e. it bypasses the image input).
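The effect of that switch can be pictured with a tiny sketch. The function and parameter names here are purely illustrative (this is not the actual ComfyUI node API), but it shows why a reference image silently does nothing when the toggle is set the wrong way:

```python
# Hypothetical sketch of the T2V/I2V switch behavior described above.
# Names are illustrative, not real ComfyUI/LTX identifiers.
def select_conditioning(t2v_mode: bool, reference_image):
    """Return the image conditioning the sampler would see.

    If t2v_mode is True, the image input is bypassed and the run is
    pure text-to-video, so the reference image has no effect at all.
    """
    if t2v_mode:
        return None  # T2V: image input bypassed
    return reference_image  # I2V: image conditions the first frame

# With the switch accidentally left on, any reference image is dropped:
print(select_conditioning(True, "my_reference.png"))   # None
print(select_conditioning(False, "my_reference.png"))  # my_reference.png
```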
The role of the distilled LoRA is to let you use the DEV model plus the LoRA at low step counts, as if it were a distilled model itself.
But if you are on low VRAM, it's beneficial to use the distilled model instead. It works great. According to LTX it gives 80-90% of the quality of running the DEV model with full steps (and that takes "forever" to run).
So, assuming you are on low VRAM (or RAM), since you had problems with IC-LoRA:
- stick to the distilled model
- no distilled LoRA (disable that node)
- 5 seconds (or 10)
- try lower resolutions if you want faster runs, at least as a first pass, to test whether your idea works etc.
Good resolutions on lower VRAM: 960x544, 832x480, 768x512 (you can even try 640x480; I haven't tried that one, but I bet it works OK).
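All of the resolutions listed above share one property: both dimensions are divisible by 32, which is a common latent-space size constraint for video diffusion models (check the model card for the exact requirement). A small helper like this, a sketch under that assumption, can snap an arbitrary target size to a safe value:

```python
# Assumes dimensions must be divisible by 32 (common for latent video
# models; verify against the LTX model card before relying on it).
def snap_to_multiple(value: int, multiple: int = 32) -> int:
    """Round a dimension down to the nearest multiple of `multiple`."""
    return (value // multiple) * multiple

# All the suggested low-VRAM resolutions already pass the check:
for w, h in [(960, 544), (832, 480), (768, 512), (640, 480)]:
    assert w % 32 == 0 and h % 32 == 0

print(snap_to_multiple(1000))  # 992
```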
I have some single pass workflows here too: https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main
You can also try GGUF models; the smaller-size ones can help a bit if you run out of RAM/VRAM.
For example, try LTX GGUF Q4 (it's pretty good quality) and Gemma Q2.
Those files are A LOT smaller to load.
And also try the 2-pass workflows (not single stage). They can be a lot faster if your PC allows.
They render the video at half size in the 1st pass, then upscale it back to the size you wanted in the 2nd pass.
Since the first pass is half size, it's both a lot easier on the computer and can be faster...
But it all depends on whether you run low on RAM or VRAM.
But worth a try ;-)
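The two-pass sizing idea can be sketched in a few lines. This is a rough illustration, not the actual workflow logic, and it assumes the same divisible-by-32 constraint mentioned for the resolution suggestions:

```python
# Rough sketch of two-pass sizing: render at half resolution first,
# then upscale to the target in the second pass. The snap-to-32 rule
# is an assumption; check the model's actual size constraint.
def first_pass_size(target_w: int, target_h: int, multiple: int = 32):
    """Half the target size, rounded down to the model's size multiple."""
    def half(v: int) -> int:
        return max(multiple, (v // 2 // multiple) * multiple)
    return half(target_w), half(target_h)

# A 960x544 target would render its first pass at roughly:
print(first_pass_size(960, 544))  # (480, 256)
```

The first pass processes roughly a quarter of the pixels, which is why it is so much lighter on RAM/VRAM; the second pass only has to refine an upscaled result.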
Hi RuneXX, you got it right: I made a basic mistake and didn't notice the switch.
I looked at your LTX-2.3 single-stage workflow, which is very similar to your LTX-2 workflow; I have used that one and it's perfect. I also used your basic two-stage workflow, which is also perfect.
So I tested the official single-stage workflow. It has two branches: one is the distilled branch, the same as your design; the other is called the full branch, which feels a bit more complicated, with more parameters and a longer generation time, and I still can't fully understand those parameters. I think that branch is designed for the DEV model. I swapped in the distilled model, disabled the LoRA, and ran the full branch to see the effect; the wait is indeed relatively long.
However, having tested a number of workflows, I feel the final result comes down to the model's own capability + hardware capability + the prompt. Adjusting those other parameters often has little impact, as long as there is an official or community-provided set of baseline parameter settings to start from.
Thank you very much for sharing and for the answer!

