The correct way to use Kijai's LTX-2 GGUF?
I don't know how to properly use Kijai's LTX-2 GGUF series models; my guess is the process goes like this:
- Download the model from this repository. But do I also need a matching GGUF for the Gemma 3 model? Where do I download it, and are there any other models I need?
- Download the latest ComfyUI-KJNodes.
- Use the official LTX example workflows from ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LTXVideo\example_workflows.
- Replace the model-loading nodes with the corresponding nodes from ComfyUI-KJNodes.
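Before troubleshooting the workflow itself, it's worth confirming the files landed in the right folders. A minimal sketch, assuming the standard ComfyUI portable layout (GGUF diffusion models in models/diffusion_models, the Gemma text encoder in models/text_encoders, the VAE in models/vae; the glob patterns are placeholders, not exact filenames):

```python
from pathlib import Path

# Standard ComfyUI model sub-folders (portable install layout) and a
# loose filename pattern for the file each one should contain.
EXPECTED_DIRS = {
    "diffusion_models": "*.gguf",    # the LTX-2 GGUF checkpoint
    "text_encoders": "*gemma*",      # Gemma 3 text encoder (safetensors or GGUF)
    "vae": "*.safetensors",          # the LTX-2 VAE
}

def missing_model_dirs(comfy_root: str) -> list[str]:
    """Return the sub-folders that are absent or contain no matching model file."""
    root = Path(comfy_root) / "models"
    missing = []
    for sub, pattern in EXPECTED_DIRS.items():
        folder = root / sub
        if not folder.is_dir() or not any(folder.glob(pattern)):
            missing.append(sub)
    return missing

if __name__ == "__main__":
    import sys
    gaps = missing_model_dirs(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("missing:", gaps if gaps else "none")
```

Run it with the path to your ComfyUI folder; anything it reports as missing is a file you still need to download or move.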
Is it possible to run smoothly this way? I tried, but I kept running into all sorts of problems. Am I missing any key points? And how did you get started?
My system specifications:
RTX 5070 12GB
Python 3.10
torch 2.9.1+cu130
ComfyUI 0.15.1
Yeah, that sounds about right ;-) It should work fine.
Basically you can use any workflow and replace the model loaders with those mentioned on the front page here: https://huggingface.co/Kijai/LTXV2_comfy
Replace the loaders as shown in the images above, either in the LTX-Video workflows you mentioned or in the default LTX-2 ones bundled with ComfyUI.
(And if the sub-graphs confuse you, you can always try mine, made to be simple: https://huggingface.co/RuneXX/LTX-2-Workflows)
You don't need a matching Gemma (you can use the ComfyUI Gemma models): https://huggingface.co/Comfy-Org/ltx-2/tree/main/split_files/text_encoders
(Or, if you want to use GGUF: https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/. Both work, but for GGUF you need the Dual CLIP GGUF node.)
> I did, but there were always all sorts of problems. Am I missing any key points? And how did you start?
What problems did you run into?
Thanks a lot, your screenshots are very clear, especially regarding the two connections of the embeddings_connector. I had always suspected I was doing something wrong, but it seems there's no problem now.
BTW, your workflow looks very effective; I'm going to start trying it out.

