The workflows are based on the extracted models from https://huggingface.co/Kijai/LTX2.3_comfy. The extracted models may run easier on your machine since they come as separate files, but you can easily swap the model loader for the ComfyUI default model loader if you want to load an "all in one" checkpoint with the VAE built in, etc.

LTX-2.3 Main Model Downloads (split models):

Gemma - either safetensor or GGUF:

  1. Gemma 3 12B it safetensor: https://huggingface.co/Comfy-Org/ltx-2/

  2. Gemma 3 12B it GGUF: https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/

LTX-2.3 GGUF models (for the GGUF workflows) - pick one of the sources below (a download sketch follows the list):

  1. QuantStack: https://huggingface.co/QuantStack/LTX-2.3-GGUF
  2. Unsloth: https://huggingface.co/unsloth/LTX-2.3-GGUF
  3. Vantage: https://huggingface.co/vantagewithai/LTX-2.3-GGUF
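
All of the above are plain Hugging Face repos, so the files can also be fetched from a script. Below is a minimal download sketch using huggingface_hub; the quant choice ("Q4_K_M"), the example source repos, and the ComfyUI folder names are assumptions, so adjust them to your install:

```python
# Minimal download sketch (pip install huggingface_hub).
# Quant choice and target folders below are assumptions -- adjust to your setup.
from pathlib import Path
from huggingface_hub import hf_hub_download, list_repo_files

COMFY = Path("ComfyUI/models")  # assumption: path to your ComfyUI models folder

def grab_gguf(repo_id: str, quant: str, target: Path) -> Path:
    """Download the first .gguf file in repo_id whose name contains `quant`."""
    files = [f for f in list_repo_files(repo_id)
             if f.endswith(".gguf") and quant in f]
    if not files:
        raise FileNotFoundError(f"no {quant} .gguf found in {repo_id}")
    return Path(hf_hub_download(repo_id, files[0], local_dir=target))

# Gemma 3 12B it text encoder (GGUF build from unsloth)
grab_gguf("unsloth/gemma-3-12b-it-GGUF", "Q4_K_M", COMFY / "text_encoders")
# LTX-2.3 diffusion model (QuantStack picked here as an example source)
grab_gguf("QuantStack/LTX-2.3-GGUF", "Q4_K_M", COMFY / "diffusion_models")
```

If you would rather mirror a whole repo, huggingface_hub's snapshot_download with allow_patterns works just as well.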

Tiny VAE (for sampler previews): https://github.com/madebyollin/taehv/blob/main/safetensors/taeltx2_3.safetensors
(Optional but recommended. Without this VAE you still get previews via latent2rgb from KJNodes, just at a lower resolution.)
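
If you want to script this one too, here is a sketch that drops the file into ComfyUI's preview-decoder folder. The vae_approx destination matches where ComfyUI keeps TAESD-style preview models (verify it against your install), and swapping /blob/ for /raw/ in a GitHub URL serves the raw file:

```python
# Fetch the Tiny VAE into ComfyUI's preview-decoder folder.
# Destination folder is an assumption -- verify against your install.
import urllib.request
from pathlib import Path

url = ("https://github.com/madebyollin/taehv/raw/main/"
       "safetensors/taeltx2_3.safetensors")  # /raw/ instead of /blob/
dest = Path("ComfyUI/models/vae_approx/taeltx2_3.safetensors")
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, str(dest))
```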


Needed nodes:

ComfyUI-KJNodes (for the latent previews): https://github.com/kijai/ComfyUI-KJNodes
ComfyUI-GGUF (for the GGUF model loaders): https://github.com/city96/ComfyUI-GGUF


LTX-2.3

Lightricks LTX-2.3 main repo: https://huggingface.co/Lightricks/LTX-2.3
Lightricks LTX-2.3 collection (LoRAs etc): https://huggingface.co/collections/Lightricks/ltx-23


More workflows:

ComfyUI Official Workflows: https://blog.comfy.org/p/ltx-23-day-0-supporte-in-comfyui

LTX-2 Video Official Workflows: https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows/2.3
