New Sample Workflow!!
The GGUFs for LTX-2 require a few extra components to be loaded, since the GGUFs don't have the VAEs and embedding connectors packaged with the transformer model.
You also need to install two custom node packages:
https://github.com/city96/ComfyUI-GGUF
https://github.com/kijai/ComfyUI-KJNodes
Navigate to your ComfyUI model folder and run the following to download all the model weights:
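Before running the symlink commands, make sure the target subfolders exist, since `ln -s` fails with "No such file or directory" otherwise (a one-time, idempotent setup):

```shell
# Create the model subfolders the symlinks below point into.
# Run from inside your ComfyUI models folder.
mkdir -p unet vae text_encoders loras latent_upscale_models
```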
# Can try any quant type
ln -s "$(hf download unsloth/LTX-2-GGUF ltx-2-19b-dev-UD-Q2_K_XL.gguf --quiet)" unet/ltx-2-19b-dev-UD-Q2_K_XL.gguf
ln -s "$(hf download unsloth/LTX-2-GGUF vae/ltx-2-19b-dev_audio_vae.safetensors --quiet)" vae/ltx-2-19b-dev_audio_vae.safetensors
ln -s "$(hf download unsloth/LTX-2-GGUF vae/ltx-2-19b-dev_video_vae.safetensors --quiet)" vae/ltx-2-19b-dev_video_vae.safetensors
# Can try any quant type
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF gemma-3-12b-it-qat-UD-Q4_K_XL.gguf --quiet)" text_encoders/gemma-3-12b-it-qat-UD-Q4_K_XL.gguf
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF mmproj-BF16.gguf --quiet)" text_encoders/gemma-3-12b-it-qat-mmproj-BF16.gguf
ln -s "$(hf download unsloth/LTX-2-GGUF text_encoders/ltx-2-19b-dev_embeddings_connectors.safetensors --quiet)" text_encoders/ltx-2-19b-dev_embeddings_connectors.safetensors
ln -s "$(hf download Lightricks/LTX-2 ltx-2-19b-distilled-lora-384.safetensors --quiet)" loras/ltx-2-19b-distilled-lora-384.safetensors
ln -s "$(hf download Lightricks/LTX-2 ltx-2-spatial-upscaler-x2-1.0.safetensors --quiet)" latent_upscale_models/ltx-2-spatial-upscaler-x2-1.0.safetensors
# Optional
ln -s "$(hf download Lightricks/LTX-2-19b-LoRA-Camera-Control-Dolly-Left ltx-2-19b-lora-camera-control-dolly-left.safetensors --quiet)" loras/ltx-2-19b-lora-camera-control-dolly-left.safetensors
Then download the example video unsloth_best.mp4 from the repo. This video was created with the weights above and has the exact workflow embedded inside it. You can open it directly in ComfyUI and hit Run to recreate it.
Cool! Thanks for the workflow and the UD Quants!
By the way, how good is the UD_Q2_K_XL quant compared to the Q6 and Q8 quants? Did you make any comparisons?
Ok, I have just tried it on a Mac Studio. I compared BF16 with UD_Q2_K_XL and found no difference at all. The only difference I noticed on the Mac Studio M3 Ultra is that BF16 is faster (a usual thing on Macs because of the Apple Silicon architecture).
Your quants are truly amazing
Usually we always recommend bigger. Our 2-bit ones are dynamic which makes them more powerful than normal 2-bit though. You should read: https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs
We don't have official benchmarks, but they're very competitive with higher quants.
I used UD_Q4_K_S, and when running the negative CLIP it says: RuntimeError: mat1 and mat2 shapes cannot be multiplied (24x3840 and 1920x4096)
I am using gemma_3_12B_fp4_mixed with the ltx-2-19B-dev_embedding_connector from this repo.
When I try the same UD_Q4_K_S with the LTX-2 i2v template's "LTXV Audio Text Encoder Loader", it at least finishes, but the video is broken.
Try running the workflow as is. If you're still getting errors, you probably need to update ComfyUI, ComfyUI-GGUF, or KJNodes.
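For context, that RuntimeError is PyTorch's generic matmul shape check: the last dimension of mat1 must equal the first dimension of mat2, and 3840 ≠ 1920 usually means the text encoder's output width doesn't match what the embedding connector expects, i.e. mismatched model or node versions rather than a corrupted download. A minimal reproduction (the 1920/4096 interpretation is an assumption based on the error message):

```python
import torch

# The projection expects 1920-dim inputs but receives 3840-dim
# embeddings, so the matmul shape check fails.
emb = torch.zeros(24, 3840)       # encoder output: 24 tokens, 3840-dim
weight = torch.zeros(1920, 4096)  # projection expecting 1920-dim input
try:
    emb @ weight
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (24x3840 and 1920x4096)
```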
Hi friend, I'm new. Do you have any workflow for image to video and 6 frames to video like in WAN? Or how could I do it?
https://civitai.com/models/2295882/ltx2-basic-gguf-720p-workflow