What model should I use?
#30 opened by Amr4G
I have a 5060 Ti with 16 GB VRAM and 32 GB RAM. What model should I use?
Try `ltx-2.3-22b-distilled_transformer_only_fp8_scaled.safetensors`:
https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/diffusion_models
Since you're on a 5xxx-series card, you can also try the input_scaled versions, which might improve speed.
Combine it with an fp8-scaled Gemma text encoder:
https://huggingface.co/Comfy-Org/ltx-2/tree/main/split_files/text_encoders
It might be more than your computer can handle, but with ComfyUI offloading etc., it might work ;-)
Alternatively, you can use GGUF models for both LTX and Gemma. Try something like Q4 for the LTX model and Q2 for Gemma. Even the GGUF quants of LTX are quite large, but you can get a very small Gemma, which might help.
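As a rough back-of-envelope check of why those quant levels matter for 16 GB of VRAM, you can estimate the size of the weights alone as parameter count × bits per weight. This is a sketch that ignores activation memory, the text encoder, and the per-block overhead real GGUF files carry, so treat the numbers as lower bounds:

```python
def weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in GB.

    Ignores activation memory and quantization metadata overhead.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# The 22B LTX transformer at different precisions:
print(weight_size_gb(22, 8))  # fp8 -> 22.0 GB (won't fit in 16 GB without offloading)
print(weight_size_gb(22, 4))  # ~Q4 -> 11.0 GB (leaves headroom for latents etc.)
print(weight_size_gb(22, 2))  # ~Q2 ->  5.5 GB
```

This is why the fp8 model only works here with ComfyUI offloading part of it to system RAM, while a Q4 quant can sit entirely in VRAM.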