Does LTX-2.3 have lora_resized_dynamic?

#9
by lanranjun - opened

When using LTX-2, I used LTX-2-19b-distilled_lora_resized_dynamics_fro09_avg_rank_175-bf16. I would like to know whether there is a 2.3 version of this LoRA, or whether it can be used universally. This LoRA works well for me, and the file is also relatively small; the new 384 LoRA is over 7 GB, and VRAM is tight.

Not yet, but maybe it will come ;-)

1. ltx-2-19b-dev-fp8_transformer_only.safetensors — 21.55 GB
2. ltx-2-19b-distilled-lora_resized_dynamic_fro09_avg_rank_175_bf16.safetensors — 3.58 GB
3. ltx-2-19b-embeddings_connector_distill_bf16.safetensors — 2.86 GB
4. gemma_3_12B_it_fp8_e4m3fn.safetensors — 13.2 GB
5. LTX2_video_vae_bf16.safetensors — 2.45 GB
6. LTX2_audio_vae_bf16.safetensors — 218 MB
7. ltx-2-spatial-upscaler-x2-1.0.safetensors — 996 MB
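For reference, summing the listed file sizes gives a quick back-of-the-envelope disk total for the combo (actual VRAM use differs, since it depends on offloading and activations):

```python
# Disk sizes copied from the list above, in GB.
sizes_gb = {
    "ltx-2-19b-dev-fp8_transformer_only": 21.55,
    "ltx-2-19b-distilled-lora_resized_dynamic_fro09_avg_rank_175_bf16": 3.58,
    "ltx-2-19b-embeddings_connector_distill_bf16": 2.86,
    "gemma_3_12B_it_fp8_e4m3fn": 13.2,
    "LTX2_video_vae_bf16": 2.45,
    "LTX2_audio_vae_bf16": 0.218,
    "ltx-2-spatial-upscaler-x2-1.0": 0.996,
}
total_gb = sum(sizes_gb.values())
print(f"total: {total_gb:.2f} GB")  # -> total: 44.85 GB
```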

https://space.bilibili.com/60182580?spm_id_from=333.1007.0.0

This is my current combination, and the T2V and I2V results are very good, so I hope it can still be used with 2.3.

Yes, you can still use that combo upgraded to LTX-2.3, except for the distilled LoRA; only the full official distilled 383 LoRA is available right now.

(The embedding handling is a tiny bit different: it's now a text projector for the CLIP loader only.)

I just tested using ltx-2.3-22b-distilled_transformer_only_fp8 directly, and it works well without the distilled 384.

Yes, I was going to suggest that. The distilled model is really good ;-) Quality is quite a bit better with LTX-2.3.

But maybe Kijai will also make a smaller version of the new distilled LoRA ;-)

ltx-2.3-22b-distilled_transformer_only_fp8 directly + distilled 384:
In addition, I found that the quality of T2V videos from this combination is also good, but there is much more character motion.

https://huggingface.co/drbaph/LTX-2.3-FP8/tree/main/LoRA
Has it been released?

Those might work ;-)

ltx-2.3-22b-distilled_transformer_only_fp8_input_scaled.safetensors
ltx-2.3-22b-distilled_transformer_only_fp8_scaled.safetensors
What is the difference between these two models?

Input scaled has its activations calibrated a bit. It's somewhat experimental as I never did that before, but initial tests showed signs of it being closer to bf16 than just weight scaling. It also enables fp8 matmuls (fp8_fast), so it's faster on GPUs that support that (roughly 40xx and later NVIDIA).
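As a toy illustration of the idea (a numpy sketch of per-tensor fp8 scaling, not the actual calibration code used for these checkpoints): weight-only scaling calibrates a scale for the weights alone, while input scaling also calibrates one for the activations, which is what lets the matmul itself run in fp8.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude in fp8 e4m3

def per_tensor_scale(x):
    """Calibrated scale mapping the tensor's max magnitude onto the fp8 range."""
    return float(np.max(np.abs(x))) / FP8_E4M3_MAX

def fake_fp8(x, scale):
    """Simulate an fp8 e4m3 round-trip: scale, clip, round mantissa, unscale."""
    s = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    m, e = np.frexp(s)             # s = m * 2**e, with m in [0.5, 1)
    m = np.round(m * 16.0) / 16.0  # keep roughly 3 mantissa bits
    return np.ldexp(m, e) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)            # stand-in "weights"
a = rng.normal(scale=5.0, size=(8, 64)).astype(np.float32)  # stand-in "activations"

# Weight-only scaling: just the weights get a calibrated scale.
w_q = fake_fp8(w, per_tensor_scale(w))

# Input-scaled: the activations get their own calibrated scale too,
# so the matmul itself can run in fp8 (fp8_fast) on hardware that supports it.
a_q = fake_fp8(a, per_tensor_scale(a))
y = a_q @ w_q
```

The rounding step is only a crude stand-in for real e4m3 conversion, but it shows why a separate activation scale matters: without it, the activations would have to stay in a higher precision and the matmul could not use the fp8 path.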

Good find, thank you! Just what I was waiting for.

https://huggingface.co/drbaph/LTX-2.3-FP8/tree/main/LoRA

It doesn't work with the dev fp8 transformer. I tested the rank 159 version.

Yeah, I tried that; it totally breaks the distilled fp8 model: black output, and deformed audio that won't even merge in VHS.
