
These attempt to create a better distilled experience for fine-tunes, and especially for I2V and input-conditioned workflows. As it stands, the rank-384 official LoRA is massive overkill and performs very poorly with conditioned inputs and fine-tunes. Most community usage has moved to dynamically resized LoRAs, but we've only been offered arbitrary sizes that are still overpowered. These versions don't compromise: they simply rerank and chop off layers. Frankly, nothing of value is lost, because those dimensions are clearly stripping and interfering with the output at high strengths.
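The reranking described above can be sketched with a standard SVD truncation of the LoRA delta. This is a minimal illustration, not the exact procedure used for these files; the matrix layout (down: rank × in, up: out × rank) is an assumption about the export format.

```python
import numpy as np

def rerank_lora(lora_down, lora_up, new_rank):
    """Re-rank a LoRA pair to a smaller rank via SVD of the full delta.

    lora_down: (r, in_features), lora_up: (out_features, r).
    Keeps only the top `new_rank` singular directions, which is the
    standard way 'dynamic resize' tools shrink an oversized LoRA.
    """
    delta = lora_up @ lora_down                      # full weight delta, out x in
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    k = min(new_rank, s.size)
    sqrt_s = np.sqrt(s[:k])                          # split scale across both factors
    new_up = u[:, :k] * sqrt_s                       # out x k
    new_down = sqrt_s[:, None] * vt[:k]              # k x in
    return new_down, new_up
```

A "dynamic" resize would pick `k` per layer from the singular-value spectrum (e.g. an energy threshold) up to some ceiling, instead of one fixed rank everywhere.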

Ranks 72 and below can be used safely at 1.0 strength for the first I2V pass. For the upscale pass, use half strength or 0.4; a dedicated upscale version is still in the works.

"_ceil" means that was the dynamic ceiling during rerank.

"_cond_safe" means that the attention layers like cross-attention bridges, adaln/scale-shift tables, gate logits, prompt scale-shift have been zeroed. This is technically what an official I2V distilled lora should have had and been released, the 384 distilled lora also seems to actively dampen the extra conditioning in the workflow. Those cond_safe versions are much better suited to I2V, maybe better all-around.
