What about an fp8 version of the transformer only?
Wait a bit.
Does ltx-2.3_text_projection_bf16.safetensors replace ltx-2-19b-embeddings_connector_distill_bf16 from LTX-2?
So the text encoder needs the text_embedding_projection layers; that file can be used in the Dual CLIP loader like before. The actual connector weights are now baked into the model itself, so they are no longer shipped separately.
Your old workflow works great with 2.3 after the KJNodes update. (I think tiny previews took a hit, though. I was previously using dev with the distilled LoRA, and now the available distilled fp8; I'll try dev with the distilled LoRA again once it's available.) It does work with old LoRAs, although I'm still testing those. Thanks!
Yeah, if the VAE is new, we'll need a new tiny VAE (for previews) as well.