Could you create a workflow for 10Eros v1? That would be great.
I'm not familiar with that model, but I assume it's just LTX merged with some NSFW material, going by the name ;-)
It's probably nothing special beyond that, and should work in any of the workflows.
(But I'll look it up and see if it's something different.)
Ah, so it's based on the dev model then (which makes sense, since the dev model, as the name suggests, is meant for building on).
So yes, it needs the distilled lora for few-step / fast generations.
I looked at his repo, and it looks like he made his own distilled loras, so the screenshot from @PopHorn1956 should work with all the workflows here ;-)
Just load the eros model as the main model, and the distilled lora right below it.
All the workflows here have a main Diffusion Model Loader with a lora loader below it, ready for dev-model use.
(Swap out the model and lora in the screenshot above.)
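For reference, the "main model + distilled lora below it" chain looks roughly like this in ComfyUI's API-format workflow JSON, built here as a Python dict. This is a minimal sketch using the standard UNETLoader and LoraLoaderModelOnly nodes; the file names are placeholders, not the actual eros or lora file names.

```python
# Sketch of the "main model + distilled lora" chain as a ComfyUI
# API-format workflow fragment. File names are placeholders; swap in
# the actual eros transformer and distilled lora files.
workflow = {
    "1": {
        "class_type": "UNETLoader",  # the main Diffusion Model Loader
        "inputs": {
            "unet_name": "eros_model.safetensors",  # placeholder name
            "weight_dtype": "default",
        },
    },
    "2": {
        "class_type": "LoraLoaderModelOnly",  # lora loader below the model
        "inputs": {
            "model": ["1", 0],  # wire in the MODEL output of node 1
            "lora_name": "distilled_lora.safetensors",  # placeholder name
            "strength_model": 1.0,
        },
    },
}

# The lora node's model input points back at the loader node, so the
# distilled lora is applied on top of the main model before sampling.
```

Downstream nodes (sampler, etc.) would then take their model input from node "2" instead of node "1".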
The 384 lora ran very slowly for me... I tested it and skipped it.
Yes, I see the guy who made the other loras claims it's stripped of extra weights, so it might be a faster lora. Will give it a try ;-)
Although I don't use the 384 one myself, despite the screenshot I made. I use Kijai's lora, and it seems to be pretty fast too.
I noticed something on the page of that model.
If you intend to use the model with the workflows here (which are made for Kijai's split models), you need to use the fp8 transformer version, not the "learned" version.
As it's written on the page: Fp8_mixed_learned is a full checkpoint; the FP8 transformer version is for Kijai's split files.
Alternatively, you can change the model loaders in my workflows to checkpoint loaders (for models with the vae and text encoder baked in).
Maybe I'll make a basic I2V and T2V workflow with that type of model loader (full checkpoint), for those who want to use single-checkpoint models instead of the split transformer files.
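For anyone swapping loaders themselves, the checkpoint-loader alternative mentioned above would look roughly like this in API-format workflow JSON. This is a sketch using ComfyUI's standard CheckpointLoaderSimple node, which provides the model, text encoder (CLIP), and VAE from one full-checkpoint file; the file name is a placeholder.

```python
# Sketch of the full-checkpoint alternative: one CheckpointLoaderSimple
# node replaces the separate split-file loaders. File name is a placeholder.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ltx_full_checkpoint.safetensors"},
    },
    # Downstream nodes pull the three outputs of node "1" by index:
    #   ["1", 0] = MODEL, ["1", 1] = CLIP, ["1", 2] = VAE
}

# References a sampler / text-encode / decode node would use:
model_ref, clip_ref, vae_ref = ["1", 0], ["1", 1], ["1", 2]
```

With a full checkpoint you skip the separate text encoder and VAE loaders entirely, which is the main point of that workflow variant.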
The 8-bit model mutilates faces :(
LTX-2.3 - I2V & T2V regular checkpoint workflow
I've uploaded a workflow that can be used with standard full-checkpoint versions of LTX (files from LTX themselves, or models trained/merged from them).
LTX-2.3_-_I2V_T2V_Basic_for_checkpoint_models.json
You can find it here: https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main
(For split models, like those from Kijai, use any of the other workflows.)
Thanks, I'll go test it out.
It's more of a finetune than a merge, and specifically for conditioned inputs: video, image, audio+image. Even though it is technically just a merge, it's merged with 768-rank data. Fp8_mixed_learned is the recommended version.
The main issue with the 384 distill lora is that during testing it had a significant tendency to undermine the new data and styles; it would consistently introduce visual, motion, and figure corruption. The gutted lora I released has many attention and feed-forward affecting layers removed, which lessens that a lot and also allows a higher usage strength. That's for Lightricks to address, imo; they should fix their distill for i2v, or split the versions entirely in the future, because the unified T2V/I2V approach has been mediocre at everything and good at nothing in all attempts at it.
Besides that it's just a checkpoint, but I find that what people expect of it requires additional sampling, plus additions like NAG and STG.
I'll try out that lora with some I2V; it sounds interesting that it might work better for image-to-video. I saw someone say it was faster as well.
The workflows here do have NAG and an easy-to-connect basic scheduler for adding more steps instead of the manual sigmas, but no STG. Might add that ;-)
And I see you also have workflows made specifically for your model version, so those trying out his model might want to check those out.

