Instructions to use TenStrip/LTX2.3-10Eros with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use TenStrip/LTX2.3-10Eros with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "TenStrip/LTX2.3-10Eros", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Image-to-video: condition on the input frame and the text prompt,
# then write the generated frames out as an MP4.
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
- Google Colab
- Kaggle
Recommended settings (#4)
by johiny - opened
Hey, thanks for the great work! I'm doing some testing with anime and getting some really nice results. Do you have any recommended settings? I'm using the distilled CondSafe LoRA at 0.5 for now.
If you want more movement, you can try a higher distilled LoRA strength. I usually use 1.0 on the first pass, with IC LoRA conditioning applied on the first pass only, like in my workflows. If that causes a style change, it's usually fixed in my workflow by the i2v upscale node. There's no significant 2D data in the model, so it's essentially applying 3D animation to the image: too much distilled strength, or too little conditioning strength, can change the style, but you probably want enough to get a lot of motion.
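For reference, adjusting a LoRA's strength with Diffusers uses the PEFT-backed adapter API (`load_lora_weights` plus `set_adapters`). A minimal sketch, assuming the pipeline from the snippet above; the `lora_repo` path and the `"distilled"` adapter name are hypothetical placeholders for the distilled CondSafe LoRA discussed here:

```python
def apply_distilled_lora(pipe, lora_repo, strength=0.5, adapter_name="distilled"):
    """Load a LoRA into the pipeline and set its blending strength.

    strength=0.5 matches the setting discussed above; 1.0 is the
    full first-pass value suggested for more movement.
    """
    # Register the LoRA weights under a named adapter.
    pipe.load_lora_weights(lora_repo, adapter_name=adapter_name)
    # Scale how strongly the adapter modulates the base weights.
    pipe.set_adapters([adapter_name], adapter_weights=[strength])
    return pipe
```

Call it as `apply_distilled_lora(pipe, "your-user/your-condsafe-lora", strength=0.5)` after creating the pipeline, and re-run `set_adapters` with a different weight to experiment without reloading.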