Instructions to use adrepale/LTX2.3-10Eros-LoRA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use adrepale/LTX2.3-10Eros-LoRA with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the base LTX-2.3 pipeline, then apply this LoRA on top.
# Switch "cuda" to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2.3", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("adrepale/LTX2.3-10Eros-LoRA")
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
- Google Colab
- Kaggle
This is a full extraction of https://huggingface.co/TenStrip/LTX2.3-10Eros as a LoRA, created with the https://github.com/ethanfel/ComfyUI-LoRA-Optimizer suite.
You can use it with the base LTX2.3 model in GGUF format, then apply the 10Eros LoRA the same way as any other LoRA.
For my working tests I'm using this distilled GGUF from Unsloth: https://huggingface.co/unsloth/LTX-2.3-GGUF/blob/main/distilled-1.1/ltx-2.3-22b-distilled-1.1-UD-Q4_K_M.gguf
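If you prefer scripting over ComfyUI, the GGUF-base-plus-LoRA setup above can be sketched with Diffusers' GGUF support. This is a minimal, untested sketch, not a verified recipe: the `LTXVideoTransformer3DModel` class name and GGUF single-file support for this particular checkpoint are assumptions on my part, so check the Diffusers documentation for the exact classes that apply to LTX 2.3.

```python
import torch
from diffusers import DiffusionPipeline, GGUFQuantizationConfig
# Assumption: this is the transformer class Diffusers uses for LTX models.
from diffusers.models import LTXVideoTransformer3DModel

# Load the quantized distilled base transformer from the Unsloth GGUF.
# Assumption: Diffusers can load this file via from_single_file; verify
# against the Diffusers GGUF docs for your installed version.
gguf_url = (
    "https://huggingface.co/unsloth/LTX-2.3-GGUF/blob/main/"
    "distilled-1.1/ltx-2.3-22b-distilled-1.1-UD-Q4_K_M.gguf"
)
transformer = LTXVideoTransformer3DModel.from_single_file(
    gguf_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
)

# Build the pipeline around the quantized transformer, then add the LoRA
# the same way as any other LoRA.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2.3",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("adrepale/LTX2.3-10Eros-LoRA")
pipe.to("cuda")
```

This mirrors the ComfyUI workflow: quantized base first, LoRA applied on top afterward.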
Model tree for adrepale/LTX2.3-10Eros-LoRA
Base model
Lightricks/LTX-2.3