## Use with the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "adrepale/LTX2.3-10Eros-LoRA", torch_dtype=torch.bfloat16
)
# switch to "mps" for Apple devices
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

This is simply a full LoRA extraction of https://huggingface.co/TenStrip/LTX2.3-10Eros, made with the https://github.com/ethanfel/ComfyUI-LoRA-Optimizer suite.

You can use it with the base LTX2.3 in GGUF format, then add the 10Eros LoRA the same way as any other LoRA.

I'm using this distilled GGUF from Unsloth for my tests: https://huggingface.co/unsloth/LTX-2.3-GGUF/blob/main/distilled-1.1/ltx-2.3-22b-distilled-1.1-UD-Q4_K_M.gguf
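The GGUF-plus-LoRA workflow above can be sketched in Diffusers, which supports loading GGUF checkpoints for the transformer via `from_single_file` with a `GGUFQuantizationConfig`, then attaching the LoRA with the standard `load_lora_weights` call. This is a minimal sketch, not a tested recipe: the transformer class name `LTXVideoTransformer3DModel` and the base repo id `Lightricks/LTX-2.3` are assumptions (check the base model card for the actual ids that apply to LTX2.3).

```python
import torch
from diffusers import DiffusionPipeline, GGUFQuantizationConfig
# Assumption: the transformer class used by the LTX2.3 pipeline;
# substitute whatever class the base model card specifies.
from diffusers.models import LTXVideoTransformer3DModel

# Load the quantized base transformer directly from the GGUF file.
transformer = LTXVideoTransformer3DModel.from_single_file(
    "https://huggingface.co/unsloth/LTX-2.3-GGUF/blob/main/distilled-1.1/ltx-2.3-22b-distilled-1.1-UD-Q4_K_M.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Build the pipeline around the quantized transformer
# ("Lightricks/LTX-2.3" is a placeholder base repo id).
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2.3",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Attach the 10Eros LoRA the same way as any other Diffusers LoRA.
pipe.load_lora_weights("adrepale/LTX2.3-10Eros-LoRA")
pipe.to("cuda")
```

`load_lora_weights` fuses nothing by default; if you want to bake the LoRA into the base weights for slightly faster inference, `pipe.fuse_lora()` does that afterwards.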
