LTX-Video in Rust (Candle)

This repository provides a high-performance, native Rust implementation of LTX-Video using the Candle ML framework.

Features

  • 🦀 Native Rust: No Python dependency required for inference.
  • 🚀 Performance: Optimized for NVIDIA GPUs with Flash Attention v2 and cuDNN.
  • 💾 Memory Efficient: Supports GGUF quantization for the T5-XXL text encoder, plus VAE tiling/slicing for generating HD videos on consumer GPUs.
  • 🛠 Flexible: Easy-to-use CLI for video generation, and a library for custom integration.

Quick Start

Installation

Ensure you have Rust and the CUDA Toolkit installed, then:

git clone https://github.com/FerrisMind/candle-video
cd candle-video
cargo build --release --features flash-attn,cudnn
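Before building, you can quickly confirm the prerequisite toolchains are on your PATH (a minimal sanity check; `nvcc` ships with the CUDA Toolkit):

```shell
# Check each required tool; report any that are missing
for tool in rustc cargo nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If `nvcc` is missing, the `flash-attn` and `cudnn` features will fail to compile.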

Video Generation

cargo run --example ltx-video --release -- \
    --local-weights ./models/ltx-video \
    --prompt "A serene mountain lake at sunset, photorealistic, 4k" \
    --width 768 --height 512 --num-frames 97 \
    --steps 30
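The same invocation can be scripted to render several prompts in sequence. This sketch reuses only the flags shown above; the second prompt string is a placeholder of my own:

```shell
# Render a batch of prompts sequentially with identical settings
prompts=(
  "A serene mountain lake at sunset, photorealistic, 4k"
  "A bustling city street at night, neon lights, cinematic"  # placeholder prompt
)
for p in "${prompts[@]}"; do
  cargo run --example ltx-video --release -- \
      --local-weights ./models/ltx-video \
      --prompt "$p" \
      --width 768 --height 512 --num-frames 97 \
      --steps 30
done
```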

Performance & Memory

Resolution | Frames | VRAM (BF16) | VRAM (with VAE Tiling)
512x768    | 97     | ~8-13 GB    | ~8-9 GB

Note: Using the GGUF T5 encoder saves an additional ~8-12 GB of VRAM.

Credits


For more details, visit the main GitHub Repository.
