LTX-2 19B Distilled (8-bit) - MLX

This is an 8-bit quantized version of the LTX-2 19B Distilled model, optimized for Apple Silicon using MLX.

Model Description

LTX-2 is a state-of-the-art video generation model from Lightricks. This version has been quantized to 8-bit precision for efficient inference on Apple Silicon devices with MLX.

Key Features

  • Pipeline: Distilled (faster generation, fewer steps required)
  • Quantization: 8-bit precision
  • Framework: MLX (Apple Silicon optimized)
  • Memory: ~19 GB of unified memory required
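The ~19 GB figure follows directly from the parameter count: a back-of-the-envelope sketch (assuming roughly 19 billion weights at 1 byte each for 8-bit precision, and ignoring activations and intermediate buffers):

```python
# Rough memory estimate for the quantized weights alone.
# Assumption: ~19e9 parameters, 1 byte per weight at 8-bit precision.
params = 19e9
bytes_per_weight = 1  # 8-bit quantization
weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~19 GB
```

Actual peak usage will be somewhat higher once activations and the text encoder are loaded.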

Usage

Installation

pip install git+https://github.com/CharafChnioune/mlx-video.git

Command Line

# Basic generation
mlx-video --prompt "A beautiful sunset over the ocean" \
    --model-repo AITRADER/ltx2-distilled-8bit-mlx \
    --pipeline distilled \
    --height 512 --width 512 \
    --num-frames 33

Python API

from mlx_video import generate_video

video = generate_video(
    prompt="A beautiful sunset over the ocean",
    model_repo="AITRADER/ltx2-distilled-8bit-mlx",
    pipeline="distilled",
    height=512,
    width=512,
    num_frames=33,
)
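Note that the example uses 512x512 at 33 frames. LTX-style video models commonly expect spatial dimensions divisible by 32 and a frame count of the form 8k + 1 (33 = 8×4 + 1). The helper below is a hypothetical sketch for snapping arbitrary values to that pattern; check the mlx-video documentation for this checkpoint's exact constraints:

```python
# Hypothetical helper: snap generation parameters to the sizes LTX-style
# models commonly expect (dims divisible by 32, frames of the form 8k + 1).
# This is an assumption about the model's constraints, not part of mlx-video.
def snap_dims(height: int, width: int, num_frames: int) -> tuple[int, int, int]:
    h = max(32, round(height / 32) * 32)
    w = max(32, round(width / 32) * 32)
    f = max(9, round((num_frames - 1) / 8) * 8 + 1)
    return h, w, f

print(snap_dims(500, 500, 30))  # (512, 512, 33)
```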

Model Files

  • ltx-2-19b-distilled-mlx.safetensors - Main model weights (8-bit quantized)
  • quantization.json - Quantization configuration
  • config.json - Model configuration
  • layer_report.json - Layer information

Performance

Resolution  Frames  Time
512x512     33      ~30s on M3 Max
768x512     33      ~45s on M3 Max
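In throughput terms, the timings above work out to roughly one frame per second (a quick conversion of the table's numbers, assuming they are end-to-end wall-clock times):

```python
# Convert the benchmark table to frames per second.
# Assumption: times are total wall-clock generation times.
runs = {"512x512": (33, 30), "768x512": (33, 45)}
for res, (frames, secs) in runs.items():
    print(f"{res}: {frames / secs:.2f} frames/sec")
```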

License

This model is released under the LTX Video License.

Acknowledgements

  • Lightricks for the original LTX-2 model
  • MLX team at Apple for the framework
  • mlx-video for the MLX conversion