LTX-2.3 22B Distilled (MLX, float16)

MLX-optimized weights for Lightricks/LTX-2.3 video generation model, converted for Apple Silicon.

Model Details

  • Model: LTX-2.3 22B Distilled (joint audio-video diffusion transformer)
  • Format: MLX safetensors (float16)
  • Total Size: ~66GB (transformer: 39GB, text encoder: 22GB, VAE: 1.7GB, upsampler: 1GB)
  • Minimum RAM: 64GB Apple Silicon Mac
  • Original: Lightricks/LTX-2.3

Also available in a 4-bit quantized version (35GB), which fits on 32GB Macs.

Architecture

| Component | Parameters | Size (fp16) |
|---|---|---|
| DiT Transformer (48 layers) | 22B | 39 GB |
| Gemma 3 12B (text encoder) | 12B | 22 GB |
| Video VAE Decoder | ~500M | 777 MB |
| Spatial Upsampler | ~500M | 950 MB |
| Audio VAE + Vocoder | ~500M | 348 MB |
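As a rough sanity check on the sizes above: float16 stores 2 bytes per parameter, so a naive estimate of checkpoint size is simply params × 2. The on-disk sizes in the table come in somewhat under this bound, since headline parameter counts are rounded. A minimal sketch:

```python
def fp16_size_gb(num_params: float) -> float:
    # float16 = 2 bytes per parameter; returns decimal gigabytes.
    return num_params * 2 / 1e9

print(fp16_size_gb(12e9))  # 24.0 -- vs ~22 GB on disk for Gemma 3 12B
```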

Benchmarks (M4 Max 128GB)

Head-to-head comparison on the same machine, same config (576x1024, 121 frames, 5s video):

| Stage | PyTorch MPS (BF16) | MLX FP16 (video+audio) | MLX FP16 (video-only) |
|---|---|---|---|
| Stage 1 (8 steps, half-res) | 66.7s | 67.1s | 51.2s |
| Stage 2 (3 steps, full-res) | 157.9s | 139.2s | 101.3s |
| Total denoising | 264.0s | 206.2s | 152.5s |
| Speedup | 1.0x | 1.3x | 1.7x |
| Peak memory | >60 GB | 46.8 GB | 34.2 GB |
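The speedup row follows directly from the total denoising times, using PyTorch MPS as the baseline. A quick check of the arithmetic:

```python
# Total denoising times (seconds) from the benchmark table above.
totals = {"pytorch_mps": 264.0, "mlx_av": 206.2, "mlx_video_only": 152.5}

baseline = totals["pytorch_mps"]
speedups = {k: round(baseline / v, 1) for k, v in totals.items()}
print(speedups)  # {'pytorch_mps': 1.0, 'mlx_av': 1.3, 'mlx_video_only': 1.7}
```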

Additional configs:

| Config | Resolution | Frames | Time | Memory |
|---|---|---|---|---|
| FP16 video-only | 512x512 | 49 (2s) | 35.1s | 34.2 GB |
| FP16 video-only | 320x512 | 25 (1s) | 10.4s | 31.3 GB |
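The frame counts in these tables follow a "multiple of fps plus one" pattern at 24 fps: 25 frames ≈ 1s, 49 ≈ 2s, 121 ≈ 5s. A small helper (hypothetical, not part of mlx-ltx) that maps a target duration to a frame count under that assumption:

```python
def frames_for_duration(seconds: float, fps: float = 24.0) -> int:
    # Frame counts used here are one more than a whole number of seconds
    # of frames: 25 ~ 1 s, 49 ~ 2 s, 121 ~ 5 s at 24 fps.
    return int(round(seconds * fps)) + 1

print(frames_for_duration(5))  # 121
```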

Key Advantages

  • 1.7x faster than PyTorch MPS in video-only mode
  • 42% less memory (34 GB vs 60+ GB) -- runs on 64GB Macs where PyTorch can't
  • Fused Metal attention via mx.fast.scaled_dot_product_attention
  • Unified memory -- no CPU/GPU transfer overhead
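The fused kernel computes softmax(QK^T / sqrt(d))V in a single Metal kernel instead of materializing the attention matrix through separate ops. For reference, a NumPy sketch of the unfused math it replaces (shapes and names are illustrative):

```python
import numpy as np

def sdpa_reference(q, k, v):
    """Unfused scaled dot-product attention: softmax(q k^T / sqrt(d)) v.

    q, k, v: arrays of shape (batch, heads, seq, head_dim).
    mx.fast.scaled_dot_product_attention produces the same result
    in one fused Metal kernel.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = k = v = np.random.default_rng(0).standard_normal((1, 2, 4, 8))
out = sdpa_reference(q, k, v)
print(out.shape)  # (1, 2, 4, 8)
```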

Usage

pip install mlx-ltx

from mlx_ltx.pipeline import DistilledPipeline, save_video

# Load the converted MLX weights
pipeline = DistilledPipeline("path/to/mlx-weights")

# 576x1024, 121 frames = ~5 s at 24 fps
video = pipeline(
    prompt="A golden retriever playing piano in a concert hall",
    height=576, width=1024, num_frames=121,
    seed=42,
)
save_video(video, "output.mp4", fps=24.0)

Conversion

Converted from the original PyTorch checkpoint using mlx-ltx:

mlx-ltx-convert \
  --checkpoint /path/to/ltx-2.3-22b-distilled.safetensors \
  --gemma-root /path/to/gemma-3-12b-it/ \
  --output-dir ./mlx-weights/

License

This model conversion follows the LTX-2 Community License.

