Original model: https://huggingface.co/Lightricks/LTX-2
Quantized to fp8_e5m2 to support older Triton paired with older PyTorch on 30-series GPUs (for example, the default installation of WanGP in Pinokio with Performance -> Compile Transformer Model enabled).
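For reference, a minimal sketch of how such a repack can be done, assuming the checkpoint's floating-point tensors are simply cast tensor-by-tensor to float8_e5m2; the file names and the exact conversion recipe here are assumptions, not the script actually used:

```python
# Sketch: cast a safetensors checkpoint to fp8_e5m2 (assumed workflow).
import torch
from safetensors.torch import load_file, save_file

src = "ltx-2-19b-dev-fp8_diffusion_model.safetensors"  # source weights (assumed name)
dst = "ltx-2-19b-dev-fp8_e5m2.safetensors"             # repacked output

state = load_file(src)
converted = {}
for name, tensor in state.items():
    if tensor.is_floating_point():
        # Go through float32, then down to e5m2 (wider exponent, fewer mantissa bits than e4m3).
        converted[name] = tensor.to(torch.float32).to(torch.float8_e5m2)
    else:
        # Leave integer / bool tensors untouched.
        converted[name] = tensor

save_file(converted, dst)
```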
If you have Triton >= 3.5 (which requires PyTorch >= 2.9), the default ltx-2-19b-dev-fp8_diffusion_model.safetensors may already work for you and you don't need this repack.
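A quick way to check your environment (a sketch; assumes the packaging module is installed, which it usually is alongside pip):

```python
# Print PyTorch/Triton versions and decide which weights you likely need.
import torch, triton
from packaging.version import Version

print("PyTorch:", torch.__version__, "Triton:", triton.__version__)
if Version(triton.__version__) >= Version("3.5"):
    print("Default fp8_e4m3 weights should work; this e5m2 repack is probably not needed.")
else:
    print("Older Triton detected; the fp8_e5m2 repack may be required.")
```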
Usage: download ltx-2-19b-dev-fp8_e5m2.safetensors (or ltx-2-19b-distilled-fp8_e5m2.safetensors if you want the distilled weights) and put it in the WanGP ckpts folder (e.g. pinokio\api\wan.git\app\ckpts for Pinokio). Rename the file to ltx-2-19b-dev-fp8_diffusion_model.safetensors (or ltx-2-19b-distilled-fp8_diffusion_model.safetensors if you downloaded the distilled weights). If the folder already contains a file with that name, rename the existing one (e.g. append _old) in case you want to return to the FP8_E4M3 weights.
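If you prefer to script the backup-and-rename step, here is a small sketch; the ckpts path and file names are illustrative and should be adjusted to your install:

```python
# Sketch: back up the existing fp8_e4m3 file and put the e5m2 repack in its place.
import os, shutil

ckpts = r"pinokio\api\wan.git\app\ckpts"            # adjust to your WanGP install
downloaded = "ltx-2-19b-dev-fp8_e5m2.safetensors"   # or the distilled variant
target = "ltx-2-19b-dev-fp8_diffusion_model.safetensors"

target_path = os.path.join(ckpts, target)
if os.path.exists(target_path):
    # Keep the original FP8_E4M3 weights around in case you want to revert.
    shutil.move(target_path, target_path + "_old")

shutil.move(os.path.join(ckpts, downloaded), target_path)
```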
WARNING: for unknown reasons, this does not work in ComfyUI - the result is a black video and -Inf errors for the audio. Possibly something there expects FP8_E4M3 specifically. If you manage to get it working in Comfy, let me know.