---
license: other
license_name: ltx-2-community-license-agreement
license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
---

Original model: https://huggingface.co/Lightricks/LTX-2

Quantized to fp8_e5m2 to support older Triton with older PyTorch on 30-series GPUs (for example, the default installation of WanGP in Pinokio with Performance -> Compile Transformer Model enabled).

If you have Triton >= 3.5 (which requires PyTorch >= 2.9), the default ltx-2-19b-dev-fp8_diffusion_model.safetensors might work for you, and you don't need this.
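As a rough sketch of that rule of thumb (`needs_e5m2` is a hypothetical helper, not part of WanGP; the 3.5 cutoff is the one stated above):

```python
# Hypothetical helper: decide from the installed Triton version whether you
# need these FP8_E5M2 weights, per the rule of thumb in this card:
# Triton >= 3.5 should handle the default FP8_E4M3 file.
def needs_e5m2(triton_version: str) -> bool:
    major, minor = (int(x) for x in triton_version.split(".")[:2])
    return (major, minor) < (3, 5)

# To check a live install (requires triton):
# import triton
# print(needs_e5m2(triton.__version__))
```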

Usage: download ltx-2-19b-dev-fp8_e5m2.safetensors (or ltx-2-19b-distilled-fp8_e5m2.safetensors if you want to use the distilled weights) and put it in the WanGP ckpts folder (e.g. pinokio\api\wan.git\app\ckpts for Pinokio).
Rename the file to ltx-2-19b-dev-fp8_diffusion_model.safetensors (or ltx-2-19b-distilled-fp8_diffusion_model.safetensors if you downloaded the distilled weights). If the folder already contains a file with that name, rename the existing file (e.g. append _old) in case you want to return to the FP8_E4M3 weights.



WARNING: for unknown reasons, this does not work in ComfyUI - the result is a black video and -Inf errors for the audio. Possibly something there expects FP8_E4M3 specifically. If you manage to get it working in ComfyUI, let me know.