Original model: https://huggingface.co/Lightricks/LTX-2
Quantized to fp8_e5m2 to support older Triton with older PyTorch on 30-series GPUs (for example, in the default installation of WanGP in Pinokio with Performance -> Compile Transformer Model enabled).
If you have Triton >= 3.5 (which requires PyTorch >= 2.9), the default ltx-2-19b-dev-fp8_diffusion_model.safetensors might work for you and you don't need this.
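That version gate can be expressed as a minimal sketch. The helper name is hypothetical and nothing here is part of WanGP; the only input is the ">= 3.5" cutoff stated above:

```python
# Hypothetical helper (not part of WanGP): decide whether you need the
# fp8_e5m2 checkpoint from this repo based on your installed Triton version.
def needs_e5m2_weights(triton_version: str) -> bool:
    """True if Triton is older than 3.5, i.e. the default fp8 (e4m3)
    checkpoint may not work and the e5m2 one should be used instead."""
    major, minor = (int(x) for x in triton_version.split(".")[:2])
    return (major, minor) < (3, 5)
```

You can check your installed version with `python -c "import triton; print(triton.__version__)"`.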
Usage: download ltx-2-19b-dev-fp8_e5m2.safetensors (or ltx-2-19b-distilled-fp8_e5m2.safetensors if you want to use the distilled weights) and put it in the WanGP ckpts folder (e.g. pinokio\api\wan.git\app\ckpts for Pinokio).
Rename the file to ltx-2-19b-dev-fp8_diffusion_model.safetensors (or ltx-2-19b-distilled-fp8_diffusion_model.safetensors if you downloaded the distilled weights). If the folder already has a file with that name, rename the existing one to _old in case you want to return to the FP8_E4M3 weights.
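The rename steps above can be sketched as a small script. `install_e5m2` is a hypothetical helper written for illustration, not something WanGP or Pinokio provides:

```python
# Hypothetical sketch of the rename steps above: swap the downloaded e5m2
# file in under the filename WanGP expects, keeping any existing e4m3 file
# with an _old suffix so you can switch back later.
from pathlib import Path

def install_e5m2(ckpts: Path, variant: str = "dev") -> Path:
    """`variant` is "dev" or "distilled"; `ckpts` is the WanGP ckpts folder."""
    expected = ckpts / f"ltx-2-19b-{variant}-fp8_diffusion_model.safetensors"
    downloaded = ckpts / f"ltx-2-19b-{variant}-fp8_e5m2.safetensors"
    if expected.exists():
        # Keep the original FP8_E4M3 weights around as *_old.
        expected.rename(expected.with_name(expected.name + "_old"))
    downloaded.rename(expected)
    return expected
```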
WARNING: for unknown reasons, this does not work in ComfyUI - the result is a black video and -Inf errors for the audio. Maybe something expects FP8_E4M3 specifically. If you manage to get it working in Comfy, let me know.