Update README.md
README.md
@@ -28,12 +28,14 @@ Gemma - either safetensor or GGUF:
- Gemma 3 12B it GGUF: https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/

Alternative LTX-2.3 GGUF models (for GGUF workflows) - one of the sources below:

1) Quantstack: https://huggingface.co/QuantStack/LTX-2.3-GGUF
2) Unsloth: https://huggingface.co/unsloth/LTX-2.3-GGUF
3) Vantage: https://huggingface.co/vantagewithai/LTX-2.3-GGUF

Tiny VAE (for sampler previews):
https://github.com/madebyollin/taehv/blob/main/safetensors/taeltx2_3.safetensors
(Optional/Recommended. Without this VAE you still get previews via latent RGB from KJnodes, at a lower resolution.)
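The downloads above can also be scripted. A minimal sketch using `huggingface-cli` (from the `huggingface_hub` package) and `wget`; the target directories are assumptions - point them at your own ComfyUI `models` folder, and browse the GGUF repo first to pick the quantization you want:

```shell
# Assumption: run from your ComfyUI root; adjust paths to your setup.

# Download one of the LTX-2.3 GGUF repos (QuantStack shown; the repo
# contains several quantizations - pick the file you need on the repo page).
huggingface-cli download QuantStack/LTX-2.3-GGUF --local-dir models/unet

# Tiny VAE for sampler previews (optional/recommended).
# Note: this uses GitHub's /raw/ path so wget fetches the file itself,
# not the HTML page behind the /blob/ link above.
wget -P models/vae_approx \
  https://github.com/madebyollin/taehv/raw/main/safetensors/taeltx2_3.safetensors
```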