Update README.md
README.md CHANGED

@@ -22,10 +22,6 @@ Model Downloads:
 - Main split models used in these workflows (LTX-2.3 dev & distilled safetensor, embeddings, audio and video vae):
 https://huggingface.co/Kijai/LTX2.3_comfy
 
-- LTX-2.3 GGUF for GGUF workflows:
-Quantstack: https://huggingface.co/QuantStack/LTX-2.3-GGUF
-Vantage : https://huggingface.co/vantagewithai/LTX-2.3-GGUF
-
 - Gemma 3 12B it safetensor:
 https://huggingface.co/Comfy-Org/ltx-2/
 
@@ -33,6 +29,11 @@ https://huggingface.co/Comfy-Org/ltx-2/
 https://huggingface.co/unsloth/gemma-3-12b-it-GGUF/
 
 
+- Optional LTX-2.3 GGUF models (for GGUF workflows):
+1) Quantstack: https://huggingface.co/QuantStack/LTX-2.3-GGUF
+2) Vantage : https://huggingface.co/vantagewithai/LTX-2.3-GGUF
+
+
 ----
 
 Needed nodes:
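For scripted downloads, files in the Hugging Face repos linked above resolve to predictable direct-download URLs. A minimal sketch of that URL pattern, assuming the standard `/resolve/<revision>/<filename>` scheme (the repo ids come from the links in the diff; the filenames here are hypothetical examples, not actual files in those repos):

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Repo ids taken from the README diff; filenames are placeholders for illustration.
url = hf_file_url("QuantStack/LTX-2.3-GGUF", "example.gguf")
print(url)  # https://huggingface.co/QuantStack/LTX-2.3-GGUF/resolve/main/example.gguf
```

Such a URL can then be fetched with any downloader (`wget`, `curl`) instead of the web UI.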