---
pipeline_tag: image-to-video
tags:
- image-to-video
- text-to-video
- video-to-video
- image-text-to-video
- audio-to-video
- text-to-audio
- video-to-audio
- audio-to-audio
- text-to-audio-video
- image-to-audio-video
- image-text-to-audio-video
- ltx-2
- ltx-video
- ltxv
- lightricks
pinned: true
language:
- en
- de
- es
- fr
- ja
- ko
- zh
- it
- pt
license: other
license_name: ltx-2-community-license-agreement
license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
library_name: diffusers
demo: https://app.ltx.studio/ltx-2-playground/i2v
---

**Split version of the LTX-2 checkpoint: Model / VAE / Audio VAE / Text Encoder**

**Original model:** [https://huggingface.co/Lightricks/LTX-2](https://huggingface.co/Lightricks/LTX-2)

**Watch us on YouTube:** [@VantageWithAI](https://www.youtube.com/@vantagewithai)

# LTX-2 Model Card
This model card covers the LTX-2 model; the codebase is available [here](https://github.com/Lightricks/LTX-2).

LTX-2 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.

[![LTX-2 Open Source](https://img.youtube.com/vi/8fWAJXZJbRA/maxresdefault.jpg)](https://www.youtube.com/watch?v=8fWAJXZJbRA)

# Model Checkpoints

| Name | Notes |
|------|-------|
| ltx-2-19b-dev | The full model, flexible and trainable, in bf16 |
| ltx-2-19b-dev-fp8 | The full model in fp8 quantization |
| ltx-2-19b-dev-fp4 | The full model in nvfp4 quantization |
| ltx-2-19b-distilled | The distilled version of the full model; 8 steps, CFG=1 |
| ltx-2-19b-distilled-lora-384 | A LoRA version of the distilled model, applicable to the full model |
| ltx-2-spatial-upscaler-x2-1.0 | A 2x spatial upscaler for the LTX-2 latents, used in multi-stage (multiscale) pipelines for higher resolution |
| ltx-2-temporal-upscaler-x2-1.0 | A 2x temporal upscaler for the LTX-2 latents, used in multi-stage (multiscale) pipelines for higher FPS |

## Model Details
- **Developed by:** Lightricks
- **Model type:** Diffusion-based audio-video foundation model
- **Language(s):** English

# Online demo
LTX-2 is accessible right away via the following links:
- [LTX-Studio text-to-video](https://app.ltx.studio/ltx-2-playground/t2v)
- [LTX-Studio image-to-video](https://app.ltx.studio/ltx-2-playground/i2v)

# Run locally

## Direct use license
You can use the models (full, distilled, upscalers, and any derivatives of them) for purposes permitted under the [license](./LICENSE).

## ComfyUI
We recommend using the built-in LTXVideo nodes, available through the ComfyUI Manager.
For manual installation instructions, please refer to our [documentation site](https://docs.ltx.video/open-source-model/integration-tools/comfy-ui).

## PyTorch codebase

The [LTX-2 codebase](https://github.com/Lightricks/LTX-2) is a monorepo with several packages, from the model definition in `ltx-core` to pipelines in `ltx-pipelines` and training capabilities in `ltx-trainer`.
The codebase was tested with Python >= 3.12 and CUDA > 12.7, and supports PyTorch ~= 2.7.

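Before running the pipelines, it can help to confirm your environment matches these requirements. A minimal, purely illustrative check (not part of the repo):

```python
# Quick sanity check against the stated requirements:
# Python >= 3.12, CUDA > 12.7, PyTorch ~= 2.7.
import sys

import torch

assert sys.version_info >= (3, 12), "LTX-2 was tested with Python >= 3.12"
print("PyTorch version:", torch.__version__)         # expected ~= 2.7
print("CUDA build:", torch.version.cuda)             # > 12.7 recommended
print("CUDA available:", torch.cuda.is_available())  # a GPU is required in practice
```
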
## Diffusers 🧨

LTX-2 is supported in the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index) for image-to-video generation.

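As a rough starting point, image-to-video generation with Diffusers typically follows the pattern below. This is a minimal sketch: the repo id, the pipeline class resolved by `DiffusionPipeline`, and the exact call arguments are assumptions here, so check the Diffusers documentation linked above for the canonical usage.

```python
# Minimal image-to-video sketch with Diffusers. The checkpoint id, resolved
# pipeline class, and call signature are assumptions; verify against the docs.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",          # assumed repo id (see the original model link)
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("input.jpg")  # hypothetical conditioning image
frames = pipe(
    image=image,
    prompt="A calm lake at sunrise, gentle ripples, birds in the distance",
    width=768,                   # divisible by 32 (see General tips below)
    height=512,                  # divisible by 32
    num_frames=121,              # a multiple of 8, plus 1
).frames[0]
export_to_video(frames, "output.mp4", fps=24)
```
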
## General tips
* Width and height must be divisible by 32, and the frame count must be a multiple of 8 plus 1 (i.e., 8k + 1, such as 121).
* If the resolution or frame count does not satisfy these constraints, pad the input with -1 and then crop to the desired resolution and frame count; see the sketch after this list.
* For tips on writing effective prompts, please visit our [Prompting guide](https://ltx.video/blog/how-to-prompt-for-ltx-2).

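To make the sizing rules concrete, here is a small helper that rounds a requested size up to the nearest valid values. The function is purely illustrative and not part of any LTX-2 API:

```python
# Round width/height up to a multiple of 32 and the frame count up to the
# nearest 8*k + 1, per the constraints above. Illustrative helper only.
def pad_to_valid(width: int, height: int, num_frames: int) -> tuple[int, int, int]:
    def round_up(value: int, multiple: int) -> int:
        return ((value + multiple - 1) // multiple) * multiple

    return (
        round_up(width, 32),
        round_up(height, 32),
        round_up(num_frames - 1, 8) + 1,  # frames must be 8*k + 1
    )

print(pad_to_valid(720, 480, 120))  # -> (736, 480, 121)
```
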
### Limitations
- This model is not intended, or able, to provide factual information.
- As a statistical model, this checkpoint may amplify existing societal biases.
- The model may fail to generate videos that match the prompt perfectly.
- Prompt following is heavily influenced by prompting style.
- The model may generate content that is inappropriate or offensive.
- When generating audio without speech, the audio may be of lower quality.

# Train the model

The base (dev) model is fully trainable.

The LoRAs and IC-LoRAs we publish with the model are straightforward to reproduce by following the instructions in the [LTX-2 Trainer Readme](https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-trainer/README.md).

Training for motion, style, or likeness (sound + appearance) can take less than an hour in many settings.