Update README.md

README.md CHANGED

@@ -28,8 +28,8 @@ language:
   - it
   - pt
 license: other
-license_name: ltx-2-
-license_link: https://
+license_name: ltx-2-community-license-agreement
+license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
 library_name: diffusers
 demo: https://app.ltx.studio/ltx-2-playground/i2v
 ---

@@ -39,7 +39,19 @@ This model card focuses on the LTX-2 model, codebase available [here](https://github.com/Lightricks/LTX-2)
 
 LTX-2 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.
 
-<
+<div class="card-video">
+  <video
+    src="./assets/ltx-2-intro.mp4"
+    width="854"
+    height="480"
+    autoPlay
+    loop
+    playsInline
+    muted
+    controls
+  >
+  </video>
+</div>
 
 # Model Checkpoints
 

@@ -66,7 +78,7 @@ LTX-2 is accessible right away via the following links:
 # Run locally
 
 ## Direct use license
-You can use the models - full, distilled, upscalers and any derivatives of the models - for purposes under the [license](
+You can use the models - full, distilled, upscalers, and any derivatives of the models - for purposes permitted under the [license](./LICENSE).
 
 ## ComfyUI
 We recommend you use the built-in LTXVideo nodes that can be found in the ComfyUI Manager.

@@ -109,9 +121,8 @@ LTX-2 is supported in the [Diffusers Python library](https://huggingface.co/docs/diffusers)
 - The model may generate content that is inappropriate or offensive.
 - When generating audio without speech, the audio may be of lower quality.
 
-
-
-
-
-
-
+# Train the model
+
+The base (dev) model is fully trainable.
+It is straightforward to reproduce the LoRAs and IC-LoRAs we publish with the model by following the instructions in the [LTX-2 Trainer README](https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-trainer/README.md).
+Training for motion, style, or likeness (sound + appearance) can take less than an hour in many settings.
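
The last hunk's context line points at the Diffusers integration, but the diff elides the usage snippet itself. For orientation only, here is a minimal text-to-video sketch. It follows the pattern of the earlier LTX pipelines in Diffusers (`DiffusionPipeline.from_pretrained`, `export_to_video`); the repo id `Lightricks/LTX-2`, the generation parameters, and the output handling are assumptions to be checked against the model card and the Diffusers docs, not confirmed LTX-2 API.

```python
# Hypothetical sketch: text-to-video via the Diffusers integration.
# The repo id and the exact pipeline class/outputs for LTX-2 are assumptions;
# consult the model card and Diffusers docs for the confirmed API.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# from_pretrained resolves the concrete pipeline class declared in the
# checkpoint's model_index.json, so no LTX-2-specific class is hard-coded here.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",          # assumed repo id
    torch_dtype=torch.bfloat16,  # reduced precision to fit consumer GPUs
)
pipe.to("cuda")

result = pipe(
    prompt="A red fox trotting through fresh snow, low tracking camera",
    negative_prompt="worst quality, blurry, jittery",
    width=768,
    height=512,
    num_frames=121,              # illustrative; follow the model card's guidance
    num_inference_steps=40,
)

# Earlier LTX pipelines expose generated frames under `.frames`; LTX-2 also
# produces audio, whose output field this diff does not show (assumption).
export_to_video(result.frames[0], "fox.mp4", fps=24)
```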
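
The new "Train the model" section says the published LoRAs and IC-LoRAs can be reproduced with the LTX-2 Trainer. As a hedged sketch of what applying such an adapter could look like, assuming the trainer writes Diffusers-compatible LoRA weights and that the LTX-2 pipeline exposes the standard `load_lora_weights`/`set_adapters` mixins (as earlier LTX pipelines do); the output path, adapter name, prompt, and scale below are illustrative.

```python
# Hypothetical sketch: applying a LoRA produced by the LTX-2 trainer.
# Assumes a Diffusers-compatible LoRA layout and the standard LoRA mixins.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2", torch_dtype=torch.bfloat16  # assumed repo id
).to("cuda")

# Load the trained adapter; the path and adapter name are illustrative.
pipe.load_lora_weights("./outputs/my-style-lora", adapter_name="my_style")
pipe.set_adapters(["my_style"], adapter_weights=[0.9])  # blend strength

video = pipe(prompt="A clay-animation fox in my_style").frames[0]
```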