ofirbibi committed on
Commit 402dd47 · verified · 1 Parent(s): 33ced56

Update README.md

Files changed (1): README.md (+21 −10)
README.md CHANGED
@@ -28,8 +28,8 @@ language:
 - it
 - pt
 license: other
-license_name: ltx-2-open-weights-license
-license_link: https://static.lightricks.com/legal/ltx-2-open-weights-license-0.X.pdf
+license_name: ltx-2-community-license-agreement
+license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
 library_name: diffusers
 demo: https://app.ltx.studio/ltx-2-playground/i2v
 ---
@@ -39,7 +39,19 @@ This model card focuses on the LTX-2 model, codebase available [here](https://gi
 
 LTX-2 is a DiT-based audio-video foundation model designed to generate synchronized video and audio within a single model. It brings together the core building blocks of modern video generation, with open weights and a focus on practical, local execution.
 
-<img src="./media/trailer.gif" alt="trailer" width="512">
+<div class="card-video">
+  <video
+    src="./assets/ltx-2-intro.mp4"
+    width="854"
+    height="480"
+    autoPlay
+    loop
+    playsInline
+    muted
+    controls
+  >
+  </video>
+</div>
 
 # Model Checkpoints
 
@@ -66,7 +78,7 @@ LTX-2 is accessible right away via the following links:
 # Run locally
 
 ## Direct use license
-You can use the models - full, distilled, upscalers and any derivatives of the models - for purposes under the [license](https://static.lightricks.com/legal/ltx-2-open-weights-license-0.X.pdf).
+You can use the models - full, distilled, upscalers and any derivatives of the models - for purposes under the [license](./LICENSE).
 
 ## ComfyUI
 We recommend you use the built-in LTXVideo nodes that can be found in the ComfyUI Manager.
@@ -109,9 +121,8 @@ LTX-2 is supported in the [Diffusers Python library](https://huggingface.co/docs
 - The model may generate content that is inappropriate or offensive.
 - When generating audio without speech, the audio may be of lower quality.
 
-## Image-to-video examples
-| | | |
-|:---:|:---:|:---:|
-| ![example1](./media/ltx-video_i2v_example_00001.gif) | ![example2](./media/ltx-video_i2v_example_00002.gif) | ![example3](./media/ltx-video_i2v_example_00003.gif) |
-| ![example4](./media/ltx-video_i2v_example_00004.gif) | ![example5](./media/ltx-video_i2v_example_00005.gif) | ![example6](./media/ltx-video_i2v_example_00006.gif) |
-| ![example7](./media/ltx-video_i2v_example_00007.gif) | ![example8](./media/ltx-video_i2v_example_00008.gif) | ![example9](./media/ltx-video_i2v_example_00009.gif) |
+# Train the model
+
+The base (dev) model is fully trainable.
+It's extremely easy to reproduce the LoRAs and IC-LoRAs we publish with the model by following the instructions on the [LTX-2 Trainer Readme](https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-trainer/README.md).
+Training for motion, style or likeness (sound+appearance) can take less than an hour in many settings.
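The first hunk of this commit edits the model card's YAML front matter (the block between the `---` delimiters), swapping the `license_name` and `license_link` fields. As a rough illustration of how such metadata can be read programmatically, here is a minimal stdlib-only sketch; the helper `parse_front_matter` is hypothetical (not part of any Hugging Face tooling) and only handles simple top-level `key: value` scalars:

```python
def parse_front_matter(readme_text: str) -> dict:
    """Extract top-level `key: value` pairs from the YAML front matter,
    i.e. the block delimited by the first pair of '---' lines."""
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the front-matter block
        # Keep only simple scalar fields; skip list items such as "- it".
        if ":" in line and not line.lstrip().startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

# Sample mirroring the fields touched by the commit above.
sample = """---
license: other
license_name: ltx-2-community-license-agreement
license_link: https://github.com/Lightricks/LTX-2/blob/main/LICENSE
library_name: diffusers
---
# LTX-2
"""

meta = parse_front_matter(sample)
print(meta["license_name"])  # ltx-2-community-license-agreement
```

A full model card would instead be parsed with a real YAML library (e.g. PyYAML's `yaml.safe_load`); the sketch above just shows why the two changed lines are machine-readable metadata rather than prose.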