
Update model card with paper, code links and correct pipeline tag

#1 by nielsr (HF Staff) - opened

Files changed (1): README.md (+26, -10)
README.md CHANGED
@@ -1,26 +1,31 @@
 ---
+base_model:
+- Lightricks/LTX-2
+datasets:
+- Lightricks/Canny-Control-Dataset
+language:
+- en
 license: other
 license_name: ltx-2-community-license
 license_link: https://www.github.com/Lightricks/LTX-2/LICENSE
-
+pipeline_tag: any-to-any
 tags:
 - ltx-video
 - image-to-video
 - text-to-video
 pinned: true
-language:
-- en
-pipeline_tag: text-to-video
-datasets:
-- Lightricks/Canny-Control-Dataset
-base_model:
-- Lightricks/LTX-2
 ---
 
 # LTX-2 19B IC-LoRA Canny Control
 
 This is a Canny control IC-LoRA trained on top of **LTX-2-19b**, enabling structure-preserving video generation from text and reference frames.
 
+It is based on the [LTX-2](https://huggingface.co/papers/2601.03233) foundation model.
+
+- **Paper:** [LTX-2: Efficient Joint Audio-Visual Foundation Model](https://huggingface.co/papers/2601.03233)
+- **Code:** [GitHub Repository](https://github.com/Lightricks/LTX-2)
+- **Project Page:** [LTX-2 Playground](https://app.ltx.studio/ltx-2-playground/i2v)
+
 ## What is In-Context LoRA (IC LoRA)?
 
 IC LoRA enables conditioning video generation on reference video frames at inference time, allowing fine-grained video-to-video control on top of a text-to-video base model.
@@ -42,11 +47,22 @@ See the **LTX-2-community-license** for full terms.
 
 ### 🔌 Using in ComfyUI
 1. Copy the LoRA weights into `models/loras`.
-2. Use the official IC-LoRA workflow from the LTX-2 ComfyUI repository.
+2. Use the official IC-LoRA workflow from the [LTX-2 ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo/).
 
 ## Dataset
 
-https://huggingface.co/datasets/Lightricks/Canny-Control-Dataset/
+The model was trained using the [Lightricks/Canny-Control-Dataset](https://huggingface.co/datasets/Lightricks/Canny-Control-Dataset/).
+
+## Citation
+
+```bibtex
+@article{hacohen2025ltx2,
+  title={LTX-2: Efficient Joint Audio-Visual Foundation Model},
+  author={HaCohen, Yoav and Brazowski, Benny and Chiprut, Nisan and Bitterman, Yaki and Kvochko, Andrew and Berkowitz, Avishai and Shalem, Daniel and Lifschitz, Daphna and Moshe, Dudu and Porat, Eitan and others},
+  journal={arXiv preprint arXiv:2601.03233},
+  year={2025}
+}
+```
 
 ## Acknowledgments
 
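The Canny control described in the updated card conditions generation on per-frame edge maps extracted from a reference clip. Below is a minimal preprocessing sketch, assuming OpenCV; the thresholds and the PNG-sequence output are illustrative choices, not the settings used to build the Canny-Control-Dataset.

```python
import cv2

def canny_frames(path: str, lo: int = 100, hi: int = 200):
    """Yield a Canny edge map for each frame of a reference video."""
    cap = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Thresholds are illustrative; tune them for your footage.
            yield cv2.Canny(gray, lo, hi)
    finally:
        cap.release()

# Usage: dump the edge maps as a numbered image sequence.
for i, edges in enumerate(canny_frames("reference.mp4")):
    cv2.imwrite(f"edges_{i:05d}.png", edges)
```

Keeping the edge maps as an image sequence makes it easy to feed them in as the reference frames the IC-LoRA conditions on, whatever loader the workflow expects.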
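Step 1 of the ComfyUI instructions (copying the LoRA weights into `models/loras`) can also be scripted. A sketch assuming `huggingface_hub`; the `repo_id` and `filename` below are hypothetical placeholders, so check the model repository's file listing for the real names.

```python
from huggingface_hub import hf_hub_download

# repo_id and filename are hypothetical placeholders; look up the
# actual values on the model page before running.
lora_path = hf_hub_download(
    repo_id="Lightricks/LTX-2-19B-IC-LoRA-Canny-Control",  # hypothetical
    filename="ltx2_19b_ic_lora_canny.safetensors",         # hypothetical
    local_dir="ComfyUI/models/loras",  # path inside your ComfyUI install
)
print(lora_path)
```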
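Outside ComfyUI, loading the LoRA through `diffusers` would typically look like the sketch below. This assumes a diffusers-compatible LTX-2 pipeline with LoRA support, which the card does not document; the ComfyUI workflow above remains the stated path.

```python
import torch
from diffusers import DiffusionPipeline

# Assumption: the LTX-2 base checkpoint resolves to a diffusers pipeline
# class that supports LoRA loading (as LTX-Video's pipeline does today).
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",  # base model from the card metadata
    torch_dtype=torch.bfloat16,
)
# Hypothetical repo id for this IC-LoRA; use the actual model repo.
pipe.load_lora_weights("Lightricks/LTX-2-19B-IC-LoRA-Canny-Control")
pipe.to("cuda")
```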