Alissonerdx committed
Commit a115a43 · verified · 1 parent: 9a7b2aa

Upload folder using huggingface_hub

README.md CHANGED
@@ -3,7 +3,108 @@ license: apache-2.0
  base_model:
  - Lightricks/LTX-2.3
  tags:
- - inpaint
  - ltx
  - lora
+ - inpaint
  ---

# LoRAs for LTX 2.3

Here I will share some LoRAs that I trained for **LTX 2.3**.

These LoRAs may cover different use cases over time, so this repository is not limited to inpainting.

## Models

| File | Description |
|---|---|
| `ltx23_inpaint_rank128_v1_02500steps.safetensors` | Follows the prompt better, probably because it overfit less. |
| `ltx23_inpaint_rank128_v1_10000steps.safetensors` | Follows the prompt less closely but makes better use of the mask area, probably because the longer training run overfit a more limited dataset. |

## Important inference notes for the inpainting LoRAs

These inpainting LoRAs were trained with a specific guide and mask setup, so input preparation during inference matters.

### How to use the mask

During inference, **do not pass the mask as a separate channel**.

The **mask must be embedded into the guide video**: the **mask video** and the **guide video** must be combined and treated as **a single video**.

After that, use the **`LTXVAddGuideMulti`** node to pass the guide video into the model.
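To illustrate that preparation step outside ComfyUI, here is a minimal NumPy sketch that paints the mask region into the guide frames so mask and guide travel as a single video. The function name and the mid-gray fill value are illustrative assumptions, not a convention confirmed by this repo — match whatever your workflow's guide-preparation nodes actually produce.

```python
import numpy as np

def embed_mask_into_guide(guide, mask, fill=0.5):
    """Burn the mask into the guide frames so mask + guide form one video.

    guide: (T, H, W, C) float frames in [0, 1]
    mask:  (T, H, W) float mask, 1.0 = region to inpaint

    NOTE: the mid-gray `fill` and this pixel convention are assumptions
    for illustration only.
    """
    out = guide.copy()
    out[mask > 0.5] = fill  # paint masked pixels across all channels
    return out

# Toy example: mask the top-left corner of the first frame.
guide = np.ones((2, 4, 4, 3), dtype=np.float32)
mask = np.zeros((2, 4, 4), dtype=np.float32)
mask[0, :2, :2] = 1.0
combined = embed_mask_into_guide(guide, mask)
```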

## About the mask format used during training

My dataset included samples where the mask was **blockified**; the default pattern used **8x8 blocks**.

To better reproduce the training conditions during inference, you can use the **`Blockify Mask`** node from **KJNodes**. This may bring the mask distribution closer to what the model saw during training.
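For a rough idea of what blockifying does, here is a NumPy sketch that snaps a mask to an 8x8 grid. The function name and the max-pooling rule are assumptions for illustration, not the KJNodes implementation — use the actual node in your workflow.

```python
import numpy as np

def blockify_mask(mask, block=8):
    """Snap a mask to a grid of `block` x `block` cells.

    A cell becomes fully masked if any pixel in it is masked
    (max-pool per cell, then upsample back), approximating the
    8x8 block pattern described above. Assumes H and W are
    multiples of `block`.
    """
    h, w = mask.shape
    cells = mask.reshape(h // block, block, w // block, block).max(axis=(1, 3))
    return np.kron(cells, np.ones((block, block), dtype=mask.dtype))

# A single masked pixel expands into its full 8x8 block.
m = np.zeros((16, 16), dtype=np.float32)
m[3, 5] = 1.0
bm = blockify_mask(m)
```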

## Notes

- **Base model:** `Lightricks/LTX-2.3`
- Checkpoint behavior may vary significantly in terms of:
  - prompt adherence
  - use of the masked area
  - overfitting tendency

## Practical recommendations

For the inpainting LoRAs in this repo:

- If you want **better prompt adherence**, try the **2500-step** checkpoint first
- If you want **better use of the masked area**, try the **10000-step** checkpoint first

The best approach is to compare both in your workflow, since the better choice may depend on the scene, mask, and prompt.

---

## Examples — 2500 Steps

### Example 1

**Model:** `ltx23_inpaint_rank128_v1_02500steps.safetensors`

**Video:**

**Prompt:**

---

### Example 2

**Model:** `ltx23_inpaint_rank128_v1_02500steps.safetensors`

**Video:**

**Prompt:**

---

## Examples — 10000 Steps

### Example 1

**Model:** `ltx23_inpaint_rank128_v1_10000steps.safetensors`

**Video:**

**Prompt:**

---

### Example 2

**Model:** `ltx23_inpaint_rank128_v1_10000steps.safetensors`

**Video:**

**Prompt:**
ltx23_inpaint_rank128_v1_02500steps.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e8d2082f79be715774026a4fbbddaa2d64154d4dc9b956ade5208cc4dd8adf8
+ size 1308756416
ltx23_inpaint_rank128_v1_10000steps.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f2089f5cd3cac56f93d3641c90695a7296514aa1292ec8c6f3a6ad369eda728
+ size 1308756416