blanchon committed
Commit b59ff5d · verified · 1 Parent(s): 32fca61

Upload LTX2ImageToVideoPipeline

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer/tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,198 @@
+ ---
+ library_name: diffusers
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for use of the model without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for use of the model when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
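+
+ Until maintainer-provided usage code fills this in, the snippet below is a minimal sketch based on this commit's `model_index.json` (an `LTX2ImageToVideoPipeline` for `Lightricks/LTX-2`). It assumes the pipeline class is exposed by your `diffusers` install; two components (`connectors`, `vocoder`) are registered under an external `ltx2` library, so extra dependencies may be needed, and the exact call signature may differ.
+
+ ```python
+ import torch
+ from diffusers import LTX2ImageToVideoPipeline  # assumed import path
+ from diffusers.utils import export_to_video, load_image
+
+ # Load every component declared in model_index.json (transformer, video/audio
+ # VAEs, Gemma 3 text encoder, vocoder); bf16 matches the shipped weights.
+ pipe = LTX2ImageToVideoPipeline.from_pretrained(
+     "Lightricks/LTX-2", torch_dtype=torch.bfloat16
+ ).to("cuda")
+
+ image = load_image("input.png")  # hypothetical conditioning image
+ frames = pipe(image=image, prompt="A ship sailing into a storm").frames[0]
+ export_to_video(frames, "output.mp4", fps=24)
+ ```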
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here. -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
audio_vae/config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_class_name": "AutoencoderKLLTX2Audio",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--Lightricks--LTX-2/snapshots/ec81a2df13c166827fe169b189d39a94d7fad04d/audio_vae",
+   "attn_resolutions": null,
+   "base_channels": 128,
+   "causality_axis": "height",
+   "ch_mult": [
+     1,
+     2,
+     4
+   ],
+   "dropout": 0.0,
+   "in_channels": 2,
+   "is_causal": true,
+   "latent_channels": 8,
+   "mel_bins": 64,
+   "mel_hop_length": 160,
+   "mid_block_add_attention": false,
+   "norm_type": "pixel",
+   "num_res_blocks": 2,
+   "output_channels": 2,
+   "resolution": 256,
+   "sample_rate": 16000
+ }
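One rate can be read straight off this config: with a `mel_hop_length` of 160 samples at a `sample_rate` of 16000 Hz, the mel spectrogram the audio VAE consumes runs at 100 frames per second. A trivial sketch of that arithmetic (the per-axis latent downsampling implied by `ch_mult` is left aside as an implementation detail):

```python
# Derived from audio_vae/config.json: mel frame rate = sample_rate / hop length.
sample_rate = 16000
mel_hop_length = 160
print(sample_rate / mel_hop_length)  # 100.0 mel frames per second
```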
audio_vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96a24f2ec886fa08b8b6566e8094b973c7d2ba0855bcbf7046b4b7156d1d1355
+ size 63837076
connectors/config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "_class_name": "LTX2TextConnectors",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--Lightricks--LTX-2/snapshots/ec81a2df13c166827fe169b189d39a94d7fad04d/connectors",
+   "audio_connector_attention_head_dim": 128,
+   "audio_connector_num_attention_heads": 30,
+   "audio_connector_num_layers": 2,
+   "audio_connector_num_learnable_registers": 128,
+   "caption_channels": 3840,
+   "causal_temporal_positioning": false,
+   "connector_rope_base_seq_len": 4096,
+   "rope_double_precision": true,
+   "rope_theta": 10000.0,
+   "rope_type": "split",
+   "text_proj_in_factor": 49,
+   "video_connector_attention_head_dim": 128,
+   "video_connector_num_attention_heads": 30,
+   "video_connector_num_layers": 2,
+   "video_connector_num_learnable_registers": 128
+ }
connectors/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7c0ad36c2d0706fb229193d5c698f0ef50c9b33678140b4ee84723a047b4032
+ size 2862957976
model_index.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "_class_name": "LTX2ImageToVideoPipeline",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "Lightricks/LTX-2",
+   "audio_vae": [
+     "diffusers",
+     "AutoencoderKLLTX2Audio"
+   ],
+   "connectors": [
+     "ltx2",
+     "LTX2TextConnectors"
+   ],
+   "scheduler": [
+     "diffusers",
+     "FlowMatchEulerDiscreteScheduler"
+   ],
+   "text_encoder": [
+     "transformers",
+     "Gemma3ForConditionalGeneration"
+   ],
+   "tokenizer": [
+     "transformers",
+     "GemmaTokenizerFast"
+   ],
+   "transformer": [
+     "diffusers",
+     "LTX2VideoTransformer3DModel"
+   ],
+   "vae": [
+     "diffusers",
+     "AutoencoderKLLTX2Video"
+   ],
+   "vocoder": [
+     "ltx2",
+     "LTX2Vocoder"
+   ]
+ }
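Each entry in `model_index.json` maps a pipeline component to a `[library, class_name]` pair that `from_pretrained` resolves against the matching subfolder; note that `connectors` and `vocoder` point at an `ltx2` library rather than `diffusers`. A small sketch of reading that mapping, run next to a local copy of the file:

```python
import json

# Walk model_index.json and print where each component's class comes from.
with open("model_index.json") as f:
    index = json.load(f)

for name, value in index.items():
    if isinstance(value, list):  # component entries are [library, class_name]
        library, class_name = value
        print(f"{name}: {class_name} from `{library}` (weights in ./{name}/)")
```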
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_class_name": "FlowMatchEulerDiscreteScheduler",
+   "_diffusers_version": "0.37.0.dev0",
+   "base_image_seq_len": 1024,
+   "base_shift": 0.95,
+   "invert_sigmas": false,
+   "max_image_seq_len": 4096,
+   "max_shift": 2.05,
+   "num_train_timesteps": 1000,
+   "shift": 1.0,
+   "shift_terminal": 0.1,
+   "stochastic_sampling": false,
+   "time_shift_type": "exponential",
+   "use_beta_sigmas": false,
+   "use_dynamic_shifting": true,
+   "use_exponential_sigmas": false,
+   "use_karras_sigmas": false
+ }
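With `use_dynamic_shifting` enabled, the effective shift is interpolated between `base_shift` (0.95) and `max_shift` (2.05) by token-sequence length and applied through the `"exponential"` time shift. The sketch below follows the convention used by diffusers' flow-matching schedulers; the exact internals for LTX-2 may differ.

```python
import math

# Interpolate the shift mu for a sequence length (1024 -> 0.95, 4096 -> 2.05).
def mu_for(seq_len, base_seq=1024, max_seq=4096, base_shift=0.95, max_shift=2.05):
    m = (max_shift - base_shift) / (max_seq - base_seq)
    return m * seq_len + (base_shift - m * base_seq)

# "exponential" time shift: remap a sigma in (0, 1] by exp(mu).
def shift_sigma(sigma, mu):
    return math.exp(mu) / (math.exp(mu) + (1 / sigma - 1))

print(shift_sigma(0.5, mu_for(4096)))  # ~0.886: long sequences keep more noise
```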
text_encoder/config.json ADDED
@@ -0,0 +1,114 @@
+ {
+   "architectures": [
+     "Gemma3ForConditionalGeneration"
+   ],
+   "boi_token_index": 255999,
+   "dtype": "bfloat16",
+   "eoi_token_index": 256000,
+   "eos_token_id": [
+     1,
+     106
+   ],
+   "image_token_index": 262144,
+   "initializer_range": 0.02,
+   "mm_tokens_per_image": 256,
+   "model_type": "gemma3",
+   "text_config": {
+     "_sliding_window_pattern": 6,
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "attn_logit_softcapping": null,
+     "cache_implementation": "hybrid",
+     "dtype": "bfloat16",
+     "final_logit_softcapping": null,
+     "head_dim": 256,
+     "hidden_activation": "gelu_pytorch_tanh",
+     "hidden_size": 3840,
+     "initializer_range": 0.02,
+     "intermediate_size": 15360,
+     "layer_types": [
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "sliding_attention",
+       "full_attention"
+     ],
+     "max_position_embeddings": 131072,
+     "model_type": "gemma3_text",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 48,
+     "num_key_value_heads": 8,
+     "query_pre_attn_scalar": 256,
+     "rms_norm_eps": 1e-06,
+     "rope_local_base_freq": 10000,
+     "rope_scaling": {
+       "factor": 8.0,
+       "rope_type": "linear"
+     },
+     "rope_theta": 1000000,
+     "sliding_window": 1024,
+     "sliding_window_pattern": 6,
+     "use_bidirectional_attention": false,
+     "use_cache": true,
+     "vocab_size": 262208
+   },
+   "transformers_version": "4.57.3",
+   "vision_config": {
+     "attention_dropout": 0.0,
+     "dtype": "bfloat16",
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1152,
+     "image_size": 896,
+     "intermediate_size": 4304,
+     "layer_norm_eps": 1e-06,
+     "model_type": "siglip_vision_model",
+     "num_attention_heads": 16,
+     "num_channels": 3,
+     "num_hidden_layers": 27,
+     "patch_size": 14,
+     "vision_use_head": false
+   }
+ }
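The text encoder is a stock Gemma 3 multimodal checkpoint; its `text_config.hidden_size` of 3840 is exactly the `caption_channels` value the connectors and transformer configs expect. A hedged loading sketch, with the repo id assumed from `model_index.json`:

```python
import torch
from transformers import Gemma3ForConditionalGeneration

# Load only this subfolder; bf16 matches the "dtype" recorded in the config.
text_encoder = Gemma3ForConditionalGeneration.from_pretrained(
    "Lightricks/LTX-2", subfolder="text_encoder", torch_dtype=torch.bfloat16
)
print(text_encoder.config.text_config.hidden_size)  # 3840 == caption_channels
```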
text_encoder/generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 2,
+   "cache_implementation": "hybrid",
+   "eos_token_id": [
+     1,
+     106
+   ],
+   "pad_token_id": 0,
+   "transformers_version": "4.57.3"
+ }
text_encoder/model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6fb899db428481aafb45a20130457df6e247e7cb03b7d9f01ee4bc2a9a08138
+ size 4979902192
text_encoder/model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d251e7fe9799d529405ddb61705a44cd700bd30a8b66a8d44ae26ddf8365dbc6
+ size 4931296592
text_encoder/model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0684ef801385f0669a0b3e4ab160c50877efdbfa40eb97788595985de2743e78
+ size 4931296656
text_encoder/model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4b964e6526f81ccfa625c900b72ce92d5e0fd2debb75998763038ad06b9c541
+ size 4931296656
text_encoder/model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ef2de8f93e165b4e02425769fc566000b0674256ef0c3a27b23a0d45eb12088
+ size 4601000928
text_encoder/model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/chat_template.jinja ADDED
@@ -0,0 +1,47 @@
+ {{ bos_token }}
+ {%- if messages[0]['role'] == 'system' -%}
+ {%- if messages[0]['content'] is string -%}
+ {%- set first_user_prefix = messages[0]['content'] + '
+
+ ' -%}
+ {%- else -%}
+ {%- set first_user_prefix = messages[0]['content'][0]['text'] + '
+
+ ' -%}
+ {%- endif -%}
+ {%- set loop_messages = messages[1:] -%}
+ {%- else -%}
+ {%- set first_user_prefix = "" -%}
+ {%- set loop_messages = messages -%}
+ {%- endif -%}
+ {%- for message in loop_messages -%}
+ {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
+ {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
+ {%- endif -%}
+ {%- if (message['role'] == 'assistant') -%}
+ {%- set role = "model" -%}
+ {%- else -%}
+ {%- set role = message['role'] -%}
+ {%- endif -%}
+ {{ '<start_of_turn>' + role + '
+ ' + (first_user_prefix if loop.first else "") }}
+ {%- if message['content'] is string -%}
+ {{ message['content'] | trim }}
+ {%- elif message['content'] is iterable -%}
+ {%- for item in message['content'] -%}
+ {%- if item['type'] == 'image' -%}
+ {{ '<start_of_image>' }}
+ {%- elif item['type'] == 'text' -%}
+ {{ item['text'] | trim }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{ raise_exception("Invalid content type") }}
+ {%- endif -%}
+ {{ '<end_of_turn>
+ ' }}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ {{'<start_of_turn>model
+ '}}
+ {%- endif -%}
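This is the standard Gemma chat template: assistant turns are renamed to `model`, and system content is folded into the first user turn. A sketch of rendering it through the shipped tokenizer, with the repo id assumed from `model_index.json`:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Lightricks/LTX-2", subfolder="tokenizer")
messages = [{"role": "user", "content": "A ship sailing into a storm"}]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
# <bos><start_of_turn>user
# A ship sailing into a storm<end_of_turn>
# <start_of_turn>model
```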
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "boi_token": "<start_of_image>",
+   "bos_token": {
+     "content": "<bos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eoi_token": "<end_of_image>",
+   "eos_token": {
+     "content": "<eos>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "image_token": "<image_soft_token>",
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
+ size 33384568
tokenizer/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
transformer/config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "_class_name": "LTX2VideoTransformer3DModel",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--Lightricks--LTX-2/snapshots/ec81a2df13c166827fe169b189d39a94d7fad04d/transformer",
+   "activation_fn": "gelu-approximate",
+   "attention_bias": true,
+   "attention_head_dim": 128,
+   "attention_out_bias": true,
+   "audio_attention_head_dim": 64,
+   "audio_cross_attention_dim": 2048,
+   "audio_hop_length": 160,
+   "audio_in_channels": 128,
+   "audio_num_attention_heads": 32,
+   "audio_out_channels": 128,
+   "audio_patch_size": 1,
+   "audio_patch_size_t": 1,
+   "audio_pos_embed_max_pos": 20,
+   "audio_sampling_rate": 16000,
+   "audio_scale_factor": 4,
+   "base_height": 2048,
+   "base_width": 2048,
+   "caption_channels": 3840,
+   "causal_offset": 1,
+   "cross_attention_dim": 4096,
+   "cross_attn_timestep_scale_multiplier": 1000,
+   "in_channels": 128,
+   "norm_elementwise_affine": false,
+   "norm_eps": 1e-06,
+   "num_attention_heads": 32,
+   "num_layers": 48,
+   "out_channels": 128,
+   "patch_size": 1,
+   "patch_size_t": 1,
+   "pos_embed_max_pos": 20,
+   "qk_norm": "rms_norm_across_heads",
+   "rope_double_precision": true,
+   "rope_theta": 10000.0,
+   "rope_type": "split",
+   "timestep_scale_multiplier": 1000,
+   "vae_scale_factors": [
+     8,
+     32,
+     32
+   ]
+ }
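Two internal widths can be read off this config: 32 heads at 128 dims each gives 4096 for the video branch, which lines up with `cross_attention_dim`, and 32 audio heads at 64 dims gives 2048, which lines up with `audio_cross_attention_dim`, across 48 layers. A trivial bookkeeping check:

```python
# Dimension bookkeeping from transformer/config.json.
video_width = 32 * 128  # num_attention_heads * attention_head_dim
audio_width = 32 * 64   # audio_num_attention_heads * audio_attention_head_dim
assert video_width == 4096 and audio_width == 2048
```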
transformer/diffusion_pytorch_model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e86065901e2a420bd224ed3de434f56c8092ec1c078b2ea936f7cba9337ec47
+ size 9987228592
transformer/diffusion_pytorch_model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ff2e82c84864911aa2a857ebff65c80a105f42632f5a59a12f6db8fb68a2386
+ size 9970780296
transformer/diffusion_pytorch_model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28d06f63134b02e7868530593fb8fa24806ae30abe9ee54002a8ad4d5737d57c
+ size 9870256976
transformer/diffusion_pytorch_model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0f43c8f4bac294abe9293c26f8bbce78e122e85a0a7e452df8fce2abae8e40b
+ size 7924509816
transformer/diffusion_pytorch_model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
vae/config.json ADDED
@@ -0,0 +1,82 @@
+ {
+   "_class_name": "AutoencoderKLLTX2Video",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--Lightricks--LTX-2/snapshots/ec81a2df13c166827fe169b189d39a94d7fad04d/vae",
+   "block_out_channels": [
+     256,
+     512,
+     1024,
+     2048
+   ],
+   "decoder_block_out_channels": [
+     256,
+     512,
+     1024
+   ],
+   "decoder_causal": false,
+   "decoder_inject_noise": [
+     false,
+     false,
+     false,
+     false
+   ],
+   "decoder_layers_per_block": [
+     5,
+     5,
+     5,
+     5
+   ],
+   "decoder_spatial_padding_mode": "reflect",
+   "decoder_spatio_temporal_scaling": [
+     true,
+     true,
+     true
+   ],
+   "down_block_types": [
+     "LTX2VideoDownBlock3D",
+     "LTX2VideoDownBlock3D",
+     "LTX2VideoDownBlock3D",
+     "LTX2VideoDownBlock3D"
+   ],
+   "downsample_type": [
+     "spatial",
+     "temporal",
+     "spatiotemporal",
+     "spatiotemporal"
+   ],
+   "encoder_causal": true,
+   "encoder_spatial_padding_mode": "zeros",
+   "in_channels": 3,
+   "latent_channels": 128,
+   "layers_per_block": [
+     4,
+     6,
+     6,
+     2,
+     2
+   ],
+   "out_channels": 3,
+   "patch_size": 4,
+   "patch_size_t": 1,
+   "resnet_norm_eps": 1e-06,
+   "scaling_factor": 1.0,
+   "spatial_compression_ratio": 32,
+   "spatio_temporal_scaling": [
+     true,
+     true,
+     true,
+     true
+   ],
+   "temporal_compression_ratio": 8,
+   "timestep_conditioning": false,
+   "upsample_factor": [
+     2,
+     2,
+     2
+   ],
+   "upsample_residual": [
+     true,
+     true,
+     true
+   ]
+ }
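The video VAE compresses 32x spatially and 8x temporally into 128 latent channels. A sketch of the resulting latent shape, assuming the usual diffusers convention for causal video VAEs of `(T - 1) / 8 + 1` latent frames (since `encoder_causal` is true); the convention is assumed here, not confirmed by the config:

```python
# Latent-shape sketch from vae/config.json.
def latent_shape(num_frames, height, width):
    latent_frames = (num_frames - 1) // 8 + 1  # temporal_compression_ratio = 8
    return (128, latent_frames, height // 32, width // 32)

print(latent_shape(121, 512, 768))  # (128, 16, 16, 24)
```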
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88a8257e2e3358e4a5d5609782a47eefc1ae559051b8d44dddc669fda03e5bcc
+ size 2444982370
vocoder/config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "_class_name": "LTX2Vocoder",
+   "_diffusers_version": "0.37.0.dev0",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--Lightricks--LTX-2/snapshots/ec81a2df13c166827fe169b189d39a94d7fad04d/vocoder",
+   "hidden_channels": 1024,
+   "in_channels": 128,
+   "leaky_relu_negative_slope": 0.1,
+   "out_channels": 2,
+   "output_sampling_rate": 24000,
+   "resnet_dilations": [
+     [
+       1,
+       3,
+       5
+     ],
+     [
+       1,
+       3,
+       5
+     ],
+     [
+       1,
+       3,
+       5
+     ]
+   ],
+   "resnet_kernel_sizes": [
+     3,
+     7,
+     11
+   ],
+   "upsample_factors": [
+     6,
+     5,
+     2,
+     2,
+     2
+   ],
+   "upsample_kernel_sizes": [
+     16,
+     15,
+     8,
+     4,
+     4
+   ]
+ }
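The upsample factors multiply out to 6 * 5 * 2 * 2 * 2 = 240, so at the 24000 Hz output rate the vocoder consumes 100 input frames per second, which matches the audio VAE's mel rate (16000 / 160 = 100). A one-line consistency check:

```python
import math

# vocoder/config.json vs. audio_vae/config.json rate consistency.
frames_per_second = 24000 / math.prod([6, 5, 2, 2, 2])  # 100.0
assert frames_per_second == 16000 / 160
```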
vocoder/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15855fc59233b9cac50bdd1f0d2ccea4a5eaedbd7fd7549b16d5ebd6cc47d92a
+ size 111204124