jiacheng-ye commited on
Commit
4cfd06c
·
verified ·
1 Parent(s): 60ffcba

Initial commit

Browse files
README.md ADDED
@@ -0,0 +1,94 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ library_name: transformers
3
+ tags:
4
+ - robotics
5
+ - vla
6
+ - image-text-to-text
7
+ - multimodal
8
+ - pretraining
9
+ license: apache-2.0
10
+ language:
11
+ - en
12
+ pipeline_tag: image-text-to-text
13
+ ---
14
+
15
+ # Dream-VLA 7B
16
+
17
+ Dream-VLA 7B is an open vision-language-action model built from a diffusion VLM [Dream-VL](https://huggingface.co/Dream-org/Dream-VL-7B).
18
+ The model takes language instructions and camera images as input and generates robot actions. It supports controlling multiple robots out-of-the-box, and can be quickly adapted for new robot domains via (parameter-efficient) fine-tuning.
19
+
20
+ All Dream-VLA checkpoints, as well as our [training codebase](https://github.com/DreamLM/DreamVLX) are released under an Apache 2.0 License.
21
+
22
+ For full details, please read [our blog](https://hkunlp.github.io/blog/2025/dream-vlx/) and paper (pending).
23
+
24
+ ## Model Summary
25
+
26
+ - **Model type:** Vision-language-action (language, image => robot actions)
27
+ - **Language(s) (NLP):** en
28
+ - **License:** apache-2.0
29
+ - **Finetuned from:** [`Dream-VL`](https://huggingface.co/Dream-org/Dream-VL-7B), a VLM trained from:
30
+ + **Vision Backbone**: Qwen2ViT
31
+ + **Language Model**: Dream-7B
32
+ - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/), with specific dataset mixture following [OpenVLA](https://github.com/openvla/openvla).
33
+ - **Repository:** [https://github.com/DreamLM/DreamVLX](https://github.com/DreamLM/DreamVLX)
34
+ - **Project Page & Videos:** [https://hkunlp.github.io/blog/2025/dream-vlx](https://hkunlp.github.io/blog/2025/dream-vlx/)
35
+
36
+ ## Uses
37
+
38
+ Dream-VLA models take a language instruction and a camera image of a robot workspace as input, and predict (normalized) robot actions consisting of 7-DoF end-effector deltas
39
+ of the form (x, y, z, roll, pitch, yaw, gripper). To execute on an actual robot platform, actions need to be *un-normalized* subject to statistics computed on a per-robot,
40
+ per-dataset basis. The available *un-normalized* keys are listed inside `config.json`.
41
+
42
+ Dream-VLA models can be used zero-shot to control robots for specific combinations of embodiments and domains seen in the Open X-Embodiment pretraining mixture (e.g., for
43
+ [BridgeV2 environments with a Widow-X robot](https://rail-berkeley.github.io/bridgedata/)). They can also be efficiently *fine-tuned* for new tasks and robot setups
44
+ given minimal demonstration data; [see here](https://github.com/DreamLM/DreamVLX).
45
+
46
+ Simialr as OpenVLA, Dream-VLA models also do not zero-shot generalize to new (unseen) robot embodiments, or setups that are not represented in the pretraining mix; in these cases,
47
+ we suggest collecting a dataset of demonstrations on the desired setup, and fine-tuning Dream-VLA models instead.
48
+
49
+ ## Getting Started
50
+
51
+ Dream-VLA 7B can be used to control multiple robots for domains represented in the pretraining mixture out-of-the-box. For example,
52
+ here is an example for loading Dream-VLA for zero-shot instruction following in the [BridgeV2 environments] with a Widow-X robot:
53
+
54
+ ```python
55
+ # Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, `flash_attn`, ...)
56
+ from transformers import AutoModel, AutoProcessor
57
+ from PIL import Image
58
+
59
+ import torch
60
+
61
+ # Load Processor & VLA
62
+ processor = AutoProcessor.from_pretrained("Dream-org/Dream-VLA-7B", trust_remote_code=True)
63
+ vla = AutoModel.from_pretrained(
64
+ "Dream-org/Dream-VLA-7B",
65
+ attn_implementation="flash_attention_2", # [Optional] Requires `flash_attn`
66
+ torch_dtype=torch.bfloat16,
67
+ low_cpu_mem_usage=True,
68
+ trust_remote_code=True
69
+ ).to("cuda:0")
70
+
71
+ # Grab image input & format prompt
72
+ image: Image.Image = get_from_camera(...)
73
+ prompt = "In: What action should the robot take to {<INSTRUCTION>}?\nOut:"
74
+
75
+ # Predict Action (7-DoF; un-normalize for BridgeV2)
76
+ inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
77
+ action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
78
+
79
+ # Execute...
80
+ robot.act(action, ...)
81
+ ```
82
+
83
+ ## Citation
84
+
85
+ **BibTeX:**
86
+
87
+ ```bibtex
88
+ @article{ye2025dreamvla,
89
+ title={Dream-VL & Dream-VLA: Open Vision-Language and Vision-Language-Action Models with Diffusion Language Model Backbone},
90
+ author={Ye, Jiacheng and Gong, Shansan and Gao, Jiahui and Fan, Junming and Wu, Shuang and Bi, Wei and Bai, Haoli and Shang, Lifeng and Kong, Lingpeng},
91
+ journal={arXiv preprint},
92
+ year={2025}
93
+ }
94
+ ```
added_tokens.json ADDED
@@ -0,0 +1,282 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "</tool_call>": 151658,
3
+ "<tool_call>": 151657,
4
+ "<|action_0|>": 151667,
5
+ "<|action_100|>": 151767,
6
+ "<|action_101|>": 151768,
7
+ "<|action_102|>": 151769,
8
+ "<|action_103|>": 151770,
9
+ "<|action_104|>": 151771,
10
+ "<|action_105|>": 151772,
11
+ "<|action_106|>": 151773,
12
+ "<|action_107|>": 151774,
13
+ "<|action_108|>": 151775,
14
+ "<|action_109|>": 151776,
15
+ "<|action_10|>": 151677,
16
+ "<|action_110|>": 151777,
17
+ "<|action_111|>": 151778,
18
+ "<|action_112|>": 151779,
19
+ "<|action_113|>": 151780,
20
+ "<|action_114|>": 151781,
21
+ "<|action_115|>": 151782,
22
+ "<|action_116|>": 151783,
23
+ "<|action_117|>": 151784,
24
+ "<|action_118|>": 151785,
25
+ "<|action_119|>": 151786,
26
+ "<|action_11|>": 151678,
27
+ "<|action_120|>": 151787,
28
+ "<|action_121|>": 151788,
29
+ "<|action_122|>": 151789,
30
+ "<|action_123|>": 151790,
31
+ "<|action_124|>": 151791,
32
+ "<|action_125|>": 151792,
33
+ "<|action_126|>": 151793,
34
+ "<|action_127|>": 151794,
35
+ "<|action_128|>": 151795,
36
+ "<|action_129|>": 151796,
37
+ "<|action_12|>": 151679,
38
+ "<|action_130|>": 151797,
39
+ "<|action_131|>": 151798,
40
+ "<|action_132|>": 151799,
41
+ "<|action_133|>": 151800,
42
+ "<|action_134|>": 151801,
43
+ "<|action_135|>": 151802,
44
+ "<|action_136|>": 151803,
45
+ "<|action_137|>": 151804,
46
+ "<|action_138|>": 151805,
47
+ "<|action_139|>": 151806,
48
+ "<|action_13|>": 151680,
49
+ "<|action_140|>": 151807,
50
+ "<|action_141|>": 151808,
51
+ "<|action_142|>": 151809,
52
+ "<|action_143|>": 151810,
53
+ "<|action_144|>": 151811,
54
+ "<|action_145|>": 151812,
55
+ "<|action_146|>": 151813,
56
+ "<|action_147|>": 151814,
57
+ "<|action_148|>": 151815,
58
+ "<|action_149|>": 151816,
59
+ "<|action_14|>": 151681,
60
+ "<|action_150|>": 151817,
61
+ "<|action_151|>": 151818,
62
+ "<|action_152|>": 151819,
63
+ "<|action_153|>": 151820,
64
+ "<|action_154|>": 151821,
65
+ "<|action_155|>": 151822,
66
+ "<|action_156|>": 151823,
67
+ "<|action_157|>": 151824,
68
+ "<|action_158|>": 151825,
69
+ "<|action_159|>": 151826,
70
+ "<|action_15|>": 151682,
71
+ "<|action_160|>": 151827,
72
+ "<|action_161|>": 151828,
73
+ "<|action_162|>": 151829,
74
+ "<|action_163|>": 151830,
75
+ "<|action_164|>": 151831,
76
+ "<|action_165|>": 151832,
77
+ "<|action_166|>": 151833,
78
+ "<|action_167|>": 151834,
79
+ "<|action_168|>": 151835,
80
+ "<|action_169|>": 151836,
81
+ "<|action_16|>": 151683,
82
+ "<|action_170|>": 151837,
83
+ "<|action_171|>": 151838,
84
+ "<|action_172|>": 151839,
85
+ "<|action_173|>": 151840,
86
+ "<|action_174|>": 151841,
87
+ "<|action_175|>": 151842,
88
+ "<|action_176|>": 151843,
89
+ "<|action_177|>": 151844,
90
+ "<|action_178|>": 151845,
91
+ "<|action_179|>": 151846,
92
+ "<|action_17|>": 151684,
93
+ "<|action_180|>": 151847,
94
+ "<|action_181|>": 151848,
95
+ "<|action_182|>": 151849,
96
+ "<|action_183|>": 151850,
97
+ "<|action_184|>": 151851,
98
+ "<|action_185|>": 151852,
99
+ "<|action_186|>": 151853,
100
+ "<|action_187|>": 151854,
101
+ "<|action_188|>": 151855,
102
+ "<|action_189|>": 151856,
103
+ "<|action_18|>": 151685,
104
+ "<|action_190|>": 151857,
105
+ "<|action_191|>": 151858,
106
+ "<|action_192|>": 151859,
107
+ "<|action_193|>": 151860,
108
+ "<|action_194|>": 151861,
109
+ "<|action_195|>": 151862,
110
+ "<|action_196|>": 151863,
111
+ "<|action_197|>": 151864,
112
+ "<|action_198|>": 151865,
113
+ "<|action_199|>": 151866,
114
+ "<|action_19|>": 151686,
115
+ "<|action_1|>": 151668,
116
+ "<|action_200|>": 151867,
117
+ "<|action_201|>": 151868,
118
+ "<|action_202|>": 151869,
119
+ "<|action_203|>": 151870,
120
+ "<|action_204|>": 151871,
121
+ "<|action_205|>": 151872,
122
+ "<|action_206|>": 151873,
123
+ "<|action_207|>": 151874,
124
+ "<|action_208|>": 151875,
125
+ "<|action_209|>": 151876,
126
+ "<|action_20|>": 151687,
127
+ "<|action_210|>": 151877,
128
+ "<|action_211|>": 151878,
129
+ "<|action_212|>": 151879,
130
+ "<|action_213|>": 151880,
131
+ "<|action_214|>": 151881,
132
+ "<|action_215|>": 151882,
133
+ "<|action_216|>": 151883,
134
+ "<|action_217|>": 151884,
135
+ "<|action_218|>": 151885,
136
+ "<|action_219|>": 151886,
137
+ "<|action_21|>": 151688,
138
+ "<|action_220|>": 151887,
139
+ "<|action_221|>": 151888,
140
+ "<|action_222|>": 151889,
141
+ "<|action_223|>": 151890,
142
+ "<|action_224|>": 151891,
143
+ "<|action_225|>": 151892,
144
+ "<|action_226|>": 151893,
145
+ "<|action_227|>": 151894,
146
+ "<|action_228|>": 151895,
147
+ "<|action_229|>": 151896,
148
+ "<|action_22|>": 151689,
149
+ "<|action_230|>": 151897,
150
+ "<|action_231|>": 151898,
151
+ "<|action_232|>": 151899,
152
+ "<|action_233|>": 151900,
153
+ "<|action_234|>": 151901,
154
+ "<|action_235|>": 151902,
155
+ "<|action_236|>": 151903,
156
+ "<|action_237|>": 151904,
157
+ "<|action_238|>": 151905,
158
+ "<|action_239|>": 151906,
159
+ "<|action_23|>": 151690,
160
+ "<|action_240|>": 151907,
161
+ "<|action_241|>": 151908,
162
+ "<|action_242|>": 151909,
163
+ "<|action_243|>": 151910,
164
+ "<|action_244|>": 151911,
165
+ "<|action_245|>": 151912,
166
+ "<|action_246|>": 151913,
167
+ "<|action_247|>": 151914,
168
+ "<|action_248|>": 151915,
169
+ "<|action_249|>": 151916,
170
+ "<|action_24|>": 151691,
171
+ "<|action_250|>": 151917,
172
+ "<|action_251|>": 151918,
173
+ "<|action_252|>": 151919,
174
+ "<|action_253|>": 151920,
175
+ "<|action_254|>": 151921,
176
+ "<|action_255|>": 151922,
177
+ "<|action_25|>": 151692,
178
+ "<|action_26|>": 151693,
179
+ "<|action_27|>": 151694,
180
+ "<|action_28|>": 151695,
181
+ "<|action_29|>": 151696,
182
+ "<|action_2|>": 151669,
183
+ "<|action_30|>": 151697,
184
+ "<|action_31|>": 151698,
185
+ "<|action_32|>": 151699,
186
+ "<|action_33|>": 151700,
187
+ "<|action_34|>": 151701,
188
+ "<|action_35|>": 151702,
189
+ "<|action_36|>": 151703,
190
+ "<|action_37|>": 151704,
191
+ "<|action_38|>": 151705,
192
+ "<|action_39|>": 151706,
193
+ "<|action_3|>": 151670,
194
+ "<|action_40|>": 151707,
195
+ "<|action_41|>": 151708,
196
+ "<|action_42|>": 151709,
197
+ "<|action_43|>": 151710,
198
+ "<|action_44|>": 151711,
199
+ "<|action_45|>": 151712,
200
+ "<|action_46|>": 151713,
201
+ "<|action_47|>": 151714,
202
+ "<|action_48|>": 151715,
203
+ "<|action_49|>": 151716,
204
+ "<|action_4|>": 151671,
205
+ "<|action_50|>": 151717,
206
+ "<|action_51|>": 151718,
207
+ "<|action_52|>": 151719,
208
+ "<|action_53|>": 151720,
209
+ "<|action_54|>": 151721,
210
+ "<|action_55|>": 151722,
211
+ "<|action_56|>": 151723,
212
+ "<|action_57|>": 151724,
213
+ "<|action_58|>": 151725,
214
+ "<|action_59|>": 151726,
215
+ "<|action_5|>": 151672,
216
+ "<|action_60|>": 151727,
217
+ "<|action_61|>": 151728,
218
+ "<|action_62|>": 151729,
219
+ "<|action_63|>": 151730,
220
+ "<|action_64|>": 151731,
221
+ "<|action_65|>": 151732,
222
+ "<|action_66|>": 151733,
223
+ "<|action_67|>": 151734,
224
+ "<|action_68|>": 151735,
225
+ "<|action_69|>": 151736,
226
+ "<|action_6|>": 151673,
227
+ "<|action_70|>": 151737,
228
+ "<|action_71|>": 151738,
229
+ "<|action_72|>": 151739,
230
+ "<|action_73|>": 151740,
231
+ "<|action_74|>": 151741,
232
+ "<|action_75|>": 151742,
233
+ "<|action_76|>": 151743,
234
+ "<|action_77|>": 151744,
235
+ "<|action_78|>": 151745,
236
+ "<|action_79|>": 151746,
237
+ "<|action_7|>": 151674,
238
+ "<|action_80|>": 151747,
239
+ "<|action_81|>": 151748,
240
+ "<|action_82|>": 151749,
241
+ "<|action_83|>": 151750,
242
+ "<|action_84|>": 151751,
243
+ "<|action_85|>": 151752,
244
+ "<|action_86|>": 151753,
245
+ "<|action_87|>": 151754,
246
+ "<|action_88|>": 151755,
247
+ "<|action_89|>": 151756,
248
+ "<|action_8|>": 151675,
249
+ "<|action_90|>": 151757,
250
+ "<|action_91|>": 151758,
251
+ "<|action_92|>": 151759,
252
+ "<|action_93|>": 151760,
253
+ "<|action_94|>": 151761,
254
+ "<|action_95|>": 151762,
255
+ "<|action_96|>": 151763,
256
+ "<|action_97|>": 151764,
257
+ "<|action_98|>": 151765,
258
+ "<|action_99|>": 151766,
259
+ "<|action_9|>": 151676,
260
+ "<|beginoftext|>": 151665,
261
+ "<|box_end|>": 151649,
262
+ "<|box_start|>": 151648,
263
+ "<|endoftext|>": 151643,
264
+ "<|file_sep|>": 151664,
265
+ "<|fim_middle|>": 151660,
266
+ "<|fim_pad|>": 151662,
267
+ "<|fim_prefix|>": 151659,
268
+ "<|fim_suffix|>": 151661,
269
+ "<|im_end|>": 151645,
270
+ "<|im_start|>": 151644,
271
+ "<|image_pad|>": 151655,
272
+ "<|mask|>": 151666,
273
+ "<|object_ref_end|>": 151647,
274
+ "<|object_ref_start|>": 151646,
275
+ "<|quad_end|>": 151651,
276
+ "<|quad_start|>": 151650,
277
+ "<|repo_name|>": 151663,
278
+ "<|video_pad|>": 151656,
279
+ "<|vision_end|>": 151653,
280
+ "<|vision_pad|>": 151654,
281
+ "<|vision_start|>": 151652
282
+ }
chat_template.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ {
2
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|image_pad|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|video_pad|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
3
+ }
config.json ADDED
@@ -0,0 +1,3190 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "architectures": [
3
+ "DreamVLAForActionPrediction"
4
+ ],
5
+ "attention_dropout": 0.0,
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_dreamvl.DreamVLAConfig",
8
+ "AutoModel": "modeling_dreamvl.DreamVLAForActionPrediction"
9
+ },
10
+ "bos_token_id": 151643,
11
+ "eos_token_id": 151643,
12
+ "full_attn_mask": true,
13
+ "hidden_act": "silu",
14
+ "hidden_size": 3584,
15
+ "image_token_id": 151655,
16
+ "initializer_range": 0.02,
17
+ "intermediate_size": 18944,
18
+ "mask_token_id": 151666,
19
+ "max_position_embeddings": 131072,
20
+ "max_window_layers": 28,
21
+ "model_type": "dream-vla",
22
+ "mrope_section": [
23
+ 16,
24
+ 24,
25
+ 24
26
+ ],
27
+ "n_action_bins": 256,
28
+ "norm_stats": {
29
+ "austin_buds_dataset_converted_externally_to_rlds": {
30
+ "action": {
31
+ "mask": [
32
+ true,
33
+ true,
34
+ true,
35
+ true,
36
+ true,
37
+ true,
38
+ false
39
+ ],
40
+ "max": [
41
+ 1.0,
42
+ 1.0,
43
+ 1.0,
44
+ 0.0,
45
+ 0.0,
46
+ 0.0,
47
+ 1.0
48
+ ],
49
+ "mean": [
50
+ -0.07678322494029999,
51
+ 0.003684911411255598,
52
+ 0.056449249386787415,
53
+ 0.0,
54
+ 0.0,
55
+ 0.0,
56
+ 0.3510494828224182
57
+ ],
58
+ "min": [
59
+ -1.0,
60
+ -1.0,
61
+ -1.0,
62
+ 0.0,
63
+ 0.0,
64
+ 0.0,
65
+ 0.0
66
+ ],
67
+ "q01": [
68
+ -1.0,
69
+ -0.9599999785423279,
70
+ -0.8714285492897034,
71
+ 0.0,
72
+ 0.0,
73
+ 0.0,
74
+ 0.0
75
+ ],
76
+ "q99": [
77
+ 1.0,
78
+ 0.8600000143051147,
79
+ 1.0,
80
+ 0.0,
81
+ 0.0,
82
+ 0.0,
83
+ 1.0
84
+ ],
85
+ "std": [
86
+ 0.6367751359939575,
87
+ 0.3788909912109375,
88
+ 0.47796276211738586,
89
+ 0.0,
90
+ 0.0,
91
+ 0.0,
92
+ 0.4772103726863861
93
+ ]
94
+ },
95
+ "num_trajectories": 50,
96
+ "num_transitions": 34112,
97
+ "proprio": {
98
+ "max": [
99
+ 0.0,
100
+ 0.0,
101
+ 0.0,
102
+ 0.0,
103
+ 0.0,
104
+ 0.0,
105
+ 0.0
106
+ ],
107
+ "mean": [
108
+ 0.0,
109
+ 0.0,
110
+ 0.0,
111
+ 0.0,
112
+ 0.0,
113
+ 0.0,
114
+ 0.0
115
+ ],
116
+ "min": [
117
+ 0.0,
118
+ 0.0,
119
+ 0.0,
120
+ 0.0,
121
+ 0.0,
122
+ 0.0,
123
+ 0.0
124
+ ],
125
+ "q01": [
126
+ 0.0,
127
+ 0.0,
128
+ 0.0,
129
+ 0.0,
130
+ 0.0,
131
+ 0.0,
132
+ 0.0
133
+ ],
134
+ "q99": [
135
+ 0.0,
136
+ 0.0,
137
+ 0.0,
138
+ 0.0,
139
+ 0.0,
140
+ 0.0,
141
+ 0.0
142
+ ],
143
+ "std": [
144
+ 0.0,
145
+ 0.0,
146
+ 0.0,
147
+ 0.0,
148
+ 0.0,
149
+ 0.0,
150
+ 0.0
151
+ ]
152
+ }
153
+ },
154
+ "austin_sailor_dataset_converted_externally_to_rlds": {
155
+ "action": {
156
+ "mask": [
157
+ true,
158
+ true,
159
+ true,
160
+ true,
161
+ true,
162
+ true,
163
+ false
164
+ ],
165
+ "max": [
166
+ 1.0,
167
+ 1.0,
168
+ 1.0,
169
+ 0.0,
170
+ 0.0,
171
+ 0.375,
172
+ 1.0
173
+ ],
174
+ "mean": [
175
+ 0.011825310997664928,
176
+ 0.006461076904088259,
177
+ 0.06023634970188141,
178
+ 0.0,
179
+ 0.0,
180
+ 0.001646595192141831,
181
+ 0.5260950326919556
182
+ ],
183
+ "min": [
184
+ -1.0,
185
+ -1.0,
186
+ -1.0,
187
+ 0.0,
188
+ 0.0,
189
+ -0.375,
190
+ 0.0
191
+ ],
192
+ "q01": [
193
+ -1.0,
194
+ -0.9828571677207947,
195
+ -0.6000000238418579,
196
+ 0.0,
197
+ 0.0,
198
+ -0.17249999940395355,
199
+ 0.0
200
+ ],
201
+ "q99": [
202
+ 1.0,
203
+ 0.9457142949104309,
204
+ 1.0,
205
+ 0.0,
206
+ 0.0,
207
+ 0.17892856895923615,
208
+ 1.0
209
+ ],
210
+ "std": [
211
+ 0.46348825097084045,
212
+ 0.41240230202674866,
213
+ 0.4118625819683075,
214
+ 0.0,
215
+ 0.0,
216
+ 0.05786110833287239,
217
+ 0.4989398121833801
218
+ ]
219
+ },
220
+ "num_trajectories": 240,
221
+ "num_transitions": 353094,
222
+ "proprio": {
223
+ "max": [
224
+ 0.0,
225
+ 0.0,
226
+ 0.0,
227
+ 0.0,
228
+ 0.0,
229
+ 0.0,
230
+ 0.0
231
+ ],
232
+ "mean": [
233
+ 0.0,
234
+ 0.0,
235
+ 0.0,
236
+ 0.0,
237
+ 0.0,
238
+ 0.0,
239
+ 0.0
240
+ ],
241
+ "min": [
242
+ 0.0,
243
+ 0.0,
244
+ 0.0,
245
+ 0.0,
246
+ 0.0,
247
+ 0.0,
248
+ 0.0
249
+ ],
250
+ "q01": [
251
+ 0.0,
252
+ 0.0,
253
+ 0.0,
254
+ 0.0,
255
+ 0.0,
256
+ 0.0,
257
+ 0.0
258
+ ],
259
+ "q99": [
260
+ 0.0,
261
+ 0.0,
262
+ 0.0,
263
+ 0.0,
264
+ 0.0,
265
+ 0.0,
266
+ 0.0
267
+ ],
268
+ "std": [
269
+ 0.0,
270
+ 0.0,
271
+ 0.0,
272
+ 0.0,
273
+ 0.0,
274
+ 0.0,
275
+ 0.0
276
+ ]
277
+ }
278
+ },
279
+ "austin_sirius_dataset_converted_externally_to_rlds": {
280
+ "action": {
281
+ "mask": [
282
+ true,
283
+ true,
284
+ true,
285
+ true,
286
+ true,
287
+ true,
288
+ false
289
+ ],
290
+ "max": [
291
+ 1.0002285242080688,
292
+ 0.960608720779419,
293
+ 1.105179786682129,
294
+ 0.0,
295
+ 0.0,
296
+ 0.341785728931427,
297
+ 1.0
298
+ ],
299
+ "mean": [
300
+ 0.077476866543293,
301
+ 0.031955525279045105,
302
+ 0.04244736582040787,
303
+ 0.0,
304
+ 0.0,
305
+ -0.01603453978896141,
306
+ 0.43260183930397034
307
+ ],
308
+ "min": [
309
+ -1.0183025598526,
310
+ -0.9800000190734863,
311
+ -0.9774575233459473,
312
+ 0.0,
313
+ 0.0,
314
+ -0.34607142210006714,
315
+ 0.0
316
+ ],
317
+ "q01": [
318
+ -0.780905865430832,
319
+ -0.5667179036140442,
320
+ -0.5254343223571777,
321
+ 0.0,
322
+ 0.0,
323
+ -0.28495091378688814,
324
+ 0.0
325
+ ],
326
+ "q99": [
327
+ 0.9569637751579284,
328
+ 0.6971374487876891,
329
+ 0.8124888157844541,
330
+ 0.0,
331
+ 0.0,
332
+ 0.1971428543329239,
333
+ 1.0
334
+ ],
335
+ "std": [
336
+ 0.3906329870223999,
337
+ 0.2998153865337372,
338
+ 0.2782270312309265,
339
+ 0.0,
340
+ 0.0,
341
+ 0.08120627701282501,
342
+ 0.4952819347381592
343
+ ]
344
+ },
345
+ "num_trajectories": 559,
346
+ "num_transitions": 279939,
347
+ "proprio": {
348
+ "max": [
349
+ 0.0,
350
+ 0.0,
351
+ 0.0,
352
+ 0.0,
353
+ 0.0,
354
+ 0.0,
355
+ 0.0
356
+ ],
357
+ "mean": [
358
+ 0.0,
359
+ 0.0,
360
+ 0.0,
361
+ 0.0,
362
+ 0.0,
363
+ 0.0,
364
+ 0.0
365
+ ],
366
+ "min": [
367
+ 0.0,
368
+ 0.0,
369
+ 0.0,
370
+ 0.0,
371
+ 0.0,
372
+ 0.0,
373
+ 0.0
374
+ ],
375
+ "q01": [
376
+ 0.0,
377
+ 0.0,
378
+ 0.0,
379
+ 0.0,
380
+ 0.0,
381
+ 0.0,
382
+ 0.0
383
+ ],
384
+ "q99": [
385
+ 0.0,
386
+ 0.0,
387
+ 0.0,
388
+ 0.0,
389
+ 0.0,
390
+ 0.0,
391
+ 0.0
392
+ ],
393
+ "std": [
394
+ 0.0,
395
+ 0.0,
396
+ 0.0,
397
+ 0.0,
398
+ 0.0,
399
+ 0.0,
400
+ 0.0
401
+ ]
402
+ }
403
+ },
404
+ "bc_z": {
405
+ "action": {
406
+ "mask": [
407
+ true,
408
+ true,
409
+ true,
410
+ true,
411
+ true,
412
+ true,
413
+ false
414
+ ],
415
+ "max": [
416
+ 0.2165454924106598,
417
+ 0.1251407265663147,
418
+ 0.10772687941789627,
419
+ 0.33544227480888367,
420
+ 0.28117990493774414,
421
+ 0.40614867210388184,
422
+ 1.0
423
+ ],
424
+ "mean": [
425
+ -0.00995862390846014,
426
+ 0.0008958307444117963,
427
+ 0.0049952236004173756,
428
+ 0.0002975410898216069,
429
+ -0.008734572678804398,
430
+ -0.03068925067782402,
431
+ 0.8344562649726868
432
+ ],
433
+ "min": [
434
+ -0.1677047461271286,
435
+ -0.14630407094955444,
436
+ -0.10066790133714676,
437
+ -0.29421567916870117,
438
+ -0.32101404666900635,
439
+ -0.4635624885559082,
440
+ 0.0
441
+ ],
442
+ "q01": [
443
+ -0.09220654994249344,
444
+ -0.06456145539879798,
445
+ -0.049121275544166565,
446
+ -0.11594625547528267,
447
+ -0.14152548640966414,
448
+ -0.2251061636209488,
449
+ 0.0
450
+ ],
451
+ "q99": [
452
+ 0.07628866866230968,
453
+ 0.058019736707210584,
454
+ 0.052540797740221024,
455
+ 0.11740604028105736,
456
+ 0.11703975558280955,
457
+ 0.16729306846857078,
458
+ 1.0
459
+ ],
460
+ "std": [
461
+ 0.030533358454704285,
462
+ 0.023141471669077873,
463
+ 0.020642178133130074,
464
+ 0.041561778634786606,
465
+ 0.046429991722106934,
466
+ 0.07697731256484985,
467
+ 0.3610878884792328
468
+ ]
469
+ },
470
+ "num_trajectories": 43264,
471
+ "num_transitions": 6015535,
472
+ "proprio": {
473
+ "max": [
474
+ 0.0,
475
+ 0.0,
476
+ 0.0,
477
+ 0.0,
478
+ 0.0,
479
+ 0.0,
480
+ 0.0
481
+ ],
482
+ "mean": [
483
+ 0.0,
484
+ 0.0,
485
+ 0.0,
486
+ 0.0,
487
+ 0.0,
488
+ 0.0,
489
+ 0.0
490
+ ],
491
+ "min": [
492
+ 0.0,
493
+ 0.0,
494
+ 0.0,
495
+ 0.0,
496
+ 0.0,
497
+ 0.0,
498
+ 0.0
499
+ ],
500
+ "q01": [
501
+ 0.0,
502
+ 0.0,
503
+ 0.0,
504
+ 0.0,
505
+ 0.0,
506
+ 0.0,
507
+ 0.0
508
+ ],
509
+ "q99": [
510
+ 0.0,
511
+ 0.0,
512
+ 0.0,
513
+ 0.0,
514
+ 0.0,
515
+ 0.0,
516
+ 0.0
517
+ ],
518
+ "std": [
519
+ 0.0,
520
+ 0.0,
521
+ 0.0,
522
+ 0.0,
523
+ 0.0,
524
+ 0.0,
525
+ 0.0
526
+ ]
527
+ }
528
+ },
529
+ "berkeley_autolab_ur5": {
530
+ "action": {
531
+ "mask": [
532
+ true,
533
+ true,
534
+ true,
535
+ true,
536
+ true,
537
+ true,
538
+ false
539
+ ],
540
+ "max": [
541
+ 0.019999999552965164,
542
+ 0.019999999552965164,
543
+ 0.019999999552965164,
544
+ 0.06666667014360428,
545
+ 0.06666667014360428,
546
+ 0.06666667014360428,
547
+ 1.0
548
+ ],
549
+ "mean": [
550
+ 0.000568361661862582,
551
+ 0.0012176960008218884,
552
+ -0.0005296391318552196,
553
+ 0.00021029781782999635,
554
+ 6.069496521377005e-05,
555
+ 0.0012049865908920765,
556
+ 0.6298308372497559
557
+ ],
558
+ "min": [
559
+ -0.019999999552965164,
560
+ -0.019999999552965164,
561
+ -0.019999999552965164,
562
+ -0.06666667014360428,
563
+ -0.06666667014360428,
564
+ -0.06666667014360428,
565
+ 0.0
566
+ ],
567
+ "q01": [
568
+ -0.019999999552965164,
569
+ -0.019999999552965164,
570
+ -0.019999999552965164,
571
+ -0.02628571353852749,
572
+ -0.06666667014360428,
573
+ -0.03847619146108627,
574
+ 0.0
575
+ ],
576
+ "q99": [
577
+ 0.019999999552965164,
578
+ 0.019999999552965164,
579
+ 0.019999999552965164,
580
+ 0.031809523701667786,
581
+ 0.06666667014360428,
582
+ 0.036571428179740906,
583
+ 1.0
584
+ ],
585
+ "std": [
586
+ 0.011533070355653763,
587
+ 0.007990499958395958,
588
+ 0.009577802382409573,
589
+ 0.009432998485863209,
590
+ 0.016427574679255486,
591
+ 0.0110540222376585,
592
+ 0.4826793968677521
593
+ ]
594
+ },
595
+ "num_trajectories": 1000,
596
+ "num_transitions": 97939,
597
+ "proprio": {
598
+ "max": [
599
+ 0.0,
600
+ 0.0,
601
+ 0.0,
602
+ 0.0,
603
+ 0.0,
604
+ 0.0,
605
+ 0.0
606
+ ],
607
+ "mean": [
608
+ 0.0,
609
+ 0.0,
610
+ 0.0,
611
+ 0.0,
612
+ 0.0,
613
+ 0.0,
614
+ 0.0
615
+ ],
616
+ "min": [
617
+ 0.0,
618
+ 0.0,
619
+ 0.0,
620
+ 0.0,
621
+ 0.0,
622
+ 0.0,
623
+ 0.0
624
+ ],
625
+ "q01": [
626
+ 0.0,
627
+ 0.0,
628
+ 0.0,
629
+ 0.0,
630
+ 0.0,
631
+ 0.0,
632
+ 0.0
633
+ ],
634
+ "q99": [
635
+ 0.0,
636
+ 0.0,
637
+ 0.0,
638
+ 0.0,
639
+ 0.0,
640
+ 0.0,
641
+ 0.0
642
+ ],
643
+ "std": [
644
+ 0.0,
645
+ 0.0,
646
+ 0.0,
647
+ 0.0,
648
+ 0.0,
649
+ 0.0,
650
+ 0.0
651
+ ]
652
+ }
653
+ },
654
+ "berkeley_cable_routing": {
655
+ "action": {
656
+ "mask": [
657
+ true,
658
+ true,
659
+ true,
660
+ true,
661
+ true,
662
+ true,
663
+ false
664
+ ],
665
+ "max": [
666
+ 0.9633283019065857,
667
+ 1.0,
668
+ 1.0,
669
+ 0.0,
670
+ 0.0,
671
+ 1.0,
672
+ 0.0
673
+ ],
674
+ "mean": [
675
+ -0.07139880955219269,
676
+ 0.023608988150954247,
677
+ 0.10241948068141937,
678
+ 0.0,
679
+ 0.0,
680
+ 0.0496709831058979,
681
+ 0.0
682
+ ],
683
+ "min": [
684
+ -0.9809081554412842,
685
+ -0.9554349184036255,
686
+ -0.9994775056838989,
687
+ 0.0,
688
+ 0.0,
689
+ -1.0,
690
+ 0.0
691
+ ],
692
+ "q01": [
693
+ -0.5534318816661835,
694
+ -0.4797285574674606,
695
+ -0.5314934802055359,
696
+ 0.0,
697
+ 0.0,
698
+ -0.8855219376087189,
699
+ 0.0
700
+ ],
701
+ "q99": [
702
+ 0.42652835428714786,
703
+ 0.5000944086909298,
704
+ 0.639823433756829,
705
+ 0.0,
706
+ 0.0,
707
+ 0.984243879914284,
708
+ 0.0
709
+ ],
710
+ "std": [
711
+ 0.1815498173236847,
712
+ 0.18109899759292603,
713
+ 0.2122078835964203,
714
+ 0.0,
715
+ 0.0,
716
+ 0.3475511968135834,
717
+ 0.0
718
+ ]
719
+ },
720
+ "num_trajectories": 1647,
721
+ "num_transitions": 42328,
722
+ "proprio": {
723
+ "max": [
724
+ 0.0,
725
+ 0.0,
726
+ 0.0,
727
+ 0.0,
728
+ 0.0,
729
+ 0.0,
730
+ 0.0
731
+ ],
732
+ "mean": [
733
+ 0.0,
734
+ 0.0,
735
+ 0.0,
736
+ 0.0,
737
+ 0.0,
738
+ 0.0,
739
+ 0.0
740
+ ],
741
+ "min": [
742
+ 0.0,
743
+ 0.0,
744
+ 0.0,
745
+ 0.0,
746
+ 0.0,
747
+ 0.0,
748
+ 0.0
749
+ ],
750
+ "q01": [
751
+ 0.0,
752
+ 0.0,
753
+ 0.0,
754
+ 0.0,
755
+ 0.0,
756
+ 0.0,
757
+ 0.0
758
+ ],
759
+ "q99": [
760
+ 0.0,
761
+ 0.0,
762
+ 0.0,
763
+ 0.0,
764
+ 0.0,
765
+ 0.0,
766
+ 0.0
767
+ ],
768
+ "std": [
769
+ 0.0,
770
+ 0.0,
771
+ 0.0,
772
+ 0.0,
773
+ 0.0,
774
+ 0.0,
775
+ 0.0
776
+ ]
777
+ }
778
+ },
779
+ "berkeley_fanuc_manipulation": {
780
+ "action": {
781
+ "mask": [
782
+ true,
783
+ true,
784
+ true,
785
+ true,
786
+ true,
787
+ true,
788
+ false
789
+ ],
790
+ "max": [
791
+ 0.009999999776482582,
792
+ 0.009999999776482582,
793
+ 0.009999999776482582,
794
+ 0.03490658476948738,
795
+ 0.03490658476948738,
796
+ 0.03490658476948738,
797
+ 1.0
798
+ ],
799
+ "mean": [
800
+ 0.0007744057802483439,
801
+ -0.00031240080716088414,
802
+ -0.0015001941937953234,
803
+ -0.0007515158504247665,
804
+ -0.00015832878125365824,
805
+ 0.00014327642566058785,
806
+ 0.699295699596405
807
+ ],
808
+ "min": [
809
+ -0.009999999776482582,
810
+ -0.009999999776482582,
811
+ -0.009999999776482582,
812
+ -0.03490658476948738,
813
+ -0.03490658476948738,
814
+ -0.03490658476948738,
815
+ 0.0
816
+ ],
817
+ "q01": [
818
+ -0.009999999776482582,
819
+ -0.009999999776482582,
820
+ -0.009999999776482582,
821
+ -0.03490658476948738,
822
+ 0.0,
823
+ -0.03490658476948738,
824
+ 0.0
825
+ ],
826
+ "q99": [
827
+ 0.009999999776482582,
828
+ 0.009999999776482582,
829
+ 0.009999999776482582,
830
+ 0.03490658476948738,
831
+ 0.0,
832
+ 0.03490658476948738,
833
+ 1.0
834
+ ],
835
+ "std": [
836
+ 0.003407012205570936,
837
+ 0.00499218562617898,
838
+ 0.0053443326614797115,
839
+ 0.00759896170347929,
840
+ 0.004081868566572666,
841
+ 0.008568967692553997,
842
+ 0.4586922824382782
843
+ ]
844
+ },
845
+ "num_trajectories": 415,
846
+ "num_transitions": 62613,
847
+ "proprio": {
848
+ "max": [
849
+ 0.0,
850
+ 0.0,
851
+ 0.0,
852
+ 0.0,
853
+ 0.0,
854
+ 0.0,
855
+ 0.0
856
+ ],
857
+ "mean": [
858
+ 0.0,
859
+ 0.0,
860
+ 0.0,
861
+ 0.0,
862
+ 0.0,
863
+ 0.0,
864
+ 0.0
865
+ ],
866
+ "min": [
867
+ 0.0,
868
+ 0.0,
869
+ 0.0,
870
+ 0.0,
871
+ 0.0,
872
+ 0.0,
873
+ 0.0
874
+ ],
875
+ "q01": [
876
+ 0.0,
877
+ 0.0,
878
+ 0.0,
879
+ 0.0,
880
+ 0.0,
881
+ 0.0,
882
+ 0.0
883
+ ],
884
+ "q99": [
885
+ 0.0,
886
+ 0.0,
887
+ 0.0,
888
+ 0.0,
889
+ 0.0,
890
+ 0.0,
891
+ 0.0
892
+ ],
893
+ "std": [
894
+ 0.0,
895
+ 0.0,
896
+ 0.0,
897
+ 0.0,
898
+ 0.0,
899
+ 0.0,
900
+ 0.0
901
+ ]
902
+ }
903
+ },
904
+ "bridge_orig": {
905
+ "action": {
906
+ "mask": [
907
+ true,
908
+ true,
909
+ true,
910
+ true,
911
+ true,
912
+ true,
913
+ false
914
+ ],
915
+ "max": [
916
+ 0.41691166162490845,
917
+ 0.25864794850349426,
918
+ 0.21218234300613403,
919
+ 3.122201919555664,
920
+ 1.8618112802505493,
921
+ 6.280478477478027,
922
+ 1.0
923
+ ],
924
+ "mean": [
925
+ 0.00023342043277807534,
926
+ 0.0001300483418162912,
927
+ -0.00012762520054820925,
928
+ -0.00015565576904918998,
929
+ -0.0004039337218273431,
930
+ 0.00023557580425404012,
931
+ 0.5764579772949219
932
+ ],
933
+ "min": [
934
+ -0.4007510244846344,
935
+ -0.13874775171279907,
936
+ -0.22553899884223938,
937
+ -3.2010786533355713,
938
+ -1.8618112802505493,
939
+ -6.279075622558594,
940
+ 0.0
941
+ ],
942
+ "q01": [
943
+ -0.02872725307941437,
944
+ -0.04170349963009357,
945
+ -0.026093858778476715,
946
+ -0.08092105075716972,
947
+ -0.09288699507713317,
948
+ -0.20718276381492615,
949
+ 0.0
950
+ ],
951
+ "q99": [
952
+ 0.028309678435325586,
953
+ 0.040855254605412394,
954
+ 0.040161586627364146,
955
+ 0.08192047759890528,
956
+ 0.07792850524187081,
957
+ 0.20382574498653397,
958
+ 1.0
959
+ ],
960
+ "std": [
961
+ 0.009765931405127048,
962
+ 0.01368919387459755,
963
+ 0.012667370028793812,
964
+ 0.02853415347635746,
965
+ 0.030638061463832855,
966
+ 0.07691339403390884,
967
+ 0.49737080931663513
968
+ ]
969
+ },
970
+ "num_trajectories": 60064,
971
+ "num_transitions": 2135463,
972
+ "proprio": {
973
+ "max": [
974
+ 0.0,
975
+ 0.0,
976
+ 0.0,
977
+ 0.0,
978
+ 0.0,
979
+ 0.0,
980
+ 0.0
981
+ ],
982
+ "mean": [
983
+ 0.0,
984
+ 0.0,
985
+ 0.0,
986
+ 0.0,
987
+ 0.0,
988
+ 0.0,
989
+ 0.0
990
+ ],
991
+ "min": [
992
+ 0.0,
993
+ 0.0,
994
+ 0.0,
995
+ 0.0,
996
+ 0.0,
997
+ 0.0,
998
+ 0.0
999
+ ],
1000
+ "q01": [
1001
+ 0.0,
1002
+ 0.0,
1003
+ 0.0,
1004
+ 0.0,
1005
+ 0.0,
1006
+ 0.0,
1007
+ 0.0
1008
+ ],
1009
+ "q99": [
1010
+ 0.0,
1011
+ 0.0,
1012
+ 0.0,
1013
+ 0.0,
1014
+ 0.0,
1015
+ 0.0,
1016
+ 0.0
1017
+ ],
1018
+ "std": [
1019
+ 0.0,
1020
+ 0.0,
1021
+ 0.0,
1022
+ 0.0,
1023
+ 0.0,
1024
+ 0.0,
1025
+ 0.0
1026
+ ]
1027
+ }
1028
+ },
1029
+ "cmu_stretch": {
1030
+ "action": {
1031
+ "mask": [
1032
+ true,
1033
+ true,
1034
+ true,
1035
+ true,
1036
+ true,
1037
+ true,
1038
+ false
1039
+ ],
1040
+ "max": [
1041
+ 0.02338407188653946,
1042
+ 0.0,
1043
+ 0.023404927924275398,
1044
+ 0.0,
1045
+ 0.0,
1046
+ 0.0,
1047
+ 1.0
1048
+ ],
1049
+ "mean": [
1050
+ 0.00036304566310718656,
1051
+ 0.0,
1052
+ 0.001646695309318602,
1053
+ 0.0,
1054
+ 0.0,
1055
+ 0.0,
1056
+ 0.3987048268318176
1057
+ ],
1058
+ "min": [
1059
+ -0.019353797659277916,
1060
+ 0.0,
1061
+ -0.02019215188920498,
1062
+ 0.0,
1063
+ 0.0,
1064
+ 0.0,
1065
+ 0.0
1066
+ ],
1067
+ "q01": [
1068
+ -0.011175686959177256,
1069
+ 0.0,
1070
+ -0.0032206363626755773,
1071
+ 0.0,
1072
+ 0.0,
1073
+ 0.0,
1074
+ 0.0
1075
+ ],
1076
+ "q99": [
1077
+ 0.014501785952597848,
1078
+ 0.0,
1079
+ 0.015056106168776728,
1080
+ 0.0,
1081
+ 0.0,
1082
+ 0.0,
1083
+ 1.0
1084
+ ],
1085
+ "std": [
1086
+ 0.004081828519701958,
1087
+ 0.0,
1088
+ 0.003774335840716958,
1089
+ 0.0,
1090
+ 0.0,
1091
+ 0.0,
1092
+ 0.48963701725006104
1093
+ ]
1094
+ },
1095
+ "num_trajectories": 135,
1096
+ "num_transitions": 25016,
1097
+ "proprio": {
1098
+ "max": [
1099
+ 0.0,
1100
+ 0.0,
1101
+ 0.0,
1102
+ 0.0,
1103
+ 0.0,
1104
+ 0.0,
1105
+ 0.0
1106
+ ],
1107
+ "mean": [
1108
+ 0.0,
1109
+ 0.0,
1110
+ 0.0,
1111
+ 0.0,
1112
+ 0.0,
1113
+ 0.0,
1114
+ 0.0
1115
+ ],
1116
+ "min": [
1117
+ 0.0,
1118
+ 0.0,
1119
+ 0.0,
1120
+ 0.0,
1121
+ 0.0,
1122
+ 0.0,
1123
+ 0.0
1124
+ ],
1125
+ "q01": [
1126
+ 0.0,
1127
+ 0.0,
1128
+ 0.0,
1129
+ 0.0,
1130
+ 0.0,
1131
+ 0.0,
1132
+ 0.0
1133
+ ],
1134
+ "q99": [
1135
+ 0.0,
1136
+ 0.0,
1137
+ 0.0,
1138
+ 0.0,
1139
+ 0.0,
1140
+ 0.0,
1141
+ 0.0
1142
+ ],
1143
+ "std": [
1144
+ 0.0,
1145
+ 0.0,
1146
+ 0.0,
1147
+ 0.0,
1148
+ 0.0,
1149
+ 0.0,
1150
+ 0.0
1151
+ ]
1152
+ }
1153
+ },
1154
+ "dlr_edan_shared_control_converted_externally_to_rlds": {
1155
+ "action": {
1156
+ "mask": [
1157
+ true,
1158
+ true,
1159
+ true,
1160
+ true,
1161
+ true,
1162
+ true,
1163
+ false
1164
+ ],
1165
+ "max": [
1166
+ 0.18991442024707794,
1167
+ 0.0739002525806427,
1168
+ 0.18064819276332855,
1169
+ 0.0866486132144928,
1170
+ 0.13464981317520142,
1171
+ 0.16910280287265778,
1172
+ 1.0
1173
+ ],
1174
+ "mean": [
1175
+ 0.006647810805588961,
1176
+ -0.0007657366222701967,
1177
+ 0.006522853393107653,
1178
+ 0.0011679711751639843,
1179
+ -0.0063956258818507195,
1180
+ -0.011902999132871628,
1181
+ 0.6985887289047241
1182
+ ],
1183
+ "min": [
1184
+ -0.10054297000169754,
1185
+ -0.08427435159683228,
1186
+ -0.13533438742160797,
1187
+ -0.17556548118591309,
1188
+ -0.18485672771930695,
1189
+ -0.2680685818195343,
1190
+ 0.0
1191
+ ],
1192
+ "q01": [
1193
+ -0.02987122368067503,
1194
+ -0.06013262912631035,
1195
+ -0.08286409199237824,
1196
+ -0.05924444157630205,
1197
+ -0.15986866518855095,
1198
+ -0.15636983573436739,
1199
+ 0.0
1200
+ ],
1201
+ "q99": [
1202
+ 0.08832092039287087,
1203
+ 0.042126184627413736,
1204
+ 0.11311905644834042,
1205
+ 0.0643695573508739,
1206
+ 0.03941855944693088,
1207
+ 0.156646853685379,
1208
+ 1.0
1209
+ ],
1210
+ "std": [
1211
+ 0.02139361761510372,
1212
+ 0.01814231649041176,
1213
+ 0.03374375030398369,
1214
+ 0.01743542030453682,
1215
+ 0.03394376486539841,
1216
+ 0.04641873762011528,
1217
+ 0.4588589370250702
1218
+ ]
1219
+ },
1220
+ "num_trajectories": 104,
1221
+ "num_transitions": 8928,
1222
+ "proprio": {
1223
+ "max": [
1224
+ 0.0,
1225
+ 0.0,
1226
+ 0.0,
1227
+ 0.0,
1228
+ 0.0,
1229
+ 0.0,
1230
+ 0.0
1231
+ ],
1232
+ "mean": [
1233
+ 0.0,
1234
+ 0.0,
1235
+ 0.0,
1236
+ 0.0,
1237
+ 0.0,
1238
+ 0.0,
1239
+ 0.0
1240
+ ],
1241
+ "min": [
1242
+ 0.0,
1243
+ 0.0,
1244
+ 0.0,
1245
+ 0.0,
1246
+ 0.0,
1247
+ 0.0,
1248
+ 0.0
1249
+ ],
1250
+ "q01": [
1251
+ 0.0,
1252
+ 0.0,
1253
+ 0.0,
1254
+ 0.0,
1255
+ 0.0,
1256
+ 0.0,
1257
+ 0.0
1258
+ ],
1259
+ "q99": [
1260
+ 0.0,
1261
+ 0.0,
1262
+ 0.0,
1263
+ 0.0,
1264
+ 0.0,
1265
+ 0.0,
1266
+ 0.0
1267
+ ],
1268
+ "std": [
1269
+ 0.0,
1270
+ 0.0,
1271
+ 0.0,
1272
+ 0.0,
1273
+ 0.0,
1274
+ 0.0,
1275
+ 0.0
1276
+ ]
1277
+ }
1278
+ },
1279
+ "dobbe": {
1280
+ "action": {
1281
+ "mask": [
1282
+ true,
1283
+ true,
1284
+ true,
1285
+ true,
1286
+ true,
1287
+ true,
1288
+ false
1289
+ ],
1290
+ "max": [
1291
+ 38.590423583984375,
1292
+ 17.932697296142578,
1293
+ 4.843764305114746,
1294
+ 1.4372116327285767,
1295
+ 0.4340403974056244,
1296
+ 1.2057193517684937,
1297
+ 0.9998947381973267
1298
+ ],
1299
+ "mean": [
1300
+ -0.0001120688539231196,
1301
+ 0.0011229675728827715,
1302
+ -0.00010193953494308516,
1303
+ -7.371311221504584e-05,
1304
+ -0.0006753370980732143,
1305
+ -5.664395575877279e-05,
1306
+ 0.6318628191947937
1307
+ ],
1308
+ "min": [
1309
+ -5.700923442840576,
1310
+ -21.605947494506836,
1311
+ -123.72489929199219,
1312
+ -1.7229845523834229,
1313
+ -0.4998578727245331,
1314
+ -0.8867913484573364,
1315
+ 1.4196479014572105e-06
1316
+ ],
1317
+ "q01": [
1318
+ -0.01119564864784479,
1319
+ -0.014266146533191203,
1320
+ -0.0071747214533388615,
1321
+ -0.009444301575422287,
1322
+ -0.03990109823644161,
1323
+ -0.017422311007976532,
1324
+ 4.003279136668425e-05
1325
+ ],
1326
+ "q99": [
1327
+ 0.01015154086053368,
1328
+ 0.017181577533483497,
1329
+ 0.007216989761218411,
1330
+ 0.010380979906767595,
1331
+ 0.03556173853576176,
1332
+ 0.018032474815845446,
1333
+ 0.9982578039169312
1334
+ ],
1335
+ "std": [
1336
+ 0.04266023263335228,
1337
+ 0.04428130388259888,
1338
+ 0.12224864959716797,
1339
+ 0.005388160236179829,
1340
+ 0.011246981099247932,
1341
+ 0.0062882062047719955,
1342
+ 0.3973258137702942
1343
+ ]
1344
+ },
1345
+ "num_trajectories": 5208,
1346
+ "num_transitions": 1139911,
1347
+ "proprio": {
1348
+ "max": [
1349
+ 0.0,
1350
+ 0.0,
1351
+ 0.0,
1352
+ 0.0,
1353
+ 0.0,
1354
+ 0.0,
1355
+ 0.0
1356
+ ],
1357
+ "mean": [
1358
+ 0.0,
1359
+ 0.0,
1360
+ 0.0,
1361
+ 0.0,
1362
+ 0.0,
1363
+ 0.0,
1364
+ 0.0
1365
+ ],
1366
+ "min": [
1367
+ 0.0,
1368
+ 0.0,
1369
+ 0.0,
1370
+ 0.0,
1371
+ 0.0,
1372
+ 0.0,
1373
+ 0.0
1374
+ ],
1375
+ "q01": [
1376
+ 0.0,
1377
+ 0.0,
1378
+ 0.0,
1379
+ 0.0,
1380
+ 0.0,
1381
+ 0.0,
1382
+ 0.0
1383
+ ],
1384
+ "q99": [
1385
+ 0.0,
1386
+ 0.0,
1387
+ 0.0,
1388
+ 0.0,
1389
+ 0.0,
1390
+ 0.0,
1391
+ 0.0
1392
+ ],
1393
+ "std": [
1394
+ 0.0,
1395
+ 0.0,
1396
+ 0.0,
1397
+ 0.0,
1398
+ 0.0,
1399
+ 0.0,
1400
+ 0.0
1401
+ ]
1402
+ }
1403
+ },
1404
+ "fmb_dataset": {
1405
+ "action": {
1406
+ "mask": [
1407
+ true,
1408
+ true,
1409
+ true,
1410
+ true,
1411
+ true,
1412
+ true,
1413
+ false
1414
+ ],
1415
+ "max": [
1416
+ 1.399999976158142,
1417
+ 1.0,
1418
+ 1.399999976158142,
1419
+ 1.0,
1420
+ 1.0,
1421
+ 1.0,
1422
+ 1.0
1423
+ ],
1424
+ "mean": [
1425
+ 0.05903266370296478,
1426
+ -0.06476021558046341,
1427
+ -0.0978814959526062,
1428
+ 0.0043251593597233295,
1429
+ 0.00029058579821139574,
1430
+ -0.044575780630111694,
1431
+ 0.7336407899856567
1432
+ ],
1433
+ "min": [
1434
+ -1.399999976158142,
1435
+ -1.399999976158142,
1436
+ -1.0,
1437
+ -1.0,
1438
+ -1.0,
1439
+ -1.0,
1440
+ 0.0
1441
+ ],
1442
+ "q01": [
1443
+ -0.8257142901420593,
1444
+ -1.399999976158142,
1445
+ -1.0,
1446
+ -1.0,
1447
+ -0.3028571307659149,
1448
+ -1.0,
1449
+ 0.0
1450
+ ],
1451
+ "q99": [
1452
+ 1.0,
1453
+ 0.5257142782211304,
1454
+ 1.0,
1455
+ 1.0,
1456
+ 0.3400000035762787,
1457
+ 1.0,
1458
+ 1.0
1459
+ ],
1460
+ "std": [
1461
+ 0.28808653354644775,
1462
+ 0.28204405307769775,
1463
+ 0.46267420053482056,
1464
+ 0.3266729414463043,
1465
+ 0.10843165963888168,
1466
+ 0.34402996301651,
1467
+ 0.4435197114944458
1468
+ ]
1469
+ },
1470
+ "num_trajectories": 8611,
1471
+ "num_transitions": 1137340,
1472
+ "proprio": {
1473
+ "max": [
1474
+ 0.0,
1475
+ 0.0,
1476
+ 0.0,
1477
+ 0.0,
1478
+ 0.0,
1479
+ 0.0,
1480
+ 0.0
1481
+ ],
1482
+ "mean": [
1483
+ 0.0,
1484
+ 0.0,
1485
+ 0.0,
1486
+ 0.0,
1487
+ 0.0,
1488
+ 0.0,
1489
+ 0.0
1490
+ ],
1491
+ "min": [
1492
+ 0.0,
1493
+ 0.0,
1494
+ 0.0,
1495
+ 0.0,
1496
+ 0.0,
1497
+ 0.0,
1498
+ 0.0
1499
+ ],
1500
+ "q01": [
1501
+ 0.0,
1502
+ 0.0,
1503
+ 0.0,
1504
+ 0.0,
1505
+ 0.0,
1506
+ 0.0,
1507
+ 0.0
1508
+ ],
1509
+ "q99": [
1510
+ 0.0,
1511
+ 0.0,
1512
+ 0.0,
1513
+ 0.0,
1514
+ 0.0,
1515
+ 0.0,
1516
+ 0.0
1517
+ ],
1518
+ "std": [
1519
+ 0.0,
1520
+ 0.0,
1521
+ 0.0,
1522
+ 0.0,
1523
+ 0.0,
1524
+ 0.0,
1525
+ 0.0
1526
+ ]
1527
+ }
1528
+ },
1529
+ "fractal20220817_data": {
1530
+ "action": {
1531
+ "mask": [
1532
+ true,
1533
+ true,
1534
+ true,
1535
+ true,
1536
+ true,
1537
+ true,
1538
+ false
1539
+ ],
1540
+ "max": [
1541
+ 2.9984593391418457,
1542
+ 22.09052848815918,
1543
+ 2.7507524490356445,
1544
+ 1.570636510848999,
1545
+ 1.5321086645126343,
1546
+ 1.5691522359848022,
1547
+ 1.0
1548
+ ],
1549
+ "mean": [
1550
+ 0.006987573113292456,
1551
+ 0.006265930365771055,
1552
+ -0.012625149451196194,
1553
+ 0.04333356022834778,
1554
+ -0.005756193306297064,
1555
+ 0.0009130323305726051,
1556
+ 0.5354204773902893
1557
+ ],
1558
+ "min": [
1559
+ -2.0204520225524902,
1560
+ -5.497899532318115,
1561
+ -2.031663417816162,
1562
+ -1.569917917251587,
1563
+ -1.569892168045044,
1564
+ -1.570419430732727,
1565
+ 0.0
1566
+ ],
1567
+ "q01": [
1568
+ -0.22453527510166169,
1569
+ -0.14820013284683228,
1570
+ -0.231589707583189,
1571
+ -0.3517994859814644,
1572
+ -0.4193011274933815,
1573
+ -0.43643461108207704,
1574
+ 0.0
1575
+ ],
1576
+ "q99": [
1577
+ 0.17824687153100965,
1578
+ 0.14938379630446405,
1579
+ 0.21842354819178575,
1580
+ 0.5892666035890578,
1581
+ 0.35272657424211445,
1582
+ 0.44796681255102094,
1583
+ 1.0
1584
+ ],
1585
+ "std": [
1586
+ 0.06921130418777466,
1587
+ 0.05970517918467522,
1588
+ 0.07353121042251587,
1589
+ 0.15610435605049133,
1590
+ 0.13164447247982025,
1591
+ 0.14593809843063354,
1592
+ 0.49711260199546814
1593
+ ]
1594
+ },
1595
+ "num_trajectories": 87212,
1596
+ "num_transitions": 3786400,
1597
+ "proprio": {
1598
+ "max": [
1599
+ 0.0,
1600
+ 0.0,
1601
+ 0.0,
1602
+ 0.0,
1603
+ 0.0,
1604
+ 0.0,
1605
+ 0.0
1606
+ ],
1607
+ "mean": [
1608
+ 0.0,
1609
+ 0.0,
1610
+ 0.0,
1611
+ 0.0,
1612
+ 0.0,
1613
+ 0.0,
1614
+ 0.0
1615
+ ],
1616
+ "min": [
1617
+ 0.0,
1618
+ 0.0,
1619
+ 0.0,
1620
+ 0.0,
1621
+ 0.0,
1622
+ 0.0,
1623
+ 0.0
1624
+ ],
1625
+ "q01": [
1626
+ 0.0,
1627
+ 0.0,
1628
+ 0.0,
1629
+ 0.0,
1630
+ 0.0,
1631
+ 0.0,
1632
+ 0.0
1633
+ ],
1634
+ "q99": [
1635
+ 0.0,
1636
+ 0.0,
1637
+ 0.0,
1638
+ 0.0,
1639
+ 0.0,
1640
+ 0.0,
1641
+ 0.0
1642
+ ],
1643
+ "std": [
1644
+ 0.0,
1645
+ 0.0,
1646
+ 0.0,
1647
+ 0.0,
1648
+ 0.0,
1649
+ 0.0,
1650
+ 0.0
1651
+ ]
1652
+ }
1653
+ },
1654
+ "furniture_bench_dataset_converted_externally_to_rlds": {
1655
+ "action": {
1656
+ "mask": [
1657
+ true,
1658
+ true,
1659
+ true,
1660
+ true,
1661
+ true,
1662
+ true,
1663
+ false
1664
+ ],
1665
+ "max": [
1666
+ 0.10000000149011612,
1667
+ 0.10000000149011612,
1668
+ 0.10000000149011612,
1669
+ 0.8651833534240723,
1670
+ 1.0909736156463623,
1671
+ 2.863185405731201,
1672
+ 1.0
1673
+ ],
1674
+ "mean": [
1675
+ 0.0001461072388337925,
1676
+ 0.0010830991668626666,
1677
+ 0.0006224962417036295,
1678
+ -0.0033032018691301346,
1679
+ -0.002688060747459531,
1680
+ 0.01824265345931053,
1681
+ 0.48854944109916687
1682
+ ],
1683
+ "min": [
1684
+ -0.10495579987764359,
1685
+ -0.10939455777406693,
1686
+ -0.10000000149011612,
1687
+ -0.971906840801239,
1688
+ -1.0475432872772217,
1689
+ -3.06000018119812,
1690
+ 0.0
1691
+ ],
1692
+ "q01": [
1693
+ -0.053988199681043625,
1694
+ -0.05049169331789017,
1695
+ -0.032499241530895236,
1696
+ -0.1953887003660202,
1697
+ -0.41674559473991396,
1698
+ -0.8886768388748169,
1699
+ 0.0
1700
+ ],
1701
+ "q99": [
1702
+ 0.05414841488003723,
1703
+ 0.04965164884924884,
1704
+ 0.060055799782276154,
1705
+ 0.18231668293476103,
1706
+ 0.39867786407470646,
1707
+ 0.8772023963928218,
1708
+ 1.0
1709
+ ],
1710
+ "std": [
1711
+ 0.016107235103845596,
1712
+ 0.014891562052071095,
1713
+ 0.014014234766364098,
1714
+ 0.058274269104003906,
1715
+ 0.1141708493232727,
1716
+ 0.33479613065719604,
1717
+ 0.4999157190322876
1718
+ ]
1719
+ },
1720
+ "num_trajectories": 5100,
1721
+ "num_transitions": 3948057,
1722
+ "proprio": {
1723
+ "max": [
1724
+ 0.0,
1725
+ 0.0,
1726
+ 0.0,
1727
+ 0.0,
1728
+ 0.0,
1729
+ 0.0,
1730
+ 0.0
1731
+ ],
1732
+ "mean": [
1733
+ 0.0,
1734
+ 0.0,
1735
+ 0.0,
1736
+ 0.0,
1737
+ 0.0,
1738
+ 0.0,
1739
+ 0.0
1740
+ ],
1741
+ "min": [
1742
+ 0.0,
1743
+ 0.0,
1744
+ 0.0,
1745
+ 0.0,
1746
+ 0.0,
1747
+ 0.0,
1748
+ 0.0
1749
+ ],
1750
+ "q01": [
1751
+ 0.0,
1752
+ 0.0,
1753
+ 0.0,
1754
+ 0.0,
1755
+ 0.0,
1756
+ 0.0,
1757
+ 0.0
1758
+ ],
1759
+ "q99": [
1760
+ 0.0,
1761
+ 0.0,
1762
+ 0.0,
1763
+ 0.0,
1764
+ 0.0,
1765
+ 0.0,
1766
+ 0.0
1767
+ ],
1768
+ "std": [
1769
+ 0.0,
1770
+ 0.0,
1771
+ 0.0,
1772
+ 0.0,
1773
+ 0.0,
1774
+ 0.0,
1775
+ 0.0
1776
+ ]
1777
+ }
1778
+ },
1779
+ "iamlab_cmu_pickup_insert_converted_externally_to_rlds": {
1780
+ "action": {
1781
+ "mask": [
1782
+ true,
1783
+ true,
1784
+ true,
1785
+ true,
1786
+ true,
1787
+ true,
1788
+ false
1789
+ ],
1790
+ "max": [
1791
+ 0.6634981632232666,
1792
+ 0.23428471386432648,
1793
+ 0.4308285415172577,
1794
+ 3.1415927410125732,
1795
+ 0.13647015392780304,
1796
+ 3.141592502593994,
1797
+ 1.0
1798
+ ],
1799
+ "mean": [
1800
+ 0.5274373292922974,
1801
+ 0.028582019731402397,
1802
+ 0.18712475895881653,
1803
+ 1.2339574098587036,
1804
+ 0.03226623311638832,
1805
+ -1.4199477434158325,
1806
+ 0.5550631880760193
1807
+ ],
1808
+ "min": [
1809
+ 0.3071657121181488,
1810
+ -0.29754969477653503,
1811
+ 0.06578229367733002,
1812
+ -3.1415927410125732,
1813
+ -0.04584203287959099,
1814
+ -3.141592502593994,
1815
+ 0.0
1816
+ ],
1817
+ "q01": [
1818
+ 0.3148897051811218,
1819
+ -0.20317550599575043,
1820
+ 0.06785467118024827,
1821
+ -3.140952730178833,
1822
+ -0.029743434861302376,
1823
+ -3.141091251373291,
1824
+ 0.0
1825
+ ],
1826
+ "q99": [
1827
+ 0.6472805738449097,
1828
+ 0.20846802592277527,
1829
+ 0.36855655312538155,
1830
+ 3.1409926891326903,
1831
+ 0.11424950212240226,
1832
+ 3.1410969257354737,
1833
+ 1.0
1834
+ ],
1835
+ "std": [
1836
+ 0.08108346909284592,
1837
+ 0.1116756796836853,
1838
+ 0.07747554779052734,
1839
+ 2.8737246990203857,
1840
+ 0.02774704247713089,
1841
+ 2.7678680419921875,
1842
+ 0.49695101380348206
1843
+ ]
1844
+ },
1845
+ "num_trajectories": 631,
1846
+ "num_transitions": 146241,
1847
+ "proprio": {
1848
+ "max": [
1849
+ 0.0,
1850
+ 0.0,
1851
+ 0.0,
1852
+ 0.0,
1853
+ 0.0,
1854
+ 0.0,
1855
+ 0.0
1856
+ ],
1857
+ "mean": [
1858
+ 0.0,
1859
+ 0.0,
1860
+ 0.0,
1861
+ 0.0,
1862
+ 0.0,
1863
+ 0.0,
1864
+ 0.0
1865
+ ],
1866
+ "min": [
1867
+ 0.0,
1868
+ 0.0,
1869
+ 0.0,
1870
+ 0.0,
1871
+ 0.0,
1872
+ 0.0,
1873
+ 0.0
1874
+ ],
1875
+ "q01": [
1876
+ 0.0,
1877
+ 0.0,
1878
+ 0.0,
1879
+ 0.0,
1880
+ 0.0,
1881
+ 0.0,
1882
+ 0.0
1883
+ ],
1884
+ "q99": [
1885
+ 0.0,
1886
+ 0.0,
1887
+ 0.0,
1888
+ 0.0,
1889
+ 0.0,
1890
+ 0.0,
1891
+ 0.0
1892
+ ],
1893
+ "std": [
1894
+ 0.0,
1895
+ 0.0,
1896
+ 0.0,
1897
+ 0.0,
1898
+ 0.0,
1899
+ 0.0,
1900
+ 0.0
1901
+ ]
1902
+ }
1903
+ },
1904
+ "jaco_play": {
1905
+ "action": {
1906
+ "mask": [
1907
+ true,
1908
+ true,
1909
+ true,
1910
+ true,
1911
+ true,
1912
+ true,
1913
+ false
1914
+ ],
1915
+ "max": [
1916
+ 0.20000000298023224,
1917
+ 0.20000000298023224,
1918
+ 0.20000000298023224,
1919
+ 0.0,
1920
+ 0.0,
1921
+ 0.0,
1922
+ 1.0
1923
+ ],
1924
+ "mean": [
1925
+ 0.0009658575872890651,
1926
+ -0.0058008055202662945,
1927
+ -0.003950489219278097,
1928
+ 0.0,
1929
+ 0.0,
1930
+ 0.0,
1931
+ 0.34934908151626587
1932
+ ],
1933
+ "min": [
1934
+ -0.20000000298023224,
1935
+ -0.20000000298023224,
1936
+ -0.20000000298023224,
1937
+ 0.0,
1938
+ 0.0,
1939
+ 0.0,
1940
+ 0.0
1941
+ ],
1942
+ "q01": [
1943
+ -0.20000000298023224,
1944
+ -0.20000000298023224,
1945
+ -0.20000000298023224,
1946
+ 0.0,
1947
+ 0.0,
1948
+ 0.0,
1949
+ 0.0
1950
+ ],
1951
+ "q99": [
1952
+ 0.20000000298023224,
1953
+ 0.20000000298023224,
1954
+ 0.20000000298023224,
1955
+ 0.0,
1956
+ 0.0,
1957
+ 0.0,
1958
+ 1.0
1959
+ ],
1960
+ "std": [
1961
+ 0.12235049903392792,
1962
+ 0.09678870439529419,
1963
+ 0.11155415326356888,
1964
+ 0.0,
1965
+ 0.0,
1966
+ 0.0,
1967
+ 0.47682517766952515
1968
+ ]
1969
+ },
1970
+ "num_trajectories": 1085,
1971
+ "num_transitions": 77965,
1972
+ "proprio": {
1973
+ "max": [
1974
+ 0.0,
1975
+ 0.0,
1976
+ 0.0,
1977
+ 0.0,
1978
+ 0.0,
1979
+ 0.0,
1980
+ 0.0
1981
+ ],
1982
+ "mean": [
1983
+ 0.0,
1984
+ 0.0,
1985
+ 0.0,
1986
+ 0.0,
1987
+ 0.0,
1988
+ 0.0,
1989
+ 0.0
1990
+ ],
1991
+ "min": [
1992
+ 0.0,
1993
+ 0.0,
1994
+ 0.0,
1995
+ 0.0,
1996
+ 0.0,
1997
+ 0.0,
1998
+ 0.0
1999
+ ],
2000
+ "q01": [
2001
+ 0.0,
2002
+ 0.0,
2003
+ 0.0,
2004
+ 0.0,
2005
+ 0.0,
2006
+ 0.0,
2007
+ 0.0
2008
+ ],
2009
+ "q99": [
2010
+ 0.0,
2011
+ 0.0,
2012
+ 0.0,
2013
+ 0.0,
2014
+ 0.0,
2015
+ 0.0,
2016
+ 0.0
2017
+ ],
2018
+ "std": [
2019
+ 0.0,
2020
+ 0.0,
2021
+ 0.0,
2022
+ 0.0,
2023
+ 0.0,
2024
+ 0.0,
2025
+ 0.0
2026
+ ]
2027
+ }
2028
+ },
2029
+ "kuka": {
2030
+ "action": {
2031
+ "mask": [
2032
+ true,
2033
+ true,
2034
+ true,
2035
+ true,
2036
+ true,
2037
+ true,
2038
+ false
2039
+ ],
2040
+ "max": [
2041
+ 0.1697135865688324,
2042
+ 0.2777623236179352,
2043
+ 0.43710532784461975,
2044
+ 0.0,
2045
+ 0.0,
2046
+ 1.9684287309646606,
2047
+ 1.0
2048
+ ],
2049
+ "mean": [
2050
+ -0.000466893776319921,
2051
+ 0.0004013827128801495,
2052
+ -0.0012807840248569846,
2053
+ 0.0,
2054
+ 0.0,
2055
+ -0.03722434118390083,
2056
+ 0.4131543040275574
2057
+ ],
2058
+ "min": [
2059
+ -0.159867063164711,
2060
+ -0.2892282009124756,
2061
+ -0.2795473635196686,
2062
+ 0.0,
2063
+ 0.0,
2064
+ -1.9875637292861938,
2065
+ 0.0
2066
+ ],
2067
+ "q01": [
2068
+ -0.06619441494345665,
2069
+ -0.08713878810405731,
2070
+ -0.15083016991615295,
2071
+ 0.0,
2072
+ 0.0,
2073
+ -0.5415697038173676,
2074
+ 0.0
2075
+ ],
2076
+ "q99": [
2077
+ 0.06601839080452929,
2078
+ 0.08732476785779003,
2079
+ 0.18168179214000715,
2080
+ 0.0,
2081
+ 0.0,
2082
+ 0.2923380345106127,
2083
+ 1.0
2084
+ ],
2085
+ "std": [
2086
+ 0.020832622423768044,
2087
+ 0.02915864996612072,
2088
+ 0.06422857940196991,
2089
+ 0.0,
2090
+ 0.0,
2091
+ 0.14224305748939514,
2092
+ 0.49086397886276245
2093
+ ]
2094
+ },
2095
+ "num_trajectories": 209880,
2096
+ "num_transitions": 2455879,
2097
+ "proprio": {
2098
+ "max": [
2099
+ 0.0,
2100
+ 0.0,
2101
+ 0.0,
2102
+ 0.0,
2103
+ 0.0,
2104
+ 0.0,
2105
+ 0.0
2106
+ ],
2107
+ "mean": [
2108
+ 0.0,
2109
+ 0.0,
2110
+ 0.0,
2111
+ 0.0,
2112
+ 0.0,
2113
+ 0.0,
2114
+ 0.0
2115
+ ],
2116
+ "min": [
2117
+ 0.0,
2118
+ 0.0,
2119
+ 0.0,
2120
+ 0.0,
2121
+ 0.0,
2122
+ 0.0,
2123
+ 0.0
2124
+ ],
2125
+ "q01": [
2126
+ 0.0,
2127
+ 0.0,
2128
+ 0.0,
2129
+ 0.0,
2130
+ 0.0,
2131
+ 0.0,
2132
+ 0.0
2133
+ ],
2134
+ "q99": [
2135
+ 0.0,
2136
+ 0.0,
2137
+ 0.0,
2138
+ 0.0,
2139
+ 0.0,
2140
+ 0.0,
2141
+ 0.0
2142
+ ],
2143
+ "std": [
2144
+ 0.0,
2145
+ 0.0,
2146
+ 0.0,
2147
+ 0.0,
2148
+ 0.0,
2149
+ 0.0,
2150
+ 0.0
2151
+ ]
2152
+ }
2153
+ },
2154
+ "nyu_franka_play_dataset_converted_externally_to_rlds": {
2155
+ "action": {
2156
+ "mask": [
2157
+ true,
2158
+ true,
2159
+ true,
2160
+ true,
2161
+ true,
2162
+ true,
2163
+ false
2164
+ ],
2165
+ "max": [
2166
+ 0.06424188613891602,
2167
+ 0.07027634978294373,
2168
+ 0.06129661202430725,
2169
+ 6.281067848205566,
2170
+ 0.1967729926109314,
2171
+ 0.26377415657043457,
2172
+ 1.0
2173
+ ],
2174
+ "mean": [
2175
+ 0.0010219914838671684,
2176
+ -0.00012002645235043019,
2177
+ 0.00032894135802052915,
2178
+ 0.001503427978605032,
2179
+ -0.002198529429733753,
2180
+ -0.0016632297774776816,
2181
+ 0.7230083346366882
2182
+ ],
2183
+ "min": [
2184
+ -0.05952230095863342,
2185
+ -0.07232445478439331,
2186
+ -0.06730806827545166,
2187
+ -6.278434753417969,
2188
+ -0.21479034423828125,
2189
+ -0.3627619743347168,
2190
+ 0.0
2191
+ ],
2192
+ "q01": [
2193
+ -0.03199600875377655,
2194
+ -0.032861671447753905,
2195
+ -0.03368805110454559,
2196
+ -0.12080862045288086,
2197
+ -0.12175218224525451,
2198
+ -0.11370223641395569,
2199
+ 0.0
2200
+ ],
2201
+ "q99": [
2202
+ 0.03101520001888276,
2203
+ 0.0373908892273903,
2204
+ 0.03646374464035038,
2205
+ 0.11764093399047852,
2206
+ 0.1258920183777809,
2207
+ 0.09366151213645942,
2208
+ 1.0
2209
+ ],
2210
+ "std": [
2211
+ 0.013274148106575012,
2212
+ 0.013215921819210052,
2213
+ 0.012822107411921024,
2214
+ 0.27324536442756653,
2215
+ 0.057022497057914734,
2216
+ 0.039172809571027756,
2217
+ 0.4475318491458893
2218
+ ]
2219
+ },
2220
+ "num_trajectories": 456,
2221
+ "num_transitions": 44875,
2222
+ "proprio": {
2223
+ "max": [
2224
+ 0.0,
2225
+ 0.0,
2226
+ 0.0,
2227
+ 0.0,
2228
+ 0.0,
2229
+ 0.0,
2230
+ 0.0
2231
+ ],
2232
+ "mean": [
2233
+ 0.0,
2234
+ 0.0,
2235
+ 0.0,
2236
+ 0.0,
2237
+ 0.0,
2238
+ 0.0,
2239
+ 0.0
2240
+ ],
2241
+ "min": [
2242
+ 0.0,
2243
+ 0.0,
2244
+ 0.0,
2245
+ 0.0,
2246
+ 0.0,
2247
+ 0.0,
2248
+ 0.0
2249
+ ],
2250
+ "q01": [
2251
+ 0.0,
2252
+ 0.0,
2253
+ 0.0,
2254
+ 0.0,
2255
+ 0.0,
2256
+ 0.0,
2257
+ 0.0
2258
+ ],
2259
+ "q99": [
2260
+ 0.0,
2261
+ 0.0,
2262
+ 0.0,
2263
+ 0.0,
2264
+ 0.0,
2265
+ 0.0,
2266
+ 0.0
2267
+ ],
2268
+ "std": [
2269
+ 0.0,
2270
+ 0.0,
2271
+ 0.0,
2272
+ 0.0,
2273
+ 0.0,
2274
+ 0.0,
2275
+ 0.0
2276
+ ]
2277
+ }
2278
+ },
2279
+ "roboturk": {
2280
+ "action": {
2281
+ "mask": [
2282
+ true,
2283
+ true,
2284
+ true,
2285
+ true,
2286
+ true,
2287
+ true,
2288
+ false
2289
+ ],
2290
+ "max": [
2291
+ 0.39124172925949097,
2292
+ 0.4601028263568878,
2293
+ 0.4870833456516266,
2294
+ 1.816888689994812,
2295
+ 1.8240282535552979,
2296
+ 1.4824820756912231,
2297
+ 1.0
2298
+ ],
2299
+ "mean": [
2300
+ 0.0014448781730607152,
2301
+ -0.0015945184277370572,
2302
+ -0.0011753765866160393,
2303
+ 0.0023012382443994284,
2304
+ -0.0009382434654980898,
2305
+ -0.00011485753930173814,
2306
+ 0.5746025443077087
2307
+ ],
2308
+ "min": [
2309
+ -0.6546999216079712,
2310
+ -0.6365841031074524,
2311
+ -0.4217723608016968,
2312
+ -1.6695482730865479,
2313
+ -1.8023357391357422,
2314
+ -1.4630827903747559,
2315
+ 0.0
2316
+ ],
2317
+ "q01": [
2318
+ -0.1342635464668274,
2319
+ -0.19996687173843383,
2320
+ -0.1482972100377083,
2321
+ -0.20720748245716095,
2322
+ -0.09676413893699647,
2323
+ -0.18075634717941286,
2324
+ 0.0
2325
+ ],
2326
+ "q99": [
2327
+ 0.14956976801157001,
2328
+ 0.1805950567126275,
2329
+ 0.18841815620660796,
2330
+ 0.21615413755178453,
2331
+ 0.09457383215427405,
2332
+ 0.18543301910162005,
2333
+ 1.0
2334
+ ],
2335
+ "std": [
2336
+ 0.04935364052653313,
2337
+ 0.06354569643735886,
2338
+ 0.0611649826169014,
2339
+ 0.0955345630645752,
2340
+ 0.0842016190290451,
2341
+ 0.06517927348613739,
2342
+ 0.4945116341114044
2343
+ ]
2344
+ },
2345
+ "num_trajectories": 1995,
2346
+ "num_transitions": 187507,
2347
+ "proprio": {
2348
+ "max": [
2349
+ 0.0,
2350
+ 0.0,
2351
+ 0.0,
2352
+ 0.0,
2353
+ 0.0,
2354
+ 0.0,
2355
+ 0.0
2356
+ ],
2357
+ "mean": [
2358
+ 0.0,
2359
+ 0.0,
2360
+ 0.0,
2361
+ 0.0,
2362
+ 0.0,
2363
+ 0.0,
2364
+ 0.0
2365
+ ],
2366
+ "min": [
2367
+ 0.0,
2368
+ 0.0,
2369
+ 0.0,
2370
+ 0.0,
2371
+ 0.0,
2372
+ 0.0,
2373
+ 0.0
2374
+ ],
2375
+ "q01": [
2376
+ 0.0,
2377
+ 0.0,
2378
+ 0.0,
2379
+ 0.0,
2380
+ 0.0,
2381
+ 0.0,
2382
+ 0.0
2383
+ ],
2384
+ "q99": [
2385
+ 0.0,
2386
+ 0.0,
2387
+ 0.0,
2388
+ 0.0,
2389
+ 0.0,
2390
+ 0.0,
2391
+ 0.0
2392
+ ],
2393
+ "std": [
2394
+ 0.0,
2395
+ 0.0,
2396
+ 0.0,
2397
+ 0.0,
2398
+ 0.0,
2399
+ 0.0,
2400
+ 0.0
2401
+ ]
2402
+ }
2403
+ },
2404
+ "stanford_hydra_dataset_converted_externally_to_rlds": {
2405
+ "action": {
2406
+ "mask": [
2407
+ true,
2408
+ true,
2409
+ true,
2410
+ true,
2411
+ true,
2412
+ true,
2413
+ false
2414
+ ],
2415
+ "max": [
2416
+ 0.02499854564666748,
2417
+ 0.02499903365969658,
2418
+ 0.024999922141432762,
2419
+ 0.24974457919597626,
2420
+ 0.24997030198574066,
2421
+ 0.24999946355819702,
2422
+ 1.0
2423
+ ],
2424
+ "mean": [
2425
+ 0.0007790075615048409,
2426
+ 0.00013707915786653757,
2427
+ -0.00025485886726528406,
2428
+ 0.0012903279857710004,
2429
+ -0.004751726984977722,
2430
+ 0.002692904556170106,
2431
+ 0.48855218291282654
2432
+ ],
2433
+ "min": [
2434
+ -0.024999044835567474,
2435
+ -0.024999700486660004,
2436
+ -0.02499929815530777,
2437
+ -0.24993225932121277,
2438
+ -0.2499666064977646,
2439
+ -0.2499932497739792,
2440
+ 0.0
2441
+ ],
2442
+ "q01": [
2443
+ -0.019992006458342076,
2444
+ -0.02415412735193968,
2445
+ -0.022941758055239916,
2446
+ -0.11085530579090118,
2447
+ -0.12024572037160397,
2448
+ -0.13314770206809043,
2449
+ 0.0
2450
+ ],
2451
+ "q99": [
2452
+ 0.022886231057345868,
2453
+ 0.022358838934451335,
2454
+ 0.02410089675337076,
2455
+ 0.12370114490389822,
2456
+ 0.11323311634361738,
2457
+ 0.18474749639630164,
2458
+ 1.0
2459
+ ],
2460
+ "std": [
2461
+ 0.008022191002964973,
2462
+ 0.009131455793976784,
2463
+ 0.009574385359883308,
2464
+ 0.04122225195169449,
2465
+ 0.03843000903725624,
2466
+ 0.04606698825955391,
2467
+ 0.4997812509536743
2468
+ ]
2469
+ },
2470
+ "num_trajectories": 570,
2471
+ "num_transitions": 358234,
2472
+ "proprio": {
2473
+ "max": [
2474
+ 0.0,
2475
+ 0.0,
2476
+ 0.0,
2477
+ 0.0,
2478
+ 0.0,
2479
+ 0.0,
2480
+ 0.0
2481
+ ],
2482
+ "mean": [
2483
+ 0.0,
2484
+ 0.0,
2485
+ 0.0,
2486
+ 0.0,
2487
+ 0.0,
2488
+ 0.0,
2489
+ 0.0
2490
+ ],
2491
+ "min": [
2492
+ 0.0,
2493
+ 0.0,
2494
+ 0.0,
2495
+ 0.0,
2496
+ 0.0,
2497
+ 0.0,
2498
+ 0.0
2499
+ ],
2500
+ "q01": [
2501
+ 0.0,
2502
+ 0.0,
2503
+ 0.0,
2504
+ 0.0,
2505
+ 0.0,
2506
+ 0.0,
2507
+ 0.0
2508
+ ],
2509
+ "q99": [
2510
+ 0.0,
2511
+ 0.0,
2512
+ 0.0,
2513
+ 0.0,
2514
+ 0.0,
2515
+ 0.0,
2516
+ 0.0
2517
+ ],
2518
+ "std": [
2519
+ 0.0,
2520
+ 0.0,
2521
+ 0.0,
2522
+ 0.0,
2523
+ 0.0,
2524
+ 0.0,
2525
+ 0.0
2526
+ ]
2527
+ }
2528
+ },
2529
+ "taco_play": {
2530
+ "action": {
2531
+ "mask": [
2532
+ true,
2533
+ true,
2534
+ true,
2535
+ true,
2536
+ true,
2537
+ true,
2538
+ false
2539
+ ],
2540
+ "max": [
2541
+ 1.4915844202041626,
2542
+ 2.1842432022094727,
2543
+ 2.6836395263671875,
2544
+ 5.035226821899414,
2545
+ 2.665864944458008,
2546
+ 4.250768661499023,
2547
+ 1.0
2548
+ ],
2549
+ "mean": [
2550
+ -0.003845921251922846,
2551
+ 0.009671425446867943,
2552
+ 0.012780577875673771,
2553
+ -0.00540378550067544,
2554
+ -0.009606565348803997,
2555
+ -0.002480721101164818,
2556
+ 0.4263913035392761
2557
+ ],
2558
+ "min": [
2559
+ -4.242457866668701,
2560
+ -3.192805051803589,
2561
+ -1.3371467590332031,
2562
+ -4.202683448791504,
2563
+ -2.6722638607025146,
2564
+ -3.3467135429382324,
2565
+ 0.0
2566
+ ],
2567
+ "q01": [
2568
+ -0.7106140398979186,
2569
+ -1.056944659948349,
2570
+ -0.5878450274467468,
2571
+ -0.7682853937149048,
2572
+ -0.7180147767066956,
2573
+ -1.5527938604354858,
2574
+ 0.0
2575
+ ],
2576
+ "q99": [
2577
+ 0.6482916426658629,
2578
+ 1.0051310062408447,
2579
+ 0.9480248689651489,
2580
+ 0.6926478147506714,
2581
+ 0.6351067513227462,
2582
+ 1.628010264635086,
2583
+ 1.0
2584
+ ],
2585
+ "std": [
2586
+ 0.23254039883613586,
2587
+ 0.3629826605319977,
2588
+ 0.2869292199611664,
2589
+ 0.261770635843277,
2590
+ 0.2438892275094986,
2591
+ 0.5216503739356995,
2592
+ 0.4946901500225067
2593
+ ]
2594
+ },
2595
+ "num_trajectories": 3603,
2596
+ "num_transitions": 237798,
2597
+ "proprio": {
2598
+ "max": [
2599
+ 0.0,
2600
+ 0.0,
2601
+ 0.0,
2602
+ 0.0,
2603
+ 0.0,
2604
+ 0.0,
2605
+ 0.0
2606
+ ],
2607
+ "mean": [
2608
+ 0.0,
2609
+ 0.0,
2610
+ 0.0,
2611
+ 0.0,
2612
+ 0.0,
2613
+ 0.0,
2614
+ 0.0
2615
+ ],
2616
+ "min": [
2617
+ 0.0,
2618
+ 0.0,
2619
+ 0.0,
2620
+ 0.0,
2621
+ 0.0,
2622
+ 0.0,
2623
+ 0.0
2624
+ ],
2625
+ "q01": [
2626
+ 0.0,
2627
+ 0.0,
2628
+ 0.0,
2629
+ 0.0,
2630
+ 0.0,
2631
+ 0.0,
2632
+ 0.0
2633
+ ],
2634
+ "q99": [
2635
+ 0.0,
2636
+ 0.0,
2637
+ 0.0,
2638
+ 0.0,
2639
+ 0.0,
2640
+ 0.0,
2641
+ 0.0
2642
+ ],
2643
+ "std": [
2644
+ 0.0,
2645
+ 0.0,
2646
+ 0.0,
2647
+ 0.0,
2648
+ 0.0,
2649
+ 0.0,
2650
+ 0.0
2651
+ ]
2652
+ }
2653
+ },
2654
+ "toto": {
2655
+ "action": {
2656
+ "mask": [
2657
+ true,
2658
+ true,
2659
+ true,
2660
+ true,
2661
+ true,
2662
+ true,
2663
+ false
2664
+ ],
2665
+ "max": [
2666
+ 0.6839867234230042,
2667
+ 0.4454185664653778,
2668
+ 0.7984078526496887,
2669
+ 2.120781660079956,
2670
+ 1.371164321899414,
2671
+ 1.4118704795837402,
2672
+ 0.0
2673
+ ],
2674
+ "mean": [
2675
+ 0.3854214549064636,
2676
+ 0.007769509684294462,
2677
+ 0.3632741868495941,
2678
+ -0.6652028560638428,
2679
+ 0.18903960287570953,
2680
+ 0.03298758342862129,
2681
+ 0.0
2682
+ ],
2683
+ "min": [
2684
+ 0.09922284632921219,
2685
+ -0.5180193781852722,
2686
+ 0.13791072368621826,
2687
+ -2.635117530822754,
2688
+ -1.0734480619430542,
2689
+ -1.9282547235488892,
2690
+ 0.0
2691
+ ],
2692
+ "q01": [
2693
+ 0.1756722891330719,
2694
+ -0.3077590811252594,
2695
+ 0.235383919775486,
2696
+ -2.0908505964279174,
2697
+ -0.6191593289375306,
2698
+ -0.7488683319091797,
2699
+ 0.0
2700
+ ],
2701
+ "q99": [
2702
+ 0.6136963081359863,
2703
+ 0.33704194784164443,
2704
+ 0.6681221985816956,
2705
+ 0.7422861719131538,
2706
+ 0.7955395007133507,
2707
+ 0.740464625358582,
2708
+ 0.0
2709
+ ],
2710
+ "std": [
2711
+ 0.122116319835186,
2712
+ 0.19378569722175598,
2713
+ 0.10178232938051224,
2714
+ 0.5725255608558655,
2715
+ 0.2988460063934326,
2716
+ 0.32599160075187683,
2717
+ 0.0
2718
+ ]
2719
+ },
2720
+ "num_trajectories": 1003,
2721
+ "num_transitions": 325699,
2722
+ "proprio": {
2723
+ "max": [
2724
+ 0.0,
2725
+ 0.0,
2726
+ 0.0,
2727
+ 0.0,
2728
+ 0.0,
2729
+ 0.0,
2730
+ 0.0
2731
+ ],
2732
+ "mean": [
2733
+ 0.0,
2734
+ 0.0,
2735
+ 0.0,
2736
+ 0.0,
2737
+ 0.0,
2738
+ 0.0,
2739
+ 0.0
2740
+ ],
2741
+ "min": [
2742
+ 0.0,
2743
+ 0.0,
2744
+ 0.0,
2745
+ 0.0,
2746
+ 0.0,
2747
+ 0.0,
2748
+ 0.0
2749
+ ],
2750
+ "q01": [
2751
+ 0.0,
2752
+ 0.0,
2753
+ 0.0,
2754
+ 0.0,
2755
+ 0.0,
2756
+ 0.0,
2757
+ 0.0
2758
+ ],
2759
+ "q99": [
2760
+ 0.0,
2761
+ 0.0,
2762
+ 0.0,
2763
+ 0.0,
2764
+ 0.0,
2765
+ 0.0,
2766
+ 0.0
2767
+ ],
2768
+ "std": [
2769
+ 0.0,
2770
+ 0.0,
2771
+ 0.0,
2772
+ 0.0,
2773
+ 0.0,
2774
+ 0.0,
2775
+ 0.0
2776
+ ]
2777
+ }
2778
+ },
2779
+ "ucsd_kitchen_dataset_converted_externally_to_rlds": {
2780
+ "action": {
2781
+ "mask": [
2782
+ true,
2783
+ true,
2784
+ true,
2785
+ true,
2786
+ true,
2787
+ true,
2788
+ false
2789
+ ],
2790
+ "max": [
2791
+ 678.0,
2792
+ 400.0,
2793
+ 507.0,
2794
+ 180.00001525878906,
2795
+ 6.000013828277588,
2796
+ 116.99998474121094,
2797
+ 1.0
2798
+ ],
2799
+ "mean": [
2800
+ 410.37567138671875,
2801
+ 116.9518814086914,
2802
+ 192.35032653808594,
2803
+ -121.22441864013672,
2804
+ -33.84893035888672,
2805
+ 50.016136169433594,
2806
+ 0.741813600063324
2807
+ ],
2808
+ "min": [
2809
+ 172.0,
2810
+ -166.0,
2811
+ -99.99999237060547,
2812
+ -180.00001525878906,
2813
+ -89.0,
2814
+ -96.00010681152344,
2815
+ 0.0
2816
+ ],
2817
+ "q01": [
2818
+ 200.00001052856445,
2819
+ -102.31004211425781,
2820
+ -94.99993370056153,
2821
+ -180.00001525878906,
2822
+ -88.00001525878906,
2823
+ -38.999977111816406,
2824
+ 0.0
2825
+ ],
2826
+ "q99": [
2827
+ 637.0,
2828
+ 368.30999999999995,
2829
+ 493.0,
2830
+ 180.00001525878906,
2831
+ 0.999983012676239,
2832
+ 105.00001525878906,
2833
+ 1.0
2834
+ ],
2835
+ "std": [
2836
+ 122.8149642944336,
2837
+ 108.80091857910156,
2838
+ 130.303466796875,
2839
+ 116.28205108642578,
2840
+ 27.621841430664062,
2841
+ 41.02094650268555,
2842
+ 0.43763357400894165
2843
+ ]
2844
+ },
2845
+ "num_trajectories": 150,
2846
+ "num_transitions": 3970,
2847
+ "proprio": {
2848
+ "max": [
2849
+ 0.0,
2850
+ 0.0,
2851
+ 0.0,
2852
+ 0.0,
2853
+ 0.0,
2854
+ 0.0,
2855
+ 0.0
2856
+ ],
2857
+ "mean": [
2858
+ 0.0,
2859
+ 0.0,
2860
+ 0.0,
2861
+ 0.0,
2862
+ 0.0,
2863
+ 0.0,
2864
+ 0.0
2865
+ ],
2866
+ "min": [
2867
+ 0.0,
2868
+ 0.0,
2869
+ 0.0,
2870
+ 0.0,
2871
+ 0.0,
2872
+ 0.0,
2873
+ 0.0
2874
+ ],
2875
+ "q01": [
2876
+ 0.0,
2877
+ 0.0,
2878
+ 0.0,
2879
+ 0.0,
2880
+ 0.0,
2881
+ 0.0,
2882
+ 0.0
2883
+ ],
2884
+ "q99": [
2885
+ 0.0,
2886
+ 0.0,
2887
+ 0.0,
2888
+ 0.0,
2889
+ 0.0,
2890
+ 0.0,
2891
+ 0.0
2892
+ ],
2893
+ "std": [
2894
+ 0.0,
2895
+ 0.0,
2896
+ 0.0,
2897
+ 0.0,
2898
+ 0.0,
2899
+ 0.0,
2900
+ 0.0
2901
+ ]
2902
+ }
2903
+ },
2904
+ "utaustin_mutex": {
2905
+ "action": {
2906
+ "mask": [
2907
+ true,
2908
+ true,
2909
+ true,
2910
+ true,
2911
+ true,
2912
+ true,
2913
+ false
2914
+ ],
2915
+ "max": [
2916
+ 1.0,
2917
+ 1.0,
2918
+ 1.0,
2919
+ 0.375,
2920
+ 0.375,
2921
+ 0.375,
2922
+ 1.0
2923
+ ],
2924
+ "mean": [
2925
+ 0.06176406517624855,
2926
+ -0.0050054881721735,
2927
+ 0.10216782987117767,
2928
+ -0.03314130753278732,
2929
+ 0.013895021751523018,
2930
+ -0.011317633092403412,
2931
+ 0.5038976669311523
2932
+ ],
2933
+ "min": [
2934
+ -1.0,
2935
+ -1.0,
2936
+ -1.0,
2937
+ -0.375,
2938
+ -0.375,
2939
+ -0.375,
2940
+ 0.0
2941
+ ],
2942
+ "q01": [
2943
+ -0.4285714328289032,
2944
+ -0.9800000190734863,
2945
+ -0.5571428537368774,
2946
+ -0.375,
2947
+ -0.15642857551574707,
2948
+ -0.335357129573822,
2949
+ 0.0
2950
+ ],
2951
+ "q99": [
2952
+ 0.5914285778999329,
2953
+ 0.9714285731315613,
2954
+ 1.0,
2955
+ 0.3278571367263794,
2956
+ 0.207857146859169,
2957
+ 0.25607141852378845,
2958
+ 1.0
2959
+ ],
2960
+ "std": [
2961
+ 0.1875014454126358,
2962
+ 0.4468473196029663,
2963
+ 0.3792876601219177,
2964
+ 0.14097853004932404,
2965
+ 0.06453699618577957,
2966
+ 0.11765266209840775,
2967
+ 0.501045286655426
2968
+ ]
2969
+ },
2970
+ "num_trajectories": 1500,
2971
+ "num_transitions": 361883,
2972
+ "proprio": {
2973
+ "max": [
2974
+ 0.0,
2975
+ 0.0,
2976
+ 0.0,
2977
+ 0.0,
2978
+ 0.0,
2979
+ 0.0,
2980
+ 0.0
2981
+ ],
2982
+ "mean": [
2983
+ 0.0,
2984
+ 0.0,
2985
+ 0.0,
2986
+ 0.0,
2987
+ 0.0,
2988
+ 0.0,
2989
+ 0.0
2990
+ ],
2991
+ "min": [
2992
+ 0.0,
2993
+ 0.0,
2994
+ 0.0,
2995
+ 0.0,
2996
+ 0.0,
2997
+ 0.0,
2998
+ 0.0
2999
+ ],
3000
+ "q01": [
3001
+ 0.0,
3002
+ 0.0,
3003
+ 0.0,
3004
+ 0.0,
3005
+ 0.0,
3006
+ 0.0,
3007
+ 0.0
3008
+ ],
3009
+ "q99": [
3010
+ 0.0,
3011
+ 0.0,
3012
+ 0.0,
3013
+ 0.0,
3014
+ 0.0,
3015
+ 0.0,
3016
+ 0.0
3017
+ ],
3018
+ "std": [
3019
+ 0.0,
3020
+ 0.0,
3021
+ 0.0,
3022
+ 0.0,
3023
+ 0.0,
3024
+ 0.0,
3025
+ 0.0
3026
+ ]
3027
+ }
3028
+ },
3029
+ "viola": {
3030
+ "action": {
3031
+ "mask": [
3032
+ true,
3033
+ true,
3034
+ true,
3035
+ true,
3036
+ true,
3037
+ true,
3038
+ false
3039
+ ],
3040
+ "max": [
3041
+ 1.0,
3042
+ 1.0,
3043
+ 1.0,
3044
+ 0.375,
3045
+ 0.36321428418159485,
3046
+ 0.375,
3047
+ 1.0
3048
+ ],
3049
+ "mean": [
3050
+ 0.04761846736073494,
3051
+ -0.029204588383436203,
3052
+ 0.055867526680231094,
3053
+ -0.002618523547425866,
3054
+ 0.006867336109280586,
3055
+ -0.016821356490254402,
3056
+ 0.7323777675628662
3057
+ ],
3058
+ "min": [
3059
+ -1.0,
3060
+ -1.0,
3061
+ -1.0,
3062
+ -0.375,
3063
+ -0.375,
3064
+ -0.375,
3065
+ 0.0
3066
+ ],
3067
+ "q01": [
3068
+ -0.9628571271896362,
3069
+ -1.0,
3070
+ -1.0,
3071
+ -0.26249998807907104,
3072
+ -0.21321429312229156,
3073
+ -0.3385714292526245,
3074
+ 0.0
3075
+ ],
3076
+ "q99": [
3077
+ 0.9114285707473755,
3078
+ 0.868571400642395,
3079
+ 1.0,
3080
+ 0.2817857265472412,
3081
+ 0.2239285707473755,
3082
+ 0.3557142913341522,
3083
+ 1.0
3084
+ ],
3085
+ "std": [
3086
+ 0.39158064126968384,
3087
+ 0.40765121579170227,
3088
+ 0.40077799558639526,
3089
+ 0.10023975372314453,
3090
+ 0.08443227410316467,
3091
+ 0.10375077277421951,
3092
+ 0.4426046907901764
3093
+ ]
3094
+ },
3095
+ "num_trajectories": 150,
3096
+ "num_transitions": 76324,
3097
+ "proprio": {
3098
+ "max": [
3099
+ 0.0,
3100
+ 0.0,
3101
+ 0.0,
3102
+ 0.0,
3103
+ 0.0,
3104
+ 0.0,
3105
+ 0.0
3106
+ ],
3107
+ "mean": [
3108
+ 0.0,
3109
+ 0.0,
3110
+ 0.0,
3111
+ 0.0,
3112
+ 0.0,
3113
+ 0.0,
3114
+ 0.0
3115
+ ],
3116
+ "min": [
3117
+ 0.0,
3118
+ 0.0,
3119
+ 0.0,
3120
+ 0.0,
3121
+ 0.0,
3122
+ 0.0,
3123
+ 0.0
3124
+ ],
3125
+ "q01": [
3126
+ 0.0,
3127
+ 0.0,
3128
+ 0.0,
3129
+ 0.0,
3130
+ 0.0,
3131
+ 0.0,
3132
+ 0.0
3133
+ ],
3134
+ "q99": [
3135
+ 0.0,
3136
+ 0.0,
3137
+ 0.0,
3138
+ 0.0,
3139
+ 0.0,
3140
+ 0.0,
3141
+ 0.0
3142
+ ],
3143
+ "std": [
3144
+ 0.0,
3145
+ 0.0,
3146
+ 0.0,
3147
+ 0.0,
3148
+ 0.0,
3149
+ 0.0,
3150
+ 0.0
3151
+ ]
3152
+ }
3153
+ }
3154
+ },
3155
+ "num_attention_heads": 28,
3156
+ "num_hidden_layers": 28,
3157
+ "num_key_value_heads": 4,
3158
+ "pad_token_id": 151643,
3159
+ "projector_hidden_act": "gelu",
3160
+ "rms_norm_eps": 1e-06,
3161
+ "rope_scaling": null,
3162
+ "rope_theta": 1000000.0,
3163
+ "sliding_window": null,
3164
+ "tie_word_embeddings": false,
3165
+ "torch_dtype": "bfloat16",
3166
+ "transformers_version": "4.51.3",
3167
+ "use_cache": false,
3168
+ "use_sliding_window": false,
3169
+ "video_token_id": 151656,
3170
+ "vision_config": {
3171
+ "depth": 32,
3172
+ "embed_dim": 1280,
3173
+ "hidden_act": "quick_gelu",
3174
+ "hidden_size": 3584,
3175
+ "in_channels": 3,
3176
+ "in_chans": 3,
3177
+ "initializer_range": 0.02,
3178
+ "mlp_ratio": 4,
3179
+ "model_type": "dream_vl",
3180
+ "num_heads": 16,
3181
+ "patch_size": 14,
3182
+ "spatial_merge_size": 2,
3183
+ "spatial_patch_size": 14,
3184
+ "temporal_patch_size": 2
3185
+ },
3186
+ "vision_end_token_id": 151653,
3187
+ "vision_start_token_id": 151652,
3188
+ "vision_token_id": 151654,
3189
+ "vocab_size": 152064
3190
+ }
configuration_dreamvl.py ADDED
@@ -0,0 +1,151 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The DreamVL team and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """DreamVL model configuration"""
16
+
17
+ import os
18
+ from typing import Union
19
+
20
+ from transformers.configuration_utils import PretrainedConfig
21
+ from transformers.modeling_rope_utils import rope_config_validation
22
+ from transformers.utils import logging
23
+
24
+
25
+ logger = logging.get_logger("DreamVL."+__name__)
26
+
27
+ class DreamVLVisionConfig(PretrainedConfig):
28
+ model_type = "dream_vl"
29
+ base_config_key = "vision_config"
30
+
31
+ def __init__(
32
+ self,
33
+ depth=32,
34
+ embed_dim=1280,
35
+ hidden_size=3584,
36
+ hidden_act="quick_gelu",
37
+ mlp_ratio=4,
38
+ num_heads=16,
39
+ in_channels=3,
40
+ patch_size=14,
41
+ spatial_merge_size=2,
42
+ temporal_patch_size=2,
43
+ initializer_range=0.02,
44
+ **kwargs,
45
+ ):
46
+ super().__init__(**kwargs)
47
+
48
+ self.depth = depth
49
+ self.embed_dim = embed_dim
50
+ self.hidden_size = hidden_size
51
+ self.hidden_act = hidden_act
52
+ self.mlp_ratio = mlp_ratio
53
+ self.num_heads = num_heads
54
+ self.in_channels = in_channels
55
+ self.patch_size = patch_size
56
+ self.spatial_merge_size = spatial_merge_size
57
+ self.temporal_patch_size = temporal_patch_size
58
+ self.initializer_range = initializer_range
59
+
60
+
61
+ class DreamVLConfig(PretrainedConfig):
62
+ model_type = "dream-vl"
63
+ keys_to_ignore_at_inference = ["past_key_values"]
64
+
65
+ def __init__(
66
+ self,
67
+ vocab_size=151936,
68
+ hidden_size=4096,
69
+ intermediate_size=22016,
70
+ num_hidden_layers=32,
71
+ num_attention_heads=32,
72
+ num_key_value_heads=32,
73
+ hidden_act="silu",
74
+ max_position_embeddings=32768,
75
+ initializer_range=0.02,
76
+ image_token_id = 151655,
77
+ video_token_id = 151656,
78
+ vision_end_token_id = 151653,
79
+ vision_start_token_id = 151652,
80
+ vision_token_id = 151654,
81
+ rms_norm_eps=1e-6,
82
+ use_cache=False,
83
+ tie_word_embeddings=False,
84
+ rope_theta=10000.0,
85
+ use_sliding_window=False,
86
+ sliding_window=4096,
87
+ max_window_layers=28,
88
+ attention_dropout=0.0,
89
+ mask_token_id=151666,
90
+ pad_token_id=151643,
91
+ vision_config=None,
92
+ rope_scaling=None,
93
+ mrope_section=[16,24,24],
94
+ projector_hidden_act=None,
95
+ **kwargs,
96
+ ):
97
+ if isinstance(vision_config, dict):
98
+ self.vision_config = DreamVLVisionConfig(**vision_config)
99
+ elif vision_config is None:
100
+ self.vision_config = DreamVLVisionConfig()
101
+
102
+ self.vocab_size = vocab_size
103
+ self.max_position_embeddings = max_position_embeddings
104
+ self.hidden_size = hidden_size
105
+ self.intermediate_size = intermediate_size
106
+ self.num_hidden_layers = num_hidden_layers
107
+ self.num_attention_heads = num_attention_heads
108
+ self.use_sliding_window = use_sliding_window
109
+ self.sliding_window = sliding_window if use_sliding_window else None
110
+ self.max_window_layers = max_window_layers
111
+ self.projector_hidden_act = projector_hidden_act
112
+
113
+ # for backward compatibility
114
+ if num_key_value_heads is None:
115
+ num_key_value_heads = num_attention_heads
116
+
117
+ self.num_key_value_heads = num_key_value_heads
118
+ self.hidden_act = hidden_act
119
+ self.initializer_range = initializer_range
120
+ self.rms_norm_eps = rms_norm_eps
121
+ self.use_cache = use_cache
122
+ self.rope_theta = rope_theta
123
+ self.rope_scaling = rope_scaling
124
+ self.attention_dropout = attention_dropout
125
+ # Validate the correctness of rotary position embeddings parameters
126
+ # BC: if there is a 'type' field, move it to 'rope_type'.
127
+ if self.rope_scaling is not None and "type" in self.rope_scaling:
128
+ self.rope_scaling["rope_type"] = self.rope_scaling["type"]
129
+ rope_config_validation(self, ignore_keys={"mrope_section"})
130
+ self.mrope_section = mrope_section
131
+
132
+ super().__init__(
133
+ tie_word_embeddings=tie_word_embeddings,
134
+ **kwargs,
135
+ )
136
+ self.mask_token_id = mask_token_id
137
+ self.pad_token_id = pad_token_id
138
+ self.image_token_id = image_token_id
139
+ self.video_token_id = video_token_id
140
+ self.vision_end_token_id = vision_end_token_id
141
+ self.vision_start_token_id = vision_start_token_id
142
+ self.vision_token_id = vision_token_id
143
+
144
+ class DreamVLAConfig(DreamVLConfig):
145
+ model_type = "dream-vla"
146
+ keys_to_ignore_at_inference = ["past_key_values"]
147
+
148
+ def __init__(self, n_action_bins=None, norm_stats=None, **kwargs):
149
+ super().__init__(**kwargs)
150
+ self.n_action_bins = n_action_bins
151
+ self.norm_stats = norm_stats
generation_config.json ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 151643,
4
+ "eos_token_id": 151643,
5
+ "pad_token_id": 151643,
6
+ "transformers_version": "4.51.3",
7
+ "use_cache": false
8
+ }
generation_utils.py ADDED
@@ -0,0 +1,557 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The DreamVL team and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ import warnings
17
+ import copy
18
+ from dataclasses import dataclass
19
+ from typing import Any, Dict, Optional, Tuple, Union
20
+
21
+ import torch
22
+ import torch.distributions as dists
23
+ from torch.nn import functional as F
24
+ from transformers import __version__
25
+ from transformers.generation.configuration_utils import (
26
+ GenerationConfig,
27
+ )
28
+ from transformers.utils import (
29
+ ModelOutput,
30
+ is_torchdynamo_compiling,
31
+ logging,
32
+ )
33
+ from transformers.cache_utils import (
34
+ Cache,
35
+ DynamicCache,
36
+ )
37
+ from transformers.generation.utils import GenerationMixin
38
+ from transformers import TextIteratorStreamer
39
+
40
+ logger = logging.get_logger("DreamVL."+__name__)
41
+
42
+ def top_p_logits(logits, top_p=None):
43
+ sorted_logits, sorted_indices = torch.sort(logits, descending=True)
44
+ cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
45
+ sorted_indices_to_remove = cumulative_probs > top_p
46
+ # Shift the indices to the right to keep the first token above the threshold
47
+ sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
48
+ sorted_indices_to_remove[..., 0] = 0
49
+
50
+ mask = torch.zeros_like(logits, dtype=torch.bool, device=logits.device)
51
+ mask = mask.scatter_(-1, sorted_indices, sorted_indices_to_remove)
52
+ logits = logits.masked_fill(mask, torch.finfo(logits.dtype).min)
53
+ return logits
54
+
55
+ def top_k_logits(logits, top_k=None):
56
+ top_k = min(top_k, logits.size(-1)) # Safety check
57
+ # Remove all tokens with a probability less than the last token of the top-k
58
+ indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
59
+ logits = logits.masked_fill(indices_to_remove, torch.finfo(logits.dtype).min)
60
+ return logits
61
+
62
+
63
+ def sample_tokens(logits, temperature=0.0, top_p=None, top_k=None, margin_confidence=False, neg_entropy=False):
64
+
65
+ if temperature > 0:
66
+ logits = logits / temperature
67
+ if top_p is not None and top_p < 1:
68
+ logits = top_p_logits(logits, top_p)
69
+ if top_k is not None:
70
+ logits = top_k_logits(logits, top_k)
71
+ probs = torch.softmax(logits, dim=-1)
72
+
73
+ if temperature > 0:
74
+ try:
75
+ x0 = dists.Categorical(probs=probs).sample()
76
+ confidence = torch.gather(probs, -1, x0.unsqueeze(-1)).squeeze(-1)
77
+ except:
78
+ confidence, x0 = probs.max(dim=-1)
79
+ else:
80
+ confidence, x0 = probs.max(dim=-1)
81
+
82
+ if margin_confidence:
83
+ sorted_probs, _ = torch.sort(probs, dim=-1, descending=True)
84
+ # Extract top1 and top2 probabilities
85
+ top1_probs = sorted_probs[:, 0]
86
+ top2_probs = sorted_probs[:, 1]
87
+ # Calculate confidence as top1 - top2
88
+ confidence = top1_probs - top2_probs
89
+
90
+ if neg_entropy:
91
+ epsilon = 1e-10
92
+ log_probs = torch.log(probs + epsilon)
93
+ confidence = torch.sum(probs * log_probs, dim=-1)
94
+
95
+ return confidence, x0
96
+
97
+
98
+ @dataclass
99
+ class DreamVLModelOutput(ModelOutput):
100
+ sequences: torch.LongTensor = None
101
+ history: Optional[Tuple[torch.FloatTensor]] = None
102
+
103
+
104
+ class DreamVLGenerationConfig(GenerationConfig):
105
+ def __init__(self, **kwargs):
106
+ # cache parameter
107
+ self.use_cache: bool = kwargs.pop("use_cache", False)
108
+ # general generation parameter
109
+ self.temperature: float = kwargs.pop("temperature", 0.0)
110
+ self.top_p: Optional[float] = kwargs.pop("top_p", None)
111
+ self.top_k: Optional[int] = kwargs.pop("top_k", None)
112
+ self.max_length = kwargs.pop("max_length", 20)
113
+ self.max_new_tokens = kwargs.pop("max_new_tokens", None)
114
+ # diffusion specific params
115
+ self.eps: float = kwargs.pop("eps", 1e-3)
116
+ self.steps: int = kwargs.pop("steps", 512)
117
+ self.alg: str = kwargs.pop("alg", 'origin')
118
+ self.alg_temp: Optional[float] = kwargs.pop("alg_temp", None)
119
+ self.eos_penalty: Optional[float] = kwargs.pop("eos_penalty", 0)
120
+
121
+ # Parameters that define the output variables of `generate`
122
+ self.num_return_sequences: int = kwargs.pop("num_return_sequences", 1)
123
+ self.return_dict_in_generate: bool = kwargs.pop("return_dict_in_generate", False)
124
+ self.output_history: bool = kwargs.pop("output_history", False)
125
+
126
+ # Special tokens that can be used at generation time
127
+ self.mask_token_id = kwargs.pop("mask_token_id", None)
128
+ self.pad_token_id = kwargs.pop("pad_token_id", None)
129
+ self.bos_token_id = kwargs.pop("bos_token_id", None)
130
+ self.eos_token_id = kwargs.pop("eos_token_id", None)
131
+
132
+ # Wild card
133
+ self.generation_kwargs = kwargs.pop("generation_kwargs", {})
134
+
135
+ # The remaining attributes do not parametrize `.generate()`, but are informative and/or used by the hub
136
+ # interface.
137
+ self._from_model_config = kwargs.pop("_from_model_config", False)
138
+ self._commit_hash = kwargs.pop("_commit_hash", None)
139
+ self.transformers_version = kwargs.pop("transformers_version", __version__)
140
+
141
+ # Additional attributes without default values
142
+ if not self._from_model_config:
143
+ # we don't want to copy values from the model config if we're initializing a `GenerationConfig` from a
144
+ # model's default configuration file
145
+ for key, value in kwargs.items():
146
+ try:
147
+ setattr(self, key, value)
148
+ except AttributeError as err:
149
+ logger.error(f"Can't set {key} with value {value} for {self}")
150
+ raise err
151
+
152
+ # Validate the values of the attributes
153
+ self.validate(is_init=True)
154
+
155
+ def validate(self, is_init=False):
156
+ pass
157
+
158
+ class DreamVLGenerationMixin:
159
+ @staticmethod
160
+ def _expand_inputs_for_generation(
161
+ expand_size: int = 1,
162
+ input_ids: Optional[torch.LongTensor] = None,
163
+ **model_kwargs
164
+ ) -> Tuple[torch.LongTensor, Dict[str, Any]]:
165
+ """Expands tensors from [batch_size, ...] to [batch_size * expand_size, ...]"""
166
+ pixel_values = model_kwargs.get("pixel_values", None)
167
+ image_grid_thw = model_kwargs.get("image_grid_thw", None)
168
+ if expand_size == 1:
169
+ return GenerationMixin._expand_inputs_for_generation(
170
+ expand_size=expand_size,
171
+ input_ids=input_ids,
172
+ **model_kwargs
173
+ )
174
+ elif pixel_values is None and image_grid_thw is None:
175
+ return GenerationMixin._expand_inputs_for_generation(
176
+ expand_size=expand_size,
177
+ input_ids=input_ids,
178
+ **model_kwargs
179
+ )
180
+ else:
181
+ raise ValueError(
182
+ "Does not support expansion for image inputs. "
183
+ )
184
+
185
+ def _validate_generated_length(self, generation_config, input_ids_length, has_default_max_length):
186
+ """Performs validation related to the resulting generated length"""
187
+
188
+ # Can't throw warnings/exceptions during compilation
189
+ if is_torchdynamo_compiling():
190
+ return
191
+
192
+ # 1. Max length warnings related to poor parameterization
193
+ if has_default_max_length and generation_config.max_new_tokens is None and generation_config.max_length == 20:
194
+ # 20 is the default max_length of the generation config
195
+ logger.warning_once(
196
+ f"Using the model-agnostic default `max_length` (={generation_config.max_length}) to control the "
197
+ "generation length. We recommend setting `max_new_tokens` to control the maximum length of the "
198
+ "generation."
199
+ )
200
+ if input_ids_length >= generation_config.max_length:
201
+ input_ids_string = "input_ids"
202
+ raise ValueError(
203
+ f"Input length of {input_ids_string} is {input_ids_length}, but `max_length` is set to"
204
+ f" {generation_config.max_length}. This can lead to unexpected behavior. You should consider"
205
+ " increasing `max_length` or, better yet, setting `max_new_tokens`."
206
+ )
207
+
208
+ def _prepare_generated_length(
209
+ self,
210
+ generation_config,
211
+ has_default_max_length,
212
+ input_ids_length,
213
+ ):
214
+ """Prepared max and min length in generation configs to avoid clashes between similar attributes"""
215
+
216
+ if generation_config.max_new_tokens is not None:
217
+ if not has_default_max_length and generation_config.max_length is not None:
218
+ logger.warning_once(
219
+ f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(="
220
+ f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. "
221
+ "Please refer to the documentation for more information. "
222
+ "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
223
+ )
224
+ generation_config.max_length = generation_config.max_new_tokens + input_ids_length
225
+
226
+ elif has_default_max_length:
227
+ if generation_config.max_length == DreamVLGenerationConfig().max_length:
228
+ generation_config.max_length = generation_config.max_length + input_ids_length
229
+ max_position_embeddings = getattr(self.config, "max_position_embeddings", None)
230
+ if max_position_embeddings is not None:
231
+ generation_config.max_length = min(generation_config.max_length, max_position_embeddings)
232
+
233
+ return generation_config
234
+
235
+ def _prepare_generation_config(
236
+ self, generation_config: Optional[DreamVLGenerationConfig], **kwargs: Dict
237
+ ) -> DreamVLGenerationConfig:
238
+ """
239
+ Prepares the base generation config, then applies any generation configuration options from kwargs. This
240
+ function handles retrocompatibility with respect to configuration files.
241
+ """
242
+ # priority: `generation_config` argument > `model.generation_config` (the default generation config)
243
+ using_model_generation_config = False
244
+ if generation_config is None:
245
+ generation_config = DreamVLGenerationConfig.from_model_config(self.config)
246
+ using_model_generation_config = True
247
+
248
+ # `torch.compile` can't compile `copy.deepcopy`, arguments in `kwargs` that are part of `generation_config`
249
+ # will mutate the object with `.update`. As such, passing these arguments through `kwargs` is disabled -- an
250
+ # exception will be raised in `_validate_model_kwargs`
251
+ if not is_torchdynamo_compiling():
252
+ generation_config = copy.deepcopy(generation_config)
253
+ model_kwargs = generation_config.update(**kwargs)
254
+ # If `generation_config` is provided, let's fallback ALL special tokens to the default values for the model
255
+ if not using_model_generation_config:
256
+ if generation_config.bos_token_id is None:
257
+ generation_config.bos_token_id = self.generation_config.bos_token_id
258
+ if generation_config.eos_token_id is None:
259
+ generation_config.eos_token_id = self.generation_config.eos_token_id
260
+ if generation_config.pad_token_id is None:
261
+ generation_config.pad_token_id = self.generation_config.pad_token_id
262
+ if generation_config.mask_token_id is None:
263
+ generation_config.mask_token_id = self.generation_config.mask_token_id
264
+
265
+ return generation_config, model_kwargs
266
+
267
+ def _prepare_special_tokens(
268
+ self,
269
+ generation_config: DreamVLGenerationConfig,
270
+ device: Optional[Union[torch.device, str]] = None,
271
+ ):
272
+ """
273
+ Prepares the special tokens for generation, overwriting the generation config with their processed versions
274
+ converted to tensor.
275
+ Note that `generation_config` is changed in place and stops being serializable after this method is called.
276
+ That is no problem if called within `generate` (`generation_config` is a local copy that doesn't leave the
277
+ function). However, if called outside `generate`, consider creating a copy of `generation_config` first.
278
+ """
279
+
280
+ # Convert special tokens to tensors
281
+ def _tensor_or_none(token, device=None):
282
+ if token is None:
283
+ return token
284
+
285
+ device = device if device is not None else self.device
286
+ if isinstance(token, torch.Tensor):
287
+ return token.to(device)
288
+ return torch.tensor(token, device=device, dtype=torch.long)
289
+
290
+ bos_token_tensor = _tensor_or_none(generation_config.bos_token_id, device=device)
291
+ eos_token_tensor = _tensor_or_none(generation_config.eos_token_id, device=device)
292
+ pad_token_tensor = _tensor_or_none(generation_config.pad_token_id, device=device)
293
+ mask_token_tensor = _tensor_or_none(generation_config.mask_token_id, device=device)
294
+
295
+ # We can have more than one eos token. Always treat it as a 1D tensor (when it exists).
296
+ if eos_token_tensor is not None and eos_token_tensor.ndim == 0:
297
+ eos_token_tensor = eos_token_tensor.unsqueeze(0)
298
+
299
+ # Set pad token if unset (and there are conditions to do so)
300
+ if pad_token_tensor is None and eos_token_tensor is not None:
301
+ pad_token_tensor = eos_token_tensor[0]
302
+ logger.warning_once(f"Setting `pad_token_id` to `eos_token_id`:{pad_token_tensor} for open-end generation.")
303
+
304
+ # Update generation config with the updated special tokens tensors
305
+ # NOTE: this must be written into a different attribute name than the one holding the original special tokens
306
+ # (in their non-tensor form), in order to enable end-to-end compilation. See
307
+ # https://pytorch.org/docs/stable/torch.compiler_cudagraph_trees.html#limitations
308
+ generation_config._bos_token_tensor = bos_token_tensor
309
+ generation_config._eos_token_tensor = eos_token_tensor
310
+ generation_config._pad_token_tensor = pad_token_tensor
311
+ generation_config._mask_token_tensor = mask_token_tensor
312
+
313
+ def _mask_pad_inputs_for_generation(
314
+ self,
315
+ input_ids: torch.LongTensor,
316
+ generation_config: DreamVLGenerationConfig,
317
+ **model_kwargs,
318
+ ) -> Tuple[torch.LongTensor, Dict[str, Any]]:
319
+ """
320
+ pad tokens in the input ids and attentions for generation. This is used to insert mask tokens into the input_ids
321
+ """
322
+ max_length = generation_config.max_length
323
+ mask_token_id = generation_config.mask_token_id
324
+ attention_mask = model_kwargs.get("attention_mask", None)
325
+
326
+ # pad input_ids to max_length
327
+ input_ids = F.pad(input_ids, (0, max_length - input_ids.shape[1]), value=mask_token_id)
328
+ if attention_mask is not None:
329
+ attention_mask = F.pad(attention_mask, (0, max_length - attention_mask.shape[1]), value=1.0)
330
+ model_kwargs["attention_mask"] = attention_mask
331
+ else:
332
+ raise ValueError(
333
+ "attention_mask should be provided. "
334
+ )
335
+
336
+ return input_ids, model_kwargs
337
+
338
+ def _update_model_kwargs_for_generation(
339
+ self,
340
+ outputs: ModelOutput,
341
+ model_kwargs: Dict[str, Any]
342
+ ) -> Dict[str, Any]:
343
+ # update past_key_values keeping its naming used in model code
344
+ if model_kwargs["use_cache"]:
345
+ assert outputs.past_key_values is not None, "Cache should not be None if use_cache is True"
346
+ assert outputs.past_key_values.get_seq_length() == model_kwargs["total_sequence_length"], \
347
+ f"Cache length {outputs.past_key_values.get_seq_length()} should be equal to the total sequence length {model_kwargs['total_sequence_length']}"
348
+ # The crop operation requires "left padding for batch processing"
349
+ outputs.past_key_values.crop(max_length = model_kwargs["prompt_length"])
350
+ # if model_kwargs["past_key_values"].get_seq_length() > 0:
351
+ # assert self.compare_past_key_values(model_kwargs["past_key_values"], outputs.past_key_values), \
352
+ # f"Cache {model_kwargs['past_key_values']} should be equal to the new cache {outputs.past_key_values}"
353
+ else:
354
+ assert outputs.past_key_values is None, "Cache should be None if use_cache is False"
355
+ model_kwargs["past_key_values"] = outputs.past_key_values
356
+
357
+ # update cache position
358
+ if model_kwargs["use_cache"]:
359
+ model_kwargs["cache_position"] = model_kwargs["cache_position"][-(model_kwargs["total_sequence_length"] - model_kwargs["prompt_length"]):]
360
+ else:
361
+ assert model_kwargs["cache_position"] is None, "Cache position should be None if use_cache is False"
362
+
363
+ if model_kwargs.get("rope_deltas", None) is not None:
364
+ assert torch.equal(
365
+ model_kwargs["rope_deltas"], outputs.rope_deltas), \
366
+ f"Rope deltas {model_kwargs['rope_deltas']} should be equal to the new rope deltas {outputs.rope_deltas}"
367
+ model_kwargs["rope_deltas"] = outputs.rope_deltas
368
+ return model_kwargs
369
+
370
+ @torch.no_grad()
371
+ def diffusion_generate(
372
+ self,
373
+ inputs: Optional[torch.Tensor] = None,
374
+ generation_config: Optional[DreamVLGenerationConfig] = None,
375
+ **kwargs,
376
+ ) -> Union[DreamVLModelOutput, torch.LongTensor]:
377
+ # 1. Handle `generation_config` and kwargs that might update it, and validate the `.generate()` call
378
+ generation_config, model_kwargs = self._prepare_generation_config(generation_config, **kwargs)
379
+ generation_tokens_hook_func = model_kwargs.pop("generation_tokens_hook_func", lambda step, x, logits: x)
380
+ generation_logits_hook_func = model_kwargs.pop("generation_logits_hook_func", lambda step, x, logits: logits)
381
+ attention_mask = kwargs.pop("attention_mask", None)
382
+
383
+ # 2. Define model inputs
384
+ assert inputs is not None
385
+ input_ids = inputs
386
+ device = input_ids.device
387
+ self._prepare_special_tokens(generation_config, device=device)
388
+
389
+ # 3. Prepare `max_length`.
390
+ input_ids_length = input_ids.shape[-1]
391
+ has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None
392
+ generation_config = self._prepare_generated_length(
393
+ generation_config=generation_config,
394
+ has_default_max_length=has_default_max_length,
395
+ input_ids_length=input_ids_length,
396
+ )
397
+
398
+ self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
399
+
400
+ # 4. Check input_ids
401
+ if not is_torchdynamo_compiling() and self.device.type != input_ids.device.type:
402
+ logger.warning_once(
403
+ "You are calling .generate() with the `input_ids` being on a device type different"
404
+ f" than your model's device. `input_ids` is on {input_ids.device.type}, whereas the model"
405
+ f" is on {self.device.type}. You may experience unexpected behaviors or slower generation."
406
+ " Please make sure that you have put `input_ids` to the"
407
+ f" correct device by calling for example input_ids = input_ids.to('{self.device.type}') before"
408
+ " running `.generate()`."
409
+ )
410
+ if (
411
+ hasattr(generation_config, "pad_token_id") and
412
+ torch.any(input_ids == generation_config.pad_token_id) and
413
+ attention_mask is None
414
+ ):
415
+ logger.warning_once(
416
+ "Padding was detected but no attention mask is passed here. For correct "
417
+ "generation results, please set `attention_mask` when batch-padding inputs."
418
+ )
419
+
420
+ # 5. initialize kv cache
421
+ model_kwargs["use_cache"] = generation_config.use_cache
422
+ if model_kwargs["use_cache"]:
423
+ model_kwargs["past_key_values"] = DynamicCache()
424
+ model_kwargs["prompt_length"] = input_ids.shape[1] - 1
425
+ else:
426
+ model_kwargs["past_key_values"] = None
427
+ model_kwargs["prompt_length"] = input_ids.shape[1] - 1
428
+
429
+ # 6. Expand inputs for generation
430
+ input_ids, model_kwargs = self._expand_inputs_for_generation(
431
+ input_ids=input_ids,
432
+ expand_size=generation_config.num_return_sequences,
433
+ **model_kwargs,
434
+ )
435
+
436
+ # 7. pad mask for generation
437
+ input_ids, model_kwargs = self._mask_pad_inputs_for_generation(
438
+ input_ids=input_ids,
439
+ generation_config=generation_config,
440
+ **model_kwargs,
441
+ )
442
+ model_kwargs["total_sequence_length"] = input_ids.shape[1]
443
+
444
+ # 8. initialize cache position
445
+ if model_kwargs["use_cache"]:
446
+ model_kwargs["cache_position"] = torch.ones_like(input_ids[0, :], dtype=torch.int64).cumsum(0) - 1
447
+ else:
448
+ model_kwargs["cache_position"] = None
449
+ # 9. Generate
450
+ result = self._sample(
451
+ input_ids,
452
+ generation_config=generation_config,
453
+ generation_tokens_hook_func=generation_tokens_hook_func,
454
+ generation_logits_hook_func=generation_logits_hook_func,
455
+ **model_kwargs,
456
+ )
457
+ return result
458
+
459
+ def _sample(
460
+ self,
461
+ input_ids: torch.LongTensor,
462
+ generation_config: DreamVLGenerationConfig,
463
+ generation_tokens_hook_func,
464
+ generation_logits_hook_func,
465
+ **model_kwargs,
466
+ ) -> Union[DreamVLModelOutput, torch.LongTensor]:
467
+ # init values
468
+ output_history = generation_config.output_history
469
+ return_dict_in_generate = generation_config.return_dict_in_generate
470
+ max_length = generation_config.max_length
471
+ mask_token_id = generation_config.mask_token_id
472
+ pad_token_id = generation_config.pad_token_id
473
+ steps = generation_config.steps
474
+ eps = generation_config.eps
475
+ alg = generation_config.alg
476
+ alg_temp = generation_config.alg_temp
477
+ temperature = generation_config.temperature
478
+ eos_penalty = generation_config.eos_penalty
479
+ top_p = generation_config.top_p
480
+ top_k = generation_config.top_k
481
+ # print(generation_config.__dict__)
482
+
483
+ histories = [] if (return_dict_in_generate and output_history) else None
484
+
485
+ timesteps = torch.linspace(1, eps, steps + 1, device=input_ids.device)
486
+
487
+ x = generation_tokens_hook_func(None, input_ids, None)
488
+
489
+ # this allows user-defined token control of the intermediate steps
490
+ for i in range(steps):
491
+ model_inputs = self.prepare_inputs_for_generation(x, **model_kwargs)
492
+ x = model_inputs.pop("input_ids").clone()
493
+ mask_index = (x == mask_token_id)
494
+ outputs = self(x, **model_inputs)
495
+
496
+ if 'inputs_embeds' not in model_kwargs:
497
+ # initialize the inputs_embeds for caching
498
+ model_kwargs['inputs_embeds'] = outputs.inputs_embeds
499
+
500
+ model_kwargs = self._update_model_kwargs_for_generation(outputs, model_kwargs)
501
+
502
+ logits = outputs.logits
503
+ assert torch.all(x[:,0] != mask_token_id), "The first token should not be a mask token"
504
+ logits = torch.cat([logits[:,:1], logits[:, :-1]], dim=1)
505
+
506
+ # this allows user-defined logits control of the intermediate steps
507
+ logits = generation_logits_hook_func(i, x, logits)
508
+
509
+ mask_logits = logits[mask_index]
510
+ t = timesteps[i]
511
+ s = timesteps[i + 1]
512
+ mask_logits[:,pad_token_id] += eos_penalty * torch.log(1-t+eps)
513
+
514
+ if alg == 'origin':
515
+ p_transfer = 1 - s / t if i < steps - 1 else 1
516
+ x0 = torch.zeros_like(x[mask_index], device=self.device, dtype=torch.long) + mask_token_id
517
+ transfer_index_t_s = torch.rand(*x0.shape, device=self.device) < p_transfer
518
+ _, x0[transfer_index_t_s]= sample_tokens(mask_logits[transfer_index_t_s], temperature=temperature, top_p=top_p, top_k=top_k)
519
+ x[mask_index] = x0.clone()
520
+ else:
521
+ if alg == 'maskgit_plus':
522
+ confidence, x0 = sample_tokens(mask_logits, temperature=temperature, top_p=top_p, top_k=top_k)
523
+ elif alg == 'topk_margin':
524
+ confidence, x0 = sample_tokens(mask_logits, temperature=temperature, top_p=top_p, top_k=top_k, margin_confidence=True)
525
+ elif alg == 'entropy':
526
+ confidence, x0 = sample_tokens(mask_logits, temperature, top_p=top_p, top_k=top_k, neg_entropy=True)
527
+ else:
528
+ raise RuntimeError(f"Unknown alg: {alg}")
529
+ num_mask_token = mask_index.sum()
530
+ number_transfer_tokens = int(num_mask_token * (1 - s / t)) if i < steps - 1 else num_mask_token
531
+ if number_transfer_tokens > 0:
532
+ if alg_temp is None or alg_temp == 0:
533
+ _, transfer_index = torch.topk(confidence, number_transfer_tokens)
534
+ else:
535
+ confidence = confidence / alg_temp
536
+ confidence = F.softmax(confidence, dim=-1)
537
+ transfer_index = torch.multinomial(confidence, num_samples=number_transfer_tokens)
538
+ x0_ = torch.zeros_like(x0, device=self.device, dtype=torch.long) + mask_token_id
539
+ x0_[transfer_index] = x0[transfer_index].clone()
540
+ x[mask_index] = x0_
541
+
542
+ # this allows user-defined token control of the intermediate steps
543
+ x = generation_tokens_hook_func(i, x, logits)
544
+
545
+ if histories is not None:
546
+ histories.append(x.clone())
547
+
548
+ ## update inputs_embeds of all the mask tokens where some are just unmasked
549
+ model_kwargs['inputs_embeds'][mask_index] = self.get_input_embeddings()(x[mask_index])
550
+
551
+ if return_dict_in_generate:
552
+ return DreamVLModelOutput(
553
+ sequences=x,
554
+ history=histories,
555
+ )
556
+ else:
557
+ return x
image_processing_dreamvl.py ADDED
@@ -0,0 +1,469 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """Image processor class for Dream-VL."""
21
+
22
+ import math
23
+ from typing import Dict, List, Optional, Union
24
+
25
+ import numpy as np
26
+
27
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
28
+ from transformers.image_transforms import (
29
+ convert_to_rgb,
30
+ resize,
31
+ to_channel_dimension_format,
32
+ )
33
+ from transformers.image_utils import (
34
+ OPENAI_CLIP_MEAN,
35
+ OPENAI_CLIP_STD,
36
+ ChannelDimension,
37
+ ImageInput,
38
+ PILImageResampling,
39
+ VideoInput,
40
+ get_image_size,
41
+ infer_channel_dimension_format,
42
+ is_scaled_image,
43
+ is_valid_image,
44
+ make_list_of_images,
45
+ to_numpy_array,
46
+ valid_images,
47
+ validate_preprocess_arguments,
48
+ )
49
+ from transformers.utils import TensorType, is_vision_available, logging
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+ if is_vision_available():
54
+ from PIL import Image
55
+
56
+
57
+ def make_batched_images(images) -> List[List[ImageInput]]:
58
+ """
59
+ Accepts images in list or nested list format, and makes a list of images for preprocessing.
60
+
61
+ Args:
62
+ images (`Union[List[List[ImageInput]], List[ImageInput], ImageInput]`):
63
+ The input image.
64
+
65
+ Returns:
66
+ list: A list of images.
67
+ """
68
+ if isinstance(images, (list, tuple)) and isinstance(images[0], (list, tuple)) and is_valid_image(images[0][0]):
69
+ return [img for img_list in images for img in img_list]
70
+
71
+ elif isinstance(images, (list, tuple)) and is_valid_image(images[0]):
72
+ return images
73
+
74
+ elif is_valid_image(images):
75
+ return [images]
76
+
77
+ raise ValueError(f"Could not make batched images from {images}")
78
+
79
+
80
+ # Copied from transformers.models.emova_next_video.image_processing_emova_next_video.make_batched_videos
81
+ def make_batched_videos(videos) -> List[VideoInput]:
82
+ if isinstance(videos, (list, tuple)) and isinstance(videos[0], (list, tuple)) and is_valid_image(videos[0][0]):
83
+ return videos
84
+
85
+ elif isinstance(videos, (list, tuple)) and is_valid_image(videos[0]):
86
+ if isinstance(videos[0], Image.Image):
87
+ return [videos]
88
+ elif len(videos[0].shape) == 4:
89
+ return [list(video) for video in videos]
90
+
91
+ elif is_valid_image(videos) and len(videos.shape) == 4:
92
+ return [list(videos)]
93
+
94
+ raise ValueError(f"Could not make batched video from {videos}")
95
+
96
+
97
+ def smart_resize(
98
+ height: int, width: int, factor: int = 28, min_pixels: int = 56 * 56, max_pixels: int = 14 * 14 * 4 * 1280
99
+ ):
100
+ """Rescales the image so that the following conditions are met:
101
+
102
+ 1. Both dimensions (height and width) are divisible by 'factor'.
103
+
104
+ 2. The total number of pixels is within the range ['min_pixels', 'max_pixels'].
105
+
106
+ 3. The aspect ratio of the image is maintained as closely as possible.
107
+
108
+ """
109
+ if height < factor or width < factor:
110
+ # print("height, width", height, width)
111
+ if height < width:
112
+ h_bar = factor
113
+ w_bar = round(width / height * factor)
114
+ else:
115
+ h_bar = round(height / width * factor)
116
+ w_bar = factor
117
+ # print("h_bar, w_bar", h_bar, w_bar)
118
+ height, width = h_bar, w_bar
119
+ # raise ValueError(f"height:{height} or width:{width} must be larger than factor:{factor}")
120
+ elif max(height, width) / min(height, width) > 200:
121
+ raise ValueError(
122
+ f"absolute aspect ratio must be smaller than 200, got {max(height, width) / min(height, width)}"
123
+ )
124
+ h_bar = round(height / factor) * factor
125
+ w_bar = round(width / factor) * factor
126
+ if h_bar * w_bar > max_pixels:
127
+ beta = math.sqrt((height * width) / max_pixels)
128
+ h_bar = math.floor(height / beta / factor) * factor
129
+ w_bar = math.floor(width / beta / factor) * factor
130
+ elif h_bar * w_bar < min_pixels:
131
+ beta = math.sqrt(min_pixels / (height * width))
132
+ h_bar = math.ceil(height * beta / factor) * factor
133
+ w_bar = math.ceil(width * beta / factor) * factor
134
+ return h_bar, w_bar
135
+
136
+
137
+ class DreamVLImageProcessor(BaseImageProcessor):
138
+ r"""
139
+ Constructs a Dream-VL image processor that dynamically resizes images based on the original images.
140
+
141
+ Args:
142
+ do_resize (`bool`, *optional*, defaults to `True`):
143
+ Whether to resize the image's (height, width) dimensions.
144
+ resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
145
+ Resampling filter to use when resizing the image.
146
+ do_rescale (`bool`, *optional*, defaults to `True`):
147
+ Whether to rescale the image by the specified scale `rescale_factor`.
148
+ rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
149
+ Scale factor to use if rescaling the image.
150
+ do_normalize (`bool`, *optional*, defaults to `True`):
151
+ Whether to normalize the image.
152
+ image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
153
+ Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
154
+ image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
155
+ Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
156
+ do_convert_rgb (`bool`, *optional*, defaults to `True`):
157
+ Whether to convert the image to RGB.
158
+ min_pixels (`int`, *optional*, defaults to `56 * 56`):
159
+ The min pixels of the image to resize the image.
160
+ max_pixels (`int`, *optional*, defaults to `28 * 28 * 1280`):
161
+ The max pixels of the image to resize the image.
162
+ patch_size (`int`, *optional*, defaults to 14):
163
+ The spacial patch size of the vision encoder.
164
+ temporal_patch_size (`int`, *optional*, defaults to 2):
165
+ The temporal patch size of the vision encoder.
166
+ merge_size (`int`, *optional*, defaults to 2):
167
+ The merge size of the vision encoder to llm encoder.
168
+ """
169
+
170
+ model_input_names = ["pixel_values", "image_grid_thw", "pixel_values_videos", "video_grid_thw"]
171
+
172
+ def __init__(
173
+ self,
174
+ do_resize: bool = True,
175
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
176
+ do_rescale: bool = True,
177
+ rescale_factor: Union[int, float] = 1 / 255,
178
+ do_normalize: bool = True,
179
+ image_mean: Optional[Union[float, List[float]]] = None,
180
+ image_std: Optional[Union[float, List[float]]] = None,
181
+ do_convert_rgb: bool = True,
182
+ min_pixels: int = 56 * 56,
183
+ max_pixels: int = 28 * 28 * 1280,
184
+ patch_size: int = 14,
185
+ temporal_patch_size: int = 2,
186
+ merge_size: int = 2,
187
+ **kwargs,
188
+ ) -> None:
189
+ super().__init__(**kwargs)
190
+ self.do_resize = do_resize
191
+ self.resample = resample
192
+ self.do_rescale = do_rescale
193
+ self.rescale_factor = rescale_factor
194
+ self.do_normalize = do_normalize
195
+ self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
196
+ self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
197
+ self.min_pixels = min_pixels
198
+ self.max_pixels = max_pixels
199
+ self.patch_size = patch_size
200
+ self.temporal_patch_size = temporal_patch_size
201
+ self.merge_size = merge_size
202
+ self.size = {"min_pixels": min_pixels, "max_pixels": max_pixels}
203
+ self.do_convert_rgb = do_convert_rgb
204
+
205
+ def _preprocess(
206
+ self,
207
+ images: Union[ImageInput, VideoInput],
208
+ do_resize: bool = None,
209
+ resample: PILImageResampling = None,
210
+ do_rescale: bool = None,
211
+ rescale_factor: float = None,
212
+ do_normalize: bool = None,
213
+ image_mean: Optional[Union[float, List[float]]] = None,
214
+ image_std: Optional[Union[float, List[float]]] = None,
215
+ do_convert_rgb: bool = None,
216
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
217
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
218
+ ):
219
+ """
220
+ Preprocess an image or batch of images. Copy of the `preprocess` method from `CLIPImageProcessor`.
221
+
222
+ Args:
223
+ images (`ImageInput`):
224
+ Image or batch of images to preprocess. Expects pixel values ranging from 0 to 255. If pixel values range from 0 to 1, set `do_rescale=False`.
225
+ vision_info (`List[Dict]`, *optional*):
226
+ Optional list of dictionaries containing additional information about vision inputs.
227
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
228
+ Whether to resize the image.
229
+ resample (`PILImageResampling`, *optional*, defaults to `self.resample`):
230
+ Resampling filter to use if resizing the image. This can be one of the `PILImageResampling` enums.
231
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
232
+ Whether to rescale the image.
233
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
234
+ Scale factor to use if rescaling the image.
235
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
236
+ Whether to normalize the image.
237
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
238
+ Mean to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
239
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
240
+ Standard deviation to use if normalizing the image. Can be a float or a list of floats corresponding to the number of channels in the image.
241
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
242
+ Whether to convert the image to RGB.
243
+ data_format (`ChannelDimension`, *optional*, defaults to `ChannelDimension.FIRST`):
244
+ The channel dimension format for the output image. Can be one of:
245
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
246
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
247
+ - Unset: Use the channel dimension format of the input image.
248
+ input_data_format (`ChannelDimension` or `str`, *optional*):
249
+ The channel dimension format for the input image. Can be one of:
250
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
251
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
252
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format. - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
253
+ """
254
+ # import pdb; pdb.set_trace()
255
+ # print("images", images)
256
+ # for image in images:
257
+ # print("image", image.size)
258
+ images = make_list_of_images(images)
259
+
260
+ if do_convert_rgb:
261
+ images = [convert_to_rgb(image) for image in images]
262
+
263
+ # All transformations expect numpy arrays.
264
+ images = [to_numpy_array(image) for image in images]
265
+
266
+ if is_scaled_image(images[0]) and do_rescale:
267
+ logger.warning_once(
268
+ "It looks like you are trying to rescale already rescaled images. If the input"
269
+ " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
270
+ )
271
+ if input_data_format is None:
272
+ # We assume that all images have the same channel dimension format.
273
+ input_data_format = infer_channel_dimension_format(images[0])
274
+
275
+ height, width = get_image_size(images[0], channel_dim=input_data_format)
276
+ resized_height, resized_width = height, width
277
+ processed_images = []
278
+ for image in images:
279
+ if do_resize:
280
+ resized_height, resized_width = smart_resize(
281
+ height,
282
+ width,
283
+ factor=self.patch_size * self.merge_size,
284
+ min_pixels=self.min_pixels,
285
+ max_pixels=self.max_pixels,
286
+ )
287
+ image = resize(
288
+ image, size=(resized_height, resized_width), resample=resample, input_data_format=input_data_format
289
+ )
290
+
291
+ if do_rescale:
292
+ image = self.rescale(image, scale=rescale_factor, input_data_format=input_data_format)
293
+
294
+ if do_normalize:
295
+ image = self.normalize(
296
+ image=image, mean=image_mean, std=image_std, input_data_format=input_data_format
297
+ )
298
+
299
+ image = to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format)
300
+ processed_images.append(image)
301
+
302
+ patches = np.array(processed_images)
303
+ if data_format == ChannelDimension.LAST:
304
+ patches = patches.transpose(0, 3, 1, 2)
305
+ if patches.shape[0] == 1:
306
+ patches = np.tile(patches, (self.temporal_patch_size, 1, 1, 1))
307
+ channel = patches.shape[1]
308
+ grid_t = patches.shape[0] // self.temporal_patch_size
309
+ grid_h, grid_w = resized_height // self.patch_size, resized_width // self.patch_size
310
+ patches = patches.reshape(
311
+ grid_t,
312
+ self.temporal_patch_size,
313
+ channel,
314
+ grid_h // self.merge_size,
315
+ self.merge_size,
316
+ self.patch_size,
317
+ grid_w // self.merge_size,
318
+ self.merge_size,
319
+ self.patch_size,
320
+ )
321
+ patches = patches.transpose(0, 3, 6, 4, 7, 2, 1, 5, 8)
322
+ flatten_patches = patches.reshape(
323
+ grid_t * grid_h * grid_w, channel * self.temporal_patch_size * self.patch_size * self.patch_size
324
+ )
325
+
326
+ return flatten_patches, (grid_t, grid_h, grid_w)
327
+
328
+ def preprocess(
329
+ self,
330
+ images: ImageInput,
331
+ videos: VideoInput = None,
332
+ do_resize: bool = None,
333
+ size: Dict[str, int] = None,
334
+ resample: PILImageResampling = None,
335
+ do_rescale: bool = None,
336
+ rescale_factor: float = None,
337
+ do_normalize: bool = None,
338
+ image_mean: Optional[Union[float, List[float]]] = None,
339
+ image_std: Optional[Union[float, List[float]]] = None,
340
+ do_convert_rgb: bool = None,
341
+ return_tensors: Optional[Union[str, TensorType]] = None,
342
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
343
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
344
+ ):
345
+ """
346
+ Args:
347
+ images (`ImageInput`):
348
+ Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
349
+ passing in images with pixel values between 0 and 1, set `do_rescale=False`.
350
+ videos (`VideoInput`):
351
+ Video to preprocess. Expects a single or batch of videos with pixel values ranging from 0 to 255. If
352
+ passing in videos with pixel values between 0 and 1, set `do_rescale=False`.
353
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
354
+ Whether to resize the image.
355
+ size (`Dict[str, int]`, *optional*, defaults to `self.size`):
356
+ Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
357
+ the longest edge resized to keep the input aspect ratio.
358
+ resample (`int`, *optional*, defaults to `self.resample`):
359
+ Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
360
+ has an effect if `do_resize` is set to `True`.
361
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
362
+ Whether to rescale the image.
363
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
364
+ Rescale factor to rescale the image by if `do_rescale` is set to `True`.
365
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
366
+ Whether to normalize the image.
367
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
368
+ Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
369
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
370
+ Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
371
+ `True`.
372
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
373
+ Whether to convert the image to RGB.
374
+ return_tensors (`str` or `TensorType`, *optional*):
375
+ The type of tensors to return. Can be one of:
376
+ - Unset: Return a list of `np.ndarray`.
377
+ - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
378
+ - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
379
+ - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
380
+ - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
381
+ data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
382
+ The channel dimension format for the output image. Can be one of:
383
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
384
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
385
+ - Unset: Use the channel dimension format of the input image.
386
+ input_data_format (`ChannelDimension` or `str`, *optional*):
387
+ The channel dimension format for the input image. If unset, the channel dimension format is inferred
388
+ from the input image. Can be one of:
389
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
390
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
391
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
392
+
393
+ """
394
+ do_resize = do_resize if do_resize is not None else self.do_resize
395
+ size = size if size is not None else self.size
396
+ resample = resample if resample is not None else self.resample
397
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
398
+ rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
399
+ do_normalize = do_normalize if do_normalize is not None else self.do_normalize
400
+ image_mean = image_mean if image_mean is not None else self.image_mean
401
+ image_std = image_std if image_std is not None else self.image_std
402
+ do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
403
+
404
+ if images is not None:
405
+ images = make_batched_images(images)
406
+ if videos is not None:
407
+ videos = make_batched_videos(videos)
408
+
409
+ if images is not None and not valid_images(images):
410
+ raise ValueError(
411
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
412
+ "torch.Tensor, tf.Tensor or jax.ndarray."
413
+ )
414
+
415
+ validate_preprocess_arguments(
416
+ rescale_factor=rescale_factor,
417
+ do_normalize=do_normalize,
418
+ image_mean=image_mean,
419
+ image_std=image_std,
420
+ do_resize=do_resize,
421
+ size=size,
422
+ resample=resample,
423
+ )
424
+
425
+ if images is not None:
426
+ pixel_values, vision_grid_thws = [], []
427
+ for image in images:
428
+ patches, image_grid_thw = self._preprocess(
429
+ image,
430
+ do_resize=do_resize,
431
+ resample=resample,
432
+ do_rescale=do_rescale,
433
+ rescale_factor=rescale_factor,
434
+ do_normalize=do_normalize,
435
+ image_mean=image_mean,
436
+ image_std=image_std,
437
+ data_format=data_format,
438
+ do_convert_rgb=do_convert_rgb,
439
+ input_data_format=input_data_format,
440
+ )
441
+ pixel_values.extend(patches)
442
+ vision_grid_thws.append(image_grid_thw)
443
+ pixel_values = np.array(pixel_values)
444
+ vision_grid_thws = np.array(vision_grid_thws)
445
+ data = {"pixel_values": pixel_values, "image_grid_thw": vision_grid_thws}
446
+
447
+ if videos is not None:
448
+ pixel_values, vision_grid_thws = [], []
449
+ for images in videos:
450
+ patches, video_grid_thw = self._preprocess(
451
+ images,
452
+ do_resize=do_resize,
453
+ resample=resample,
454
+ do_rescale=do_rescale,
455
+ rescale_factor=rescale_factor,
456
+ do_normalize=do_normalize,
457
+ image_mean=image_mean,
458
+ image_std=image_std,
459
+ data_format=data_format,
460
+ do_convert_rgb=do_convert_rgb,
461
+ input_data_format=input_data_format,
462
+ )
463
+ pixel_values.extend(patches)
464
+ vision_grid_thws.append(video_grid_thw)
465
+ pixel_values = np.array(pixel_values)
466
+ vision_grid_thws = np.array(vision_grid_thws)
467
+ data = {"pixel_values_videos": pixel_values, "video_grid_thw": vision_grid_thws}
468
+
469
+ return BatchFeature(data=data, tensor_type=return_tensors)
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ebf1e92c435ac2aef8e8e507091bb6d780b8e09a05131e715e290828372423b
3
+ size 4966659944
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b829e0a28bda9f61b7a6ad9c9fc83718bf72dd58746bc35168ca37f792c3af23
3
+ size 4991495816
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc643f116e54b42cf39629d2fb95095789330e23682151f0834d2a59f146a206
3
+ size 4932751040
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e0283517f91866ab14df8738b620bcda622d0f2ed3d0453a172b1230d620ceb
3
+ size 1743319344
model.safetensors.index.json ADDED
@@ -0,0 +1,741 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "metadata": {
3
+ "total_size": 16634145792
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00004-of-00004.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
13
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
14
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
15
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
16
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
17
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
18
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
19
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
20
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
21
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
22
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
23
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
24
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
25
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
26
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
27
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
28
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
29
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
30
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
31
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
32
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
33
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
34
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
35
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
36
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
37
+ "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
38
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
39
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
40
+ "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
41
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
42
+ "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
43
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
44
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
45
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
46
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
47
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
48
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
49
+ "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
50
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
51
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
52
+ "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
53
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
54
+ "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
55
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
56
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
57
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
58
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
59
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
60
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
61
+ "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
62
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
63
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
64
+ "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
65
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
66
+ "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
67
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
68
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
69
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
70
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
71
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
72
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
73
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
74
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
75
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
76
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
77
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
78
+ "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
79
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
80
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
81
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
82
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
83
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
84
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
85
+ "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
86
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
87
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
88
+ "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
89
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
90
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
91
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
92
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
93
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
94
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
95
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
96
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
97
+ "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
98
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
99
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
100
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
101
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
102
+ "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
103
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
104
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
105
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
106
+ "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
107
+ "model.layers.16.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
108
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
109
+ "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
110
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
111
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
112
+ "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
113
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
114
+ "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
115
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
116
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00004.safetensors",
117
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
118
+ "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
119
+ "model.layers.17.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
120
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
121
+ "model.layers.17.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
122
+ "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
123
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
124
+ "model.layers.17.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
125
+ "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
126
+ "model.layers.17.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
127
+ "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
128
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
129
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
130
+ "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
131
+ "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
132
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
133
+ "model.layers.18.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
134
+ "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
135
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
136
+ "model.layers.18.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
137
+ "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
138
+ "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
139
+ "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
140
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
141
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
142
+ "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
143
+ "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
144
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
145
+ "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
146
+ "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
147
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
148
+ "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
149
+ "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
150
+ "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
151
+ "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
152
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
153
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
154
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
155
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
156
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
157
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
158
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
159
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
160
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
161
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
162
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
163
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
164
+ "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
165
+ "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
166
+ "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
167
+ "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
168
+ "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
169
+ "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
170
+ "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
171
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
172
+ "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
173
+ "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
174
+ "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
175
+ "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
176
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
177
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
178
+ "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
179
+ "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
180
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
181
+ "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
182
+ "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
183
+ "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
184
+ "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
185
+ "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
186
+ "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
187
+ "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
188
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
189
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
190
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
191
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
192
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
193
+ "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
194
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
195
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
196
+ "model.layers.22.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
197
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
198
+ "model.layers.22.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
199
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
200
+ "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
201
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
202
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
203
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
204
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
205
+ "model.layers.23.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
206
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
207
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
208
+ "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
209
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
210
+ "model.layers.23.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
211
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
212
+ "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
213
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
214
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
215
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
216
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
217
+ "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
218
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
219
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
220
+ "model.layers.24.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
221
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
222
+ "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
223
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
224
+ "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
225
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
226
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
227
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
228
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
229
+ "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
230
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
231
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
232
+ "model.layers.25.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
233
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
234
+ "model.layers.25.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
235
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
236
+ "model.layers.26.input_layernorm.weight": "model-00004-of-00004.safetensors",
237
+ "model.layers.26.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
238
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
239
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
240
+ "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
241
+ "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
242
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
243
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
244
+ "model.layers.26.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
245
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
246
+ "model.layers.26.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
247
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
248
+ "model.layers.27.input_layernorm.weight": "model-00004-of-00004.safetensors",
249
+ "model.layers.27.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
250
+ "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
251
+ "model.layers.27.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
252
+ "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
253
+ "model.layers.27.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
254
+ "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
255
+ "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
256
+ "model.layers.27.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
257
+ "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
258
+ "model.layers.27.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
259
+ "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
260
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
261
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
262
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
263
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
264
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
265
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
266
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
267
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
268
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
269
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
270
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
271
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
272
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
273
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
274
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
275
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
276
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
277
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
278
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
279
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
280
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
281
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
282
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
283
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
284
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00004.safetensors",
285
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
286
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
287
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
288
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
289
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
290
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
291
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
292
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
293
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
294
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
295
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
296
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00004.safetensors",
297
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
298
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
299
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
300
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
301
+ "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
302
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
303
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
304
+ "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
305
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
306
+ "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
307
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
308
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
309
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
310
+ "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
311
+ "model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
312
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
313
+ "model.layers.7.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
314
+ "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
315
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
316
+ "model.layers.7.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
317
+ "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
318
+ "model.layers.7.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
319
+ "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
320
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
321
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
322
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
323
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
324
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
325
+ "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
326
+ "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
327
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
328
+ "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
329
+ "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
330
+ "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
331
+ "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
332
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
333
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
334
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
335
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
336
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
337
+ "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
338
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
339
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
340
+ "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
341
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
342
+ "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
343
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
344
+ "model.norm.weight": "model-00004-of-00004.safetensors",
345
+ "projector.linear_1.bias": "model-00004-of-00004.safetensors",
346
+ "projector.linear_1.weight": "model-00004-of-00004.safetensors",
347
+ "projector.linear_2.bias": "model-00004-of-00004.safetensors",
348
+ "projector.linear_2.weight": "model-00004-of-00004.safetensors",
349
+ "visual.blocks.0.attn.proj.bias": "model-00001-of-00004.safetensors",
350
+ "visual.blocks.0.attn.proj.weight": "model-00001-of-00004.safetensors",
351
+ "visual.blocks.0.attn.qkv.bias": "model-00001-of-00004.safetensors",
352
+ "visual.blocks.0.attn.qkv.weight": "model-00001-of-00004.safetensors",
353
+ "visual.blocks.0.mlp.fc1.bias": "model-00001-of-00004.safetensors",
354
+ "visual.blocks.0.mlp.fc1.weight": "model-00001-of-00004.safetensors",
355
+ "visual.blocks.0.mlp.fc2.bias": "model-00001-of-00004.safetensors",
356
+ "visual.blocks.0.mlp.fc2.weight": "model-00001-of-00004.safetensors",
357
+ "visual.blocks.0.norm1.bias": "model-00001-of-00004.safetensors",
358
+ "visual.blocks.0.norm1.weight": "model-00001-of-00004.safetensors",
359
+ "visual.blocks.0.norm2.bias": "model-00001-of-00004.safetensors",
360
+ "visual.blocks.0.norm2.weight": "model-00001-of-00004.safetensors",
361
+ "visual.blocks.1.attn.proj.bias": "model-00001-of-00004.safetensors",
362
+ "visual.blocks.1.attn.proj.weight": "model-00001-of-00004.safetensors",
363
+ "visual.blocks.1.attn.qkv.bias": "model-00001-of-00004.safetensors",
364
+ "visual.blocks.1.attn.qkv.weight": "model-00001-of-00004.safetensors",
365
+ "visual.blocks.1.mlp.fc1.bias": "model-00001-of-00004.safetensors",
366
+ "visual.blocks.1.mlp.fc1.weight": "model-00001-of-00004.safetensors",
367
+ "visual.blocks.1.mlp.fc2.bias": "model-00001-of-00004.safetensors",
368
+ "visual.blocks.1.mlp.fc2.weight": "model-00001-of-00004.safetensors",
369
+ "visual.blocks.1.norm1.bias": "model-00001-of-00004.safetensors",
370
+ "visual.blocks.1.norm1.weight": "model-00001-of-00004.safetensors",
371
+ "visual.blocks.1.norm2.bias": "model-00001-of-00004.safetensors",
372
+ "visual.blocks.1.norm2.weight": "model-00001-of-00004.safetensors",
373
+ "visual.blocks.10.attn.proj.bias": "model-00001-of-00004.safetensors",
374
+ "visual.blocks.10.attn.proj.weight": "model-00001-of-00004.safetensors",
375
+ "visual.blocks.10.attn.qkv.bias": "model-00001-of-00004.safetensors",
376
+ "visual.blocks.10.attn.qkv.weight": "model-00001-of-00004.safetensors",
377
+ "visual.blocks.10.mlp.fc1.bias": "model-00001-of-00004.safetensors",
378
+ "visual.blocks.10.mlp.fc1.weight": "model-00001-of-00004.safetensors",
379
+ "visual.blocks.10.mlp.fc2.bias": "model-00001-of-00004.safetensors",
380
+ "visual.blocks.10.mlp.fc2.weight": "model-00001-of-00004.safetensors",
381
+ "visual.blocks.10.norm1.bias": "model-00001-of-00004.safetensors",
382
+ "visual.blocks.10.norm1.weight": "model-00001-of-00004.safetensors",
383
+ "visual.blocks.10.norm2.bias": "model-00001-of-00004.safetensors",
384
+ "visual.blocks.10.norm2.weight": "model-00001-of-00004.safetensors",
385
+ "visual.blocks.11.attn.proj.bias": "model-00001-of-00004.safetensors",
386
+ "visual.blocks.11.attn.proj.weight": "model-00001-of-00004.safetensors",
387
+ "visual.blocks.11.attn.qkv.bias": "model-00001-of-00004.safetensors",
388
+ "visual.blocks.11.attn.qkv.weight": "model-00001-of-00004.safetensors",
389
+ "visual.blocks.11.mlp.fc1.bias": "model-00001-of-00004.safetensors",
390
+ "visual.blocks.11.mlp.fc1.weight": "model-00001-of-00004.safetensors",
391
+ "visual.blocks.11.mlp.fc2.bias": "model-00001-of-00004.safetensors",
392
+ "visual.blocks.11.mlp.fc2.weight": "model-00001-of-00004.safetensors",
393
+ "visual.blocks.11.norm1.bias": "model-00001-of-00004.safetensors",
394
+ "visual.blocks.11.norm1.weight": "model-00001-of-00004.safetensors",
395
+ "visual.blocks.11.norm2.bias": "model-00001-of-00004.safetensors",
396
+ "visual.blocks.11.norm2.weight": "model-00001-of-00004.safetensors",
397
+ "visual.blocks.12.attn.proj.bias": "model-00001-of-00004.safetensors",
398
+ "visual.blocks.12.attn.proj.weight": "model-00001-of-00004.safetensors",
399
+ "visual.blocks.12.attn.qkv.bias": "model-00001-of-00004.safetensors",
400
+ "visual.blocks.12.attn.qkv.weight": "model-00001-of-00004.safetensors",
401
+ "visual.blocks.12.mlp.fc1.bias": "model-00001-of-00004.safetensors",
402
+ "visual.blocks.12.mlp.fc1.weight": "model-00001-of-00004.safetensors",
403
+ "visual.blocks.12.mlp.fc2.bias": "model-00001-of-00004.safetensors",
404
+ "visual.blocks.12.mlp.fc2.weight": "model-00001-of-00004.safetensors",
405
+ "visual.blocks.12.norm1.bias": "model-00001-of-00004.safetensors",
406
+ "visual.blocks.12.norm1.weight": "model-00001-of-00004.safetensors",
407
+ "visual.blocks.12.norm2.bias": "model-00001-of-00004.safetensors",
408
+ "visual.blocks.12.norm2.weight": "model-00001-of-00004.safetensors",
409
+ "visual.blocks.13.attn.proj.bias": "model-00001-of-00004.safetensors",
410
+ "visual.blocks.13.attn.proj.weight": "model-00001-of-00004.safetensors",
411
+ "visual.blocks.13.attn.qkv.bias": "model-00001-of-00004.safetensors",
412
+ "visual.blocks.13.attn.qkv.weight": "model-00001-of-00004.safetensors",
413
+ "visual.blocks.13.mlp.fc1.bias": "model-00001-of-00004.safetensors",
414
+ "visual.blocks.13.mlp.fc1.weight": "model-00001-of-00004.safetensors",
415
+ "visual.blocks.13.mlp.fc2.bias": "model-00001-of-00004.safetensors",
416
+ "visual.blocks.13.mlp.fc2.weight": "model-00001-of-00004.safetensors",
417
+ "visual.blocks.13.norm1.bias": "model-00001-of-00004.safetensors",
418
+ "visual.blocks.13.norm1.weight": "model-00001-of-00004.safetensors",
419
+ "visual.blocks.13.norm2.bias": "model-00001-of-00004.safetensors",
420
+ "visual.blocks.13.norm2.weight": "model-00001-of-00004.safetensors",
421
+ "visual.blocks.14.attn.proj.bias": "model-00001-of-00004.safetensors",
422
+ "visual.blocks.14.attn.proj.weight": "model-00001-of-00004.safetensors",
423
+ "visual.blocks.14.attn.qkv.bias": "model-00001-of-00004.safetensors",
424
+ "visual.blocks.14.attn.qkv.weight": "model-00001-of-00004.safetensors",
425
+ "visual.blocks.14.mlp.fc1.bias": "model-00001-of-00004.safetensors",
426
+ "visual.blocks.14.mlp.fc1.weight": "model-00001-of-00004.safetensors",
427
+ "visual.blocks.14.mlp.fc2.bias": "model-00001-of-00004.safetensors",
428
+ "visual.blocks.14.mlp.fc2.weight": "model-00001-of-00004.safetensors",
429
+ "visual.blocks.14.norm1.bias": "model-00001-of-00004.safetensors",
430
+ "visual.blocks.14.norm1.weight": "model-00001-of-00004.safetensors",
431
+ "visual.blocks.14.norm2.bias": "model-00001-of-00004.safetensors",
432
+ "visual.blocks.14.norm2.weight": "model-00001-of-00004.safetensors",
433
+ "visual.blocks.15.attn.proj.bias": "model-00001-of-00004.safetensors",
434
+ "visual.blocks.15.attn.proj.weight": "model-00001-of-00004.safetensors",
435
+ "visual.blocks.15.attn.qkv.bias": "model-00001-of-00004.safetensors",
436
+ "visual.blocks.15.attn.qkv.weight": "model-00001-of-00004.safetensors",
437
+ "visual.blocks.15.mlp.fc1.bias": "model-00001-of-00004.safetensors",
438
+ "visual.blocks.15.mlp.fc1.weight": "model-00001-of-00004.safetensors",
439
+ "visual.blocks.15.mlp.fc2.bias": "model-00001-of-00004.safetensors",
440
+ "visual.blocks.15.mlp.fc2.weight": "model-00001-of-00004.safetensors",
441
+ "visual.blocks.15.norm1.bias": "model-00001-of-00004.safetensors",
442
+ "visual.blocks.15.norm1.weight": "model-00001-of-00004.safetensors",
443
+ "visual.blocks.15.norm2.bias": "model-00001-of-00004.safetensors",
444
+ "visual.blocks.15.norm2.weight": "model-00001-of-00004.safetensors",
445
+ "visual.blocks.16.attn.proj.bias": "model-00001-of-00004.safetensors",
446
+ "visual.blocks.16.attn.proj.weight": "model-00001-of-00004.safetensors",
447
+ "visual.blocks.16.attn.qkv.bias": "model-00001-of-00004.safetensors",
448
+ "visual.blocks.16.attn.qkv.weight": "model-00001-of-00004.safetensors",
449
+ "visual.blocks.16.mlp.fc1.bias": "model-00001-of-00004.safetensors",
450
+ "visual.blocks.16.mlp.fc1.weight": "model-00001-of-00004.safetensors",
451
+ "visual.blocks.16.mlp.fc2.bias": "model-00001-of-00004.safetensors",
452
+ "visual.blocks.16.mlp.fc2.weight": "model-00001-of-00004.safetensors",
453
+ "visual.blocks.16.norm1.bias": "model-00001-of-00004.safetensors",
454
+ "visual.blocks.16.norm1.weight": "model-00001-of-00004.safetensors",
455
+ "visual.blocks.16.norm2.bias": "model-00001-of-00004.safetensors",
456
+ "visual.blocks.16.norm2.weight": "model-00001-of-00004.safetensors",
457
+ "visual.blocks.17.attn.proj.bias": "model-00001-of-00004.safetensors",
458
+ "visual.blocks.17.attn.proj.weight": "model-00001-of-00004.safetensors",
459
+ "visual.blocks.17.attn.qkv.bias": "model-00001-of-00004.safetensors",
460
+ "visual.blocks.17.attn.qkv.weight": "model-00001-of-00004.safetensors",
461
+ "visual.blocks.17.mlp.fc1.bias": "model-00001-of-00004.safetensors",
462
+ "visual.blocks.17.mlp.fc1.weight": "model-00001-of-00004.safetensors",
463
+ "visual.blocks.17.mlp.fc2.bias": "model-00001-of-00004.safetensors",
464
+ "visual.blocks.17.mlp.fc2.weight": "model-00001-of-00004.safetensors",
465
+ "visual.blocks.17.norm1.bias": "model-00001-of-00004.safetensors",
466
+ "visual.blocks.17.norm1.weight": "model-00001-of-00004.safetensors",
467
+ "visual.blocks.17.norm2.bias": "model-00001-of-00004.safetensors",
468
+ "visual.blocks.17.norm2.weight": "model-00001-of-00004.safetensors",
469
+ "visual.blocks.18.attn.proj.bias": "model-00001-of-00004.safetensors",
470
+ "visual.blocks.18.attn.proj.weight": "model-00001-of-00004.safetensors",
471
+ "visual.blocks.18.attn.qkv.bias": "model-00001-of-00004.safetensors",
472
+ "visual.blocks.18.attn.qkv.weight": "model-00001-of-00004.safetensors",
473
+ "visual.blocks.18.mlp.fc1.bias": "model-00001-of-00004.safetensors",
474
+ "visual.blocks.18.mlp.fc1.weight": "model-00001-of-00004.safetensors",
475
+ "visual.blocks.18.mlp.fc2.bias": "model-00001-of-00004.safetensors",
476
+ "visual.blocks.18.mlp.fc2.weight": "model-00001-of-00004.safetensors",
477
+ "visual.blocks.18.norm1.bias": "model-00001-of-00004.safetensors",
478
+ "visual.blocks.18.norm1.weight": "model-00001-of-00004.safetensors",
479
+ "visual.blocks.18.norm2.bias": "model-00001-of-00004.safetensors",
480
+ "visual.blocks.18.norm2.weight": "model-00001-of-00004.safetensors",
481
+ "visual.blocks.19.attn.proj.bias": "model-00001-of-00004.safetensors",
482
+ "visual.blocks.19.attn.proj.weight": "model-00001-of-00004.safetensors",
483
+ "visual.blocks.19.attn.qkv.bias": "model-00001-of-00004.safetensors",
484
+ "visual.blocks.19.attn.qkv.weight": "model-00001-of-00004.safetensors",
485
+ "visual.blocks.19.mlp.fc1.bias": "model-00001-of-00004.safetensors",
486
+ "visual.blocks.19.mlp.fc1.weight": "model-00001-of-00004.safetensors",
487
+ "visual.blocks.19.mlp.fc2.bias": "model-00001-of-00004.safetensors",
488
+ "visual.blocks.19.mlp.fc2.weight": "model-00001-of-00004.safetensors",
489
+ "visual.blocks.19.norm1.bias": "model-00001-of-00004.safetensors",
490
+ "visual.blocks.19.norm1.weight": "model-00001-of-00004.safetensors",
491
+ "visual.blocks.19.norm2.bias": "model-00001-of-00004.safetensors",
492
+ "visual.blocks.19.norm2.weight": "model-00001-of-00004.safetensors",
493
+ "visual.blocks.2.attn.proj.bias": "model-00001-of-00004.safetensors",
494
+ "visual.blocks.2.attn.proj.weight": "model-00001-of-00004.safetensors",
495
+ "visual.blocks.2.attn.qkv.bias": "model-00001-of-00004.safetensors",
496
+ "visual.blocks.2.attn.qkv.weight": "model-00001-of-00004.safetensors",
497
+ "visual.blocks.2.mlp.fc1.bias": "model-00001-of-00004.safetensors",
498
+ "visual.blocks.2.mlp.fc1.weight": "model-00001-of-00004.safetensors",
499
+ "visual.blocks.2.mlp.fc2.bias": "model-00001-of-00004.safetensors",
500
+ "visual.blocks.2.mlp.fc2.weight": "model-00001-of-00004.safetensors",
501
+ "visual.blocks.2.norm1.bias": "model-00001-of-00004.safetensors",
502
+ "visual.blocks.2.norm1.weight": "model-00001-of-00004.safetensors",
503
+ "visual.blocks.2.norm2.bias": "model-00001-of-00004.safetensors",
504
+ "visual.blocks.2.norm2.weight": "model-00001-of-00004.safetensors",
505
+ "visual.blocks.20.attn.proj.bias": "model-00001-of-00004.safetensors",
506
+ "visual.blocks.20.attn.proj.weight": "model-00001-of-00004.safetensors",
507
+ "visual.blocks.20.attn.qkv.bias": "model-00001-of-00004.safetensors",
508
+ "visual.blocks.20.attn.qkv.weight": "model-00001-of-00004.safetensors",
509
+ "visual.blocks.20.mlp.fc1.bias": "model-00001-of-00004.safetensors",
510
+ "visual.blocks.20.mlp.fc1.weight": "model-00001-of-00004.safetensors",
511
+ "visual.blocks.20.mlp.fc2.bias": "model-00001-of-00004.safetensors",
512
+ "visual.blocks.20.mlp.fc2.weight": "model-00001-of-00004.safetensors",
513
+ "visual.blocks.20.norm1.bias": "model-00001-of-00004.safetensors",
514
+ "visual.blocks.20.norm1.weight": "model-00001-of-00004.safetensors",
515
+ "visual.blocks.20.norm2.bias": "model-00001-of-00004.safetensors",
516
+ "visual.blocks.20.norm2.weight": "model-00001-of-00004.safetensors",
517
+ "visual.blocks.21.attn.proj.bias": "model-00001-of-00004.safetensors",
518
+ "visual.blocks.21.attn.proj.weight": "model-00001-of-00004.safetensors",
519
+ "visual.blocks.21.attn.qkv.bias": "model-00001-of-00004.safetensors",
520
+ "visual.blocks.21.attn.qkv.weight": "model-00001-of-00004.safetensors",
521
+ "visual.blocks.21.mlp.fc1.bias": "model-00001-of-00004.safetensors",
522
+ "visual.blocks.21.mlp.fc1.weight": "model-00001-of-00004.safetensors",
523
+ "visual.blocks.21.mlp.fc2.bias": "model-00001-of-00004.safetensors",
524
+ "visual.blocks.21.mlp.fc2.weight": "model-00001-of-00004.safetensors",
525
+ "visual.blocks.21.norm1.bias": "model-00001-of-00004.safetensors",
526
+ "visual.blocks.21.norm1.weight": "model-00001-of-00004.safetensors",
527
+ "visual.blocks.21.norm2.bias": "model-00001-of-00004.safetensors",
528
+ "visual.blocks.21.norm2.weight": "model-00001-of-00004.safetensors",
529
+ "visual.blocks.22.attn.proj.bias": "model-00001-of-00004.safetensors",
530
+ "visual.blocks.22.attn.proj.weight": "model-00001-of-00004.safetensors",
531
+ "visual.blocks.22.attn.qkv.bias": "model-00001-of-00004.safetensors",
532
+ "visual.blocks.22.attn.qkv.weight": "model-00001-of-00004.safetensors",
533
+ "visual.blocks.22.mlp.fc1.bias": "model-00001-of-00004.safetensors",
534
+ "visual.blocks.22.mlp.fc1.weight": "model-00001-of-00004.safetensors",
535
+ "visual.blocks.22.mlp.fc2.bias": "model-00001-of-00004.safetensors",
536
+ "visual.blocks.22.mlp.fc2.weight": "model-00001-of-00004.safetensors",
537
+ "visual.blocks.22.norm1.bias": "model-00001-of-00004.safetensors",
538
+ "visual.blocks.22.norm1.weight": "model-00001-of-00004.safetensors",
539
+ "visual.blocks.22.norm2.bias": "model-00001-of-00004.safetensors",
540
+ "visual.blocks.22.norm2.weight": "model-00001-of-00004.safetensors",
541
+ "visual.blocks.23.attn.proj.bias": "model-00001-of-00004.safetensors",
542
+ "visual.blocks.23.attn.proj.weight": "model-00001-of-00004.safetensors",
543
+ "visual.blocks.23.attn.qkv.bias": "model-00001-of-00004.safetensors",
544
+ "visual.blocks.23.attn.qkv.weight": "model-00001-of-00004.safetensors",
545
+ "visual.blocks.23.mlp.fc1.bias": "model-00001-of-00004.safetensors",
546
+ "visual.blocks.23.mlp.fc1.weight": "model-00001-of-00004.safetensors",
547
+ "visual.blocks.23.mlp.fc2.bias": "model-00001-of-00004.safetensors",
548
+ "visual.blocks.23.mlp.fc2.weight": "model-00001-of-00004.safetensors",
549
+ "visual.blocks.23.norm1.bias": "model-00001-of-00004.safetensors",
550
+ "visual.blocks.23.norm1.weight": "model-00001-of-00004.safetensors",
551
+ "visual.blocks.23.norm2.bias": "model-00001-of-00004.safetensors",
552
+ "visual.blocks.23.norm2.weight": "model-00001-of-00004.safetensors",
553
+ "visual.blocks.24.attn.proj.bias": "model-00001-of-00004.safetensors",
554
+ "visual.blocks.24.attn.proj.weight": "model-00001-of-00004.safetensors",
555
+ "visual.blocks.24.attn.qkv.bias": "model-00001-of-00004.safetensors",
556
+ "visual.blocks.24.attn.qkv.weight": "model-00001-of-00004.safetensors",
557
+ "visual.blocks.24.mlp.fc1.bias": "model-00001-of-00004.safetensors",
558
+ "visual.blocks.24.mlp.fc1.weight": "model-00001-of-00004.safetensors",
559
+ "visual.blocks.24.mlp.fc2.bias": "model-00001-of-00004.safetensors",
560
+ "visual.blocks.24.mlp.fc2.weight": "model-00001-of-00004.safetensors",
561
+ "visual.blocks.24.norm1.bias": "model-00001-of-00004.safetensors",
562
+ "visual.blocks.24.norm1.weight": "model-00001-of-00004.safetensors",
563
+ "visual.blocks.24.norm2.bias": "model-00001-of-00004.safetensors",
564
+ "visual.blocks.24.norm2.weight": "model-00001-of-00004.safetensors",
565
+ "visual.blocks.25.attn.proj.bias": "model-00001-of-00004.safetensors",
566
+ "visual.blocks.25.attn.proj.weight": "model-00001-of-00004.safetensors",
567
+ "visual.blocks.25.attn.qkv.bias": "model-00001-of-00004.safetensors",
568
+ "visual.blocks.25.attn.qkv.weight": "model-00001-of-00004.safetensors",
569
+ "visual.blocks.25.mlp.fc1.bias": "model-00001-of-00004.safetensors",
570
+ "visual.blocks.25.mlp.fc1.weight": "model-00001-of-00004.safetensors",
571
+ "visual.blocks.25.mlp.fc2.bias": "model-00001-of-00004.safetensors",
572
+ "visual.blocks.25.mlp.fc2.weight": "model-00001-of-00004.safetensors",
573
+ "visual.blocks.25.norm1.bias": "model-00001-of-00004.safetensors",
574
+ "visual.blocks.25.norm1.weight": "model-00001-of-00004.safetensors",
575
+ "visual.blocks.25.norm2.bias": "model-00001-of-00004.safetensors",
576
+ "visual.blocks.25.norm2.weight": "model-00001-of-00004.safetensors",
577
+ "visual.blocks.26.attn.proj.bias": "model-00001-of-00004.safetensors",
578
+ "visual.blocks.26.attn.proj.weight": "model-00001-of-00004.safetensors",
579
+ "visual.blocks.26.attn.qkv.bias": "model-00001-of-00004.safetensors",
580
+ "visual.blocks.26.attn.qkv.weight": "model-00001-of-00004.safetensors",
581
+ "visual.blocks.26.mlp.fc1.bias": "model-00001-of-00004.safetensors",
582
+ "visual.blocks.26.mlp.fc1.weight": "model-00001-of-00004.safetensors",
583
+ "visual.blocks.26.mlp.fc2.bias": "model-00001-of-00004.safetensors",
584
+ "visual.blocks.26.mlp.fc2.weight": "model-00001-of-00004.safetensors",
585
+ "visual.blocks.26.norm1.bias": "model-00001-of-00004.safetensors",
586
+ "visual.blocks.26.norm1.weight": "model-00001-of-00004.safetensors",
587
+ "visual.blocks.26.norm2.bias": "model-00001-of-00004.safetensors",
588
+ "visual.blocks.26.norm2.weight": "model-00001-of-00004.safetensors",
589
+ "visual.blocks.27.attn.proj.bias": "model-00001-of-00004.safetensors",
590
+ "visual.blocks.27.attn.proj.weight": "model-00001-of-00004.safetensors",
591
+ "visual.blocks.27.attn.qkv.bias": "model-00001-of-00004.safetensors",
592
+ "visual.blocks.27.attn.qkv.weight": "model-00001-of-00004.safetensors",
593
+ "visual.blocks.27.mlp.fc1.bias": "model-00001-of-00004.safetensors",
594
+ "visual.blocks.27.mlp.fc1.weight": "model-00001-of-00004.safetensors",
595
+ "visual.blocks.27.mlp.fc2.bias": "model-00001-of-00004.safetensors",
596
+ "visual.blocks.27.mlp.fc2.weight": "model-00001-of-00004.safetensors",
597
+ "visual.blocks.27.norm1.bias": "model-00001-of-00004.safetensors",
598
+ "visual.blocks.27.norm1.weight": "model-00001-of-00004.safetensors",
599
+ "visual.blocks.27.norm2.bias": "model-00001-of-00004.safetensors",
600
+ "visual.blocks.27.norm2.weight": "model-00001-of-00004.safetensors",
601
+ "visual.blocks.28.attn.proj.bias": "model-00001-of-00004.safetensors",
602
+ "visual.blocks.28.attn.proj.weight": "model-00001-of-00004.safetensors",
603
+ "visual.blocks.28.attn.qkv.bias": "model-00001-of-00004.safetensors",
604
+ "visual.blocks.28.attn.qkv.weight": "model-00001-of-00004.safetensors",
605
+ "visual.blocks.28.mlp.fc1.bias": "model-00001-of-00004.safetensors",
606
+ "visual.blocks.28.mlp.fc1.weight": "model-00001-of-00004.safetensors",
607
+ "visual.blocks.28.mlp.fc2.bias": "model-00001-of-00004.safetensors",
608
+ "visual.blocks.28.mlp.fc2.weight": "model-00001-of-00004.safetensors",
609
+ "visual.blocks.28.norm1.bias": "model-00001-of-00004.safetensors",
610
+ "visual.blocks.28.norm1.weight": "model-00001-of-00004.safetensors",
611
+ "visual.blocks.28.norm2.bias": "model-00001-of-00004.safetensors",
612
+ "visual.blocks.28.norm2.weight": "model-00001-of-00004.safetensors",
613
+ "visual.blocks.29.attn.proj.bias": "model-00001-of-00004.safetensors",
614
+ "visual.blocks.29.attn.proj.weight": "model-00001-of-00004.safetensors",
615
+ "visual.blocks.29.attn.qkv.bias": "model-00001-of-00004.safetensors",
616
+ "visual.blocks.29.attn.qkv.weight": "model-00001-of-00004.safetensors",
617
+ "visual.blocks.29.mlp.fc1.bias": "model-00001-of-00004.safetensors",
618
+ "visual.blocks.29.mlp.fc1.weight": "model-00001-of-00004.safetensors",
619
+ "visual.blocks.29.mlp.fc2.bias": "model-00001-of-00004.safetensors",
620
+ "visual.blocks.29.mlp.fc2.weight": "model-00001-of-00004.safetensors",
621
+ "visual.blocks.29.norm1.bias": "model-00001-of-00004.safetensors",
622
+ "visual.blocks.29.norm1.weight": "model-00001-of-00004.safetensors",
623
+ "visual.blocks.29.norm2.bias": "model-00001-of-00004.safetensors",
624
+ "visual.blocks.29.norm2.weight": "model-00001-of-00004.safetensors",
625
+ "visual.blocks.3.attn.proj.bias": "model-00001-of-00004.safetensors",
626
+ "visual.blocks.3.attn.proj.weight": "model-00001-of-00004.safetensors",
627
+ "visual.blocks.3.attn.qkv.bias": "model-00001-of-00004.safetensors",
628
+ "visual.blocks.3.attn.qkv.weight": "model-00001-of-00004.safetensors",
629
+ "visual.blocks.3.mlp.fc1.bias": "model-00001-of-00004.safetensors",
630
+ "visual.blocks.3.mlp.fc1.weight": "model-00001-of-00004.safetensors",
631
+ "visual.blocks.3.mlp.fc2.bias": "model-00001-of-00004.safetensors",
632
+ "visual.blocks.3.mlp.fc2.weight": "model-00001-of-00004.safetensors",
633
+ "visual.blocks.3.norm1.bias": "model-00001-of-00004.safetensors",
634
+ "visual.blocks.3.norm1.weight": "model-00001-of-00004.safetensors",
635
+ "visual.blocks.3.norm2.bias": "model-00001-of-00004.safetensors",
636
+ "visual.blocks.3.norm2.weight": "model-00001-of-00004.safetensors",
637
+ "visual.blocks.30.attn.proj.bias": "model-00001-of-00004.safetensors",
638
+ "visual.blocks.30.attn.proj.weight": "model-00001-of-00004.safetensors",
639
+ "visual.blocks.30.attn.qkv.bias": "model-00001-of-00004.safetensors",
640
+ "visual.blocks.30.attn.qkv.weight": "model-00001-of-00004.safetensors",
641
+ "visual.blocks.30.mlp.fc1.bias": "model-00001-of-00004.safetensors",
642
+ "visual.blocks.30.mlp.fc1.weight": "model-00001-of-00004.safetensors",
643
+ "visual.blocks.30.mlp.fc2.bias": "model-00001-of-00004.safetensors",
644
+ "visual.blocks.30.mlp.fc2.weight": "model-00001-of-00004.safetensors",
645
+ "visual.blocks.30.norm1.bias": "model-00001-of-00004.safetensors",
646
+ "visual.blocks.30.norm1.weight": "model-00001-of-00004.safetensors",
647
+ "visual.blocks.30.norm2.bias": "model-00001-of-00004.safetensors",
648
+ "visual.blocks.30.norm2.weight": "model-00001-of-00004.safetensors",
649
+ "visual.blocks.31.attn.proj.bias": "model-00001-of-00004.safetensors",
650
+ "visual.blocks.31.attn.proj.weight": "model-00001-of-00004.safetensors",
651
+ "visual.blocks.31.attn.qkv.bias": "model-00001-of-00004.safetensors",
652
+ "visual.blocks.31.attn.qkv.weight": "model-00001-of-00004.safetensors",
653
+ "visual.blocks.31.mlp.fc1.bias": "model-00001-of-00004.safetensors",
654
+ "visual.blocks.31.mlp.fc1.weight": "model-00001-of-00004.safetensors",
655
+ "visual.blocks.31.mlp.fc2.bias": "model-00001-of-00004.safetensors",
656
+ "visual.blocks.31.mlp.fc2.weight": "model-00001-of-00004.safetensors",
657
+ "visual.blocks.31.norm1.bias": "model-00001-of-00004.safetensors",
658
+ "visual.blocks.31.norm1.weight": "model-00001-of-00004.safetensors",
659
+ "visual.blocks.31.norm2.bias": "model-00001-of-00004.safetensors",
660
+ "visual.blocks.31.norm2.weight": "model-00001-of-00004.safetensors",
661
+ "visual.blocks.4.attn.proj.bias": "model-00001-of-00004.safetensors",
662
+ "visual.blocks.4.attn.proj.weight": "model-00001-of-00004.safetensors",
663
+ "visual.blocks.4.attn.qkv.bias": "model-00001-of-00004.safetensors",
664
+ "visual.blocks.4.attn.qkv.weight": "model-00001-of-00004.safetensors",
665
+ "visual.blocks.4.mlp.fc1.bias": "model-00001-of-00004.safetensors",
666
+ "visual.blocks.4.mlp.fc1.weight": "model-00001-of-00004.safetensors",
667
+ "visual.blocks.4.mlp.fc2.bias": "model-00001-of-00004.safetensors",
668
+ "visual.blocks.4.mlp.fc2.weight": "model-00001-of-00004.safetensors",
669
+ "visual.blocks.4.norm1.bias": "model-00001-of-00004.safetensors",
670
+ "visual.blocks.4.norm1.weight": "model-00001-of-00004.safetensors",
671
+ "visual.blocks.4.norm2.bias": "model-00001-of-00004.safetensors",
672
+ "visual.blocks.4.norm2.weight": "model-00001-of-00004.safetensors",
673
+ "visual.blocks.5.attn.proj.bias": "model-00001-of-00004.safetensors",
674
+ "visual.blocks.5.attn.proj.weight": "model-00001-of-00004.safetensors",
675
+ "visual.blocks.5.attn.qkv.bias": "model-00001-of-00004.safetensors",
676
+ "visual.blocks.5.attn.qkv.weight": "model-00001-of-00004.safetensors",
677
+ "visual.blocks.5.mlp.fc1.bias": "model-00001-of-00004.safetensors",
678
+ "visual.blocks.5.mlp.fc1.weight": "model-00001-of-00004.safetensors",
679
+ "visual.blocks.5.mlp.fc2.bias": "model-00001-of-00004.safetensors",
680
+ "visual.blocks.5.mlp.fc2.weight": "model-00001-of-00004.safetensors",
681
+ "visual.blocks.5.norm1.bias": "model-00001-of-00004.safetensors",
682
+ "visual.blocks.5.norm1.weight": "model-00001-of-00004.safetensors",
683
+ "visual.blocks.5.norm2.bias": "model-00001-of-00004.safetensors",
684
+ "visual.blocks.5.norm2.weight": "model-00001-of-00004.safetensors",
685
+ "visual.blocks.6.attn.proj.bias": "model-00001-of-00004.safetensors",
686
+ "visual.blocks.6.attn.proj.weight": "model-00001-of-00004.safetensors",
687
+ "visual.blocks.6.attn.qkv.bias": "model-00001-of-00004.safetensors",
688
+ "visual.blocks.6.attn.qkv.weight": "model-00001-of-00004.safetensors",
689
+ "visual.blocks.6.mlp.fc1.bias": "model-00001-of-00004.safetensors",
690
+ "visual.blocks.6.mlp.fc1.weight": "model-00001-of-00004.safetensors",
691
+ "visual.blocks.6.mlp.fc2.bias": "model-00001-of-00004.safetensors",
692
+ "visual.blocks.6.mlp.fc2.weight": "model-00001-of-00004.safetensors",
693
+ "visual.blocks.6.norm1.bias": "model-00001-of-00004.safetensors",
694
+ "visual.blocks.6.norm1.weight": "model-00001-of-00004.safetensors",
695
+ "visual.blocks.6.norm2.bias": "model-00001-of-00004.safetensors",
696
+ "visual.blocks.6.norm2.weight": "model-00001-of-00004.safetensors",
697
+ "visual.blocks.7.attn.proj.bias": "model-00001-of-00004.safetensors",
698
+ "visual.blocks.7.attn.proj.weight": "model-00001-of-00004.safetensors",
699
+ "visual.blocks.7.attn.qkv.bias": "model-00001-of-00004.safetensors",
700
+ "visual.blocks.7.attn.qkv.weight": "model-00001-of-00004.safetensors",
701
+ "visual.blocks.7.mlp.fc1.bias": "model-00001-of-00004.safetensors",
702
+ "visual.blocks.7.mlp.fc1.weight": "model-00001-of-00004.safetensors",
703
+ "visual.blocks.7.mlp.fc2.bias": "model-00001-of-00004.safetensors",
704
+ "visual.blocks.7.mlp.fc2.weight": "model-00001-of-00004.safetensors",
705
+ "visual.blocks.7.norm1.bias": "model-00001-of-00004.safetensors",
706
+ "visual.blocks.7.norm1.weight": "model-00001-of-00004.safetensors",
707
+ "visual.blocks.7.norm2.bias": "model-00001-of-00004.safetensors",
708
+ "visual.blocks.7.norm2.weight": "model-00001-of-00004.safetensors",
709
+ "visual.blocks.8.attn.proj.bias": "model-00001-of-00004.safetensors",
710
+ "visual.blocks.8.attn.proj.weight": "model-00001-of-00004.safetensors",
711
+ "visual.blocks.8.attn.qkv.bias": "model-00001-of-00004.safetensors",
712
+ "visual.blocks.8.attn.qkv.weight": "model-00001-of-00004.safetensors",
713
+ "visual.blocks.8.mlp.fc1.bias": "model-00001-of-00004.safetensors",
714
+ "visual.blocks.8.mlp.fc1.weight": "model-00001-of-00004.safetensors",
715
+ "visual.blocks.8.mlp.fc2.bias": "model-00001-of-00004.safetensors",
716
+ "visual.blocks.8.mlp.fc2.weight": "model-00001-of-00004.safetensors",
717
+ "visual.blocks.8.norm1.bias": "model-00001-of-00004.safetensors",
718
+ "visual.blocks.8.norm1.weight": "model-00001-of-00004.safetensors",
719
+ "visual.blocks.8.norm2.bias": "model-00001-of-00004.safetensors",
720
+ "visual.blocks.8.norm2.weight": "model-00001-of-00004.safetensors",
721
+ "visual.blocks.9.attn.proj.bias": "model-00001-of-00004.safetensors",
722
+ "visual.blocks.9.attn.proj.weight": "model-00001-of-00004.safetensors",
723
+ "visual.blocks.9.attn.qkv.bias": "model-00001-of-00004.safetensors",
724
+ "visual.blocks.9.attn.qkv.weight": "model-00001-of-00004.safetensors",
725
+ "visual.blocks.9.mlp.fc1.bias": "model-00001-of-00004.safetensors",
726
+ "visual.blocks.9.mlp.fc1.weight": "model-00001-of-00004.safetensors",
727
+ "visual.blocks.9.mlp.fc2.bias": "model-00001-of-00004.safetensors",
728
+ "visual.blocks.9.mlp.fc2.weight": "model-00001-of-00004.safetensors",
729
+ "visual.blocks.9.norm1.bias": "model-00001-of-00004.safetensors",
730
+ "visual.blocks.9.norm1.weight": "model-00001-of-00004.safetensors",
731
+ "visual.blocks.9.norm2.bias": "model-00001-of-00004.safetensors",
732
+ "visual.blocks.9.norm2.weight": "model-00001-of-00004.safetensors",
733
+ "visual.merger.ln_q.bias": "model-00001-of-00004.safetensors",
734
+ "visual.merger.ln_q.weight": "model-00001-of-00004.safetensors",
735
+ "visual.merger.mlp.0.bias": "model-00001-of-00004.safetensors",
736
+ "visual.merger.mlp.0.weight": "model-00001-of-00004.safetensors",
737
+ "visual.merger.mlp.2.bias": "model-00001-of-00004.safetensors",
738
+ "visual.merger.mlp.2.weight": "model-00001-of-00004.safetensors",
739
+ "visual.patch_embed.proj.weight": "model-00001-of-00004.safetensors"
740
+ }
741
+ }
modeling_dreamvl.py ADDED
@@ -0,0 +1,1899 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The DreamVL team and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """PyTorch DreamVL model."""
21
+
22
+ import math, os
23
+ from dataclasses import dataclass
24
+ from typing import Any, Dict, List, Optional, Tuple, Union
25
+ import numpy as np
26
+ import torch
27
+ import torch.nn as nn
28
+ import torch.nn.functional as F
29
+ import torch.utils.checkpoint
30
+ from torch.nn import CrossEntropyLoss, LayerNorm
31
+
32
+ from transformers.activations import ACT2FN
33
+ from transformers.cache_utils import Cache, SlidingWindowCache, StaticCache, DynamicCache
34
+ from transformers.modeling_outputs import (
35
+ BaseModelOutputWithPast,
36
+ ModelOutput,
37
+ BaseModelOutput,
38
+ MaskedLMOutput,
39
+ )
40
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
41
+ from transformers.modeling_utils import PreTrainedModel
42
+ from transformers.utils import (
43
+ add_start_docstrings,
44
+ add_start_docstrings_to_model_forward,
45
+ is_flash_attn_2_available,
46
+ is_flash_attn_greater_or_equal_2_10,
47
+ logging,
48
+ replace_return_docstrings,
49
+ is_torchdynamo_compiling
50
+ )
51
+ from transformers import PretrainedConfig
52
+
53
+ from transformers.modeling_attn_mask_utils import (
54
+ AttentionMaskConverter,
55
+ )
56
+
57
+ from .configuration_dreamvl import DreamVLConfig, DreamVLVisionConfig, DreamVLAConfig
58
+ from .generation_utils import DreamVLGenerationMixin, DreamVLGenerationConfig
59
+
60
+
61
+ if is_flash_attn_2_available():
62
+ from flash_attn import flash_attn_varlen_func
63
+
64
+ from transformers.modeling_flash_attention_utils import _flash_attention_forward
65
+ else:
66
+ flash_attn_varlen_func = None
67
+
68
+
69
+ logger = logging.get_logger("DreamVL."+__name__)
70
+
71
+ _CHECKPOINT_FOR_DOC = "DreamVL-7B"
72
+ _CONFIG_FOR_DOC = "DreamVLConfig"
73
+
74
+ @dataclass
75
+ class DreamVLModelOutput(ModelOutput):
76
+ """
77
+ Base class for DreamVL outputs.
78
+ Args:
79
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
80
+ Language modeling loss (for next-token prediction).
81
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
82
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
83
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
84
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
85
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`)
86
+ Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
87
+ `past_key_values` input) to speed up sequential decoding.
88
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
89
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
90
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
91
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
92
+ attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
93
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
94
+ sequence_length)`.
95
+ Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
96
+ heads.
97
+ rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
98
+ The rope index difference between sequence length and multimodal rope.
99
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
100
+ The input embeddings, used for caching image feature during inference when use_cache=False.
101
+ """
102
+
103
+ logits: torch.FloatTensor = None
104
+ loss: Optional[torch.FloatTensor] = None
105
+ past_key_values: Optional[List[torch.FloatTensor]] = None
106
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
107
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
108
+ rope_deltas: Optional[torch.LongTensor] = None
109
+ inputs_embeds: Optional[torch.FloatTensor] = None
110
+
111
+
112
+ class DreamVLRotaryEmbedding(nn.Module):
113
+ def __init__(
114
+ self,
115
+ dim=None,
116
+ max_position_embeddings=2048,
117
+ base=10000,
118
+ device=None,
119
+ scaling_factor=1.0,
120
+ rope_type="default",
121
+ config: Optional[DreamVLConfig] = None,
122
+ ):
123
+ super().__init__()
124
+ # TODO (joao): remove the `if` below, only used for BC
125
+ self.rope_kwargs = {}
126
+ if config is None:
127
+ logger.warning_once(
128
+ "`DreamVLRotaryEmbedding` can now be fully parameterized by passing the model config through the "
129
+ "`config` argument. All other arguments will be removed in v4.46"
130
+ )
131
+ self.rope_kwargs = {
132
+ "rope_type": rope_type,
133
+ "factor": scaling_factor,
134
+ "dim": dim,
135
+ "base": base,
136
+ "max_position_embeddings": max_position_embeddings,
137
+ }
138
+ self.rope_type = rope_type
139
+ self.max_seq_len_cached = max_position_embeddings
140
+ self.original_max_seq_len = max_position_embeddings
141
+ else:
142
+ # BC: "rope_type" was originally "type"
143
+ if config.rope_scaling is not None:
144
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
145
+ else:
146
+ self.rope_type = "default"
147
+ self.max_seq_len_cached = config.max_position_embeddings
148
+ self.original_max_seq_len = config.max_position_embeddings
149
+
150
+ self.config = config
151
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
152
+
153
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device, **self.rope_kwargs)
154
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
155
+ self.original_inv_freq = self.inv_freq
156
+
157
+ def reset_parameters(self):
158
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, self.inv_freq.device, **self.rope_kwargs)
159
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
160
+ self.original_inv_freq = self.inv_freq
161
+
162
+
163
+ def _dynamic_frequency_update(self, position_ids, device):
164
+ """
165
+ dynamic RoPE layers should recompute `inv_freq` in the following situations:
166
+ 1 - growing beyond the cached sequence length (allow scaling)
167
+ 2 - the current sequence length is in the original scale (avoid losing precision with small sequences)
168
+ """
169
+ seq_len = torch.max(position_ids) + 1
170
+ if seq_len > self.max_seq_len_cached: # growth
171
+ inv_freq, self.attention_scaling = self.rope_init_fn(
172
+ self.config, device, seq_len=seq_len, **self.rope_kwargs
173
+ )
174
+ self.register_buffer("inv_freq", inv_freq, persistent=False) # TODO joao: may break with compilation
175
+ self.max_seq_len_cached = seq_len
176
+
177
+ if seq_len < self.original_max_seq_len and self.max_seq_len_cached > self.original_max_seq_len: # reset
178
+ self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
179
+ self.max_seq_len_cached = self.original_max_seq_len
180
+
181
+ @torch.no_grad()
182
+ def forward(self, x, position_ids):
183
+ if "dynamic" in self.rope_type:
184
+ self._dynamic_frequency_update(position_ids, device=x.device)
185
+
186
+ # Core RoPE block. In contrast to other models, DreamVL has different position ids for thw grids
187
+ # So we expand the inv_freq to shape (3, ...)
188
+ inv_freq_expanded = self.inv_freq[None, None, :, None].float().expand(3, position_ids.shape[1], -1, 1)
189
+ position_ids_expanded = position_ids[:, :, None, :].float() # shape (3, bs, 1, positions)
190
+ # Force float32 (see https://github.com/huggingface/transformers/pull/29285)
191
+ device_type = x.device.type
192
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
193
+ with torch.autocast(device_type=device_type, enabled=False):
194
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(2, 3)
195
+ emb = torch.cat((freqs, freqs), dim=-1)
196
+ cos = emb.cos()
197
+ sin = emb.sin()
198
+
199
+ # Advanced RoPE types (e.g. yarn) apply a post-processing scaling factor, equivalent to scaling attention
200
+ cos = cos * self.attention_scaling
201
+ sin = sin * self.attention_scaling
202
+
203
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
204
+
205
+ # Copied from transformers.models.qwen2.modeling_qwen2.Qwen2RMSNorm
206
+ class DreamVLRMSNorm(nn.Module):
207
+ def __init__(self, hidden_size, eps=1e-6):
208
+ """
209
+ DreamVLRMSNorm is equivalent to T5LayerNorm
210
+ """
211
+ super().__init__()
212
+ self.weight = nn.Parameter(torch.ones(hidden_size))
213
+ self.variance_epsilon = eps
214
+
215
+ def forward(self, hidden_states):
216
+ input_dtype = hidden_states.dtype
217
+ hidden_states = hidden_states.to(torch.float32)
218
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
219
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
220
+ return self.weight * hidden_states.to(input_dtype)
221
+
222
+ def extra_repr(self):
223
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
224
+
225
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
226
+ def rotate_half(x):
227
+ """Rotates half the hidden dims of the input."""
228
+ x1 = x[..., : x.shape[-1] // 2]
229
+ x2 = x[..., x.shape[-1] // 2 :]
230
+ return torch.cat((-x2, x1), dim=-1)
231
+
232
+
233
+ def apply_multimodal_rotary_pos_emb(q, k, cos, sin, mrope_section, unsqueeze_dim=1):
234
+ """Applies Rotary Position Embedding with Multimodal Sections to the query and key tensors (https://qwenlm.github.io/blog/qwen2-vl/).
235
+ Explanation:
236
+ Multimodal 3D rotary position embedding is an extension to 1D rotary position embedding. The input embedding
237
+ sequence contains vision (images / videos) embedding and text embedding or just contains text embedding. For
238
+ vision embedding part, we apply rotary position embedding on temporal, height and width dimension seperately.
239
+ Here we split the channel dimension to 3 chunks for the temporal, height and width rotary position embedding.
240
+ For text embedding part, we just apply 1D rotary position embedding. The three rotary position index (temporal,
241
+ height and width) of text embedding is always the same, so the text embedding rotary position embedding has no
242
+ difference with modern LLMs.
243
+ Args:
244
+ q (`torch.Tensor`): The query tensor.
245
+ k (`torch.Tensor`): The key tensor.
246
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
247
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
248
+ position_ids (`torch.Tensor`):
249
+ The position indices of the tokens corresponding to the query and key tensors. For example, this can be
250
+ used to pass offsetted position ids when working with a KV-cache.
251
+ mrope_section(`List(int)`):
252
+ Multimodal rope section is for channel dimension of temporal, height and width in rope calculation.
253
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
254
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
255
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
256
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
257
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
258
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
259
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
260
+ Returns:
261
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
262
+ """
263
+ mrope_section = mrope_section * 2
264
+ cos = torch.cat([m[i % 3] for i, m in enumerate(cos.split(mrope_section, dim=-1))], dim=-1).unsqueeze(
265
+ unsqueeze_dim
266
+ )
267
+ sin = torch.cat([m[i % 3] for i, m in enumerate(sin.split(mrope_section, dim=-1))], dim=-1).unsqueeze(
268
+ unsqueeze_dim
269
+ )
270
+
271
+ q_embed = (q * cos) + (rotate_half(q) * sin)
272
+ k_embed = (k * cos) + (rotate_half(k) * sin)
273
+ return q_embed, k_embed
274
+
275
+
276
+ def apply_rotary_pos_emb_vision(
277
+ q: torch.Tensor, k: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor
278
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
279
+ orig_q_dtype = q.dtype
280
+ orig_k_dtype = k.dtype
281
+ q, k = q.float(), k.float()
282
+ cos, sin = cos.unsqueeze(-2).float(), sin.unsqueeze(-2).float()
283
+ q_embed = (q * cos) + (rotate_half(q) * sin)
284
+ k_embed = (k * cos) + (rotate_half(k) * sin)
285
+ q_embed = q_embed.to(orig_q_dtype)
286
+ k_embed = k_embed.to(orig_k_dtype)
287
+ return q_embed, k_embed
288
+
289
+ # Copied from transformers.models.qwen2.modeling_qwen2.Qwen2MLP
290
+ class DreamVLMLP(nn.Module):
291
+ def __init__(self, config):
292
+ super().__init__()
293
+ self.hidden_size = config.hidden_size
294
+ self.intermediate_size = config.intermediate_size
295
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
296
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
297
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
298
+ self.act_fn = ACT2FN[config.hidden_act]
299
+
300
+ def forward(self, hidden_state):
301
+ return self.down_proj(self.act_fn(self.gate_proj(hidden_state)) * self.up_proj(hidden_state))
302
+
303
+
304
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv
305
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
306
+ """
307
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
308
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
309
+ """
310
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
311
+ if n_rep == 1:
312
+ return hidden_states
313
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
314
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
315
+
316
+
317
+ class DreamVLAttention(nn.Module):
318
+ """
319
+ Multi-headed attention from 'Attention Is All You Need' paper. Modified to use sliding window attention: Longformer
320
+ and "Generating Long Sequences with Sparse Transformers".
321
+ """
322
+
323
+ def __init__(self, config: DreamVLConfig, layer_idx: Optional[int] = None):
324
+ super().__init__()
325
+ self.config = config
326
+ self.layer_idx = layer_idx
327
+ if layer_idx is None:
328
+ logger.warning_once(
329
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
330
+ "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
331
+ "when creating this class."
332
+ )
333
+
334
+ self.hidden_size = config.hidden_size
335
+ self.num_heads = config.num_attention_heads
336
+ self.head_dim = self.hidden_size // self.num_heads
337
+ self.num_key_value_heads = config.num_key_value_heads
338
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
339
+ self.max_position_embeddings = config.max_position_embeddings
340
+ self.rope_theta = config.rope_theta
341
+ self.is_causal = False # not used in Dream
342
+ self.attention_dropout = config.attention_dropout
343
+ self.rope_scaling = config.rope_scaling # in Dream rope scaling is None
344
+ self.mrope_section = config.mrope_section
345
+
346
+ if (self.head_dim * self.num_heads) != self.hidden_size:
347
+ raise ValueError(
348
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
349
+ f" and `num_heads`: {self.num_heads})."
350
+ )
351
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=True)
352
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
353
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
354
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
355
+
356
+ self.rotary_emb = DreamVLRotaryEmbedding(config=self.config)
357
+
358
+ def forward(
359
+ self,
360
+ hidden_states: torch.Tensor,
361
+ attention_mask: Optional[torch.Tensor] = None,
362
+ position_ids: Optional[torch.LongTensor] = None,
363
+ past_key_value: Optional[Cache] = None,
364
+ output_attentions: bool = False,
365
+ use_cache: bool = False,
366
+ cache_position: Optional[torch.LongTensor] = None,
367
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
368
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
369
+ bsz, q_len, _ = hidden_states.size()
370
+
371
+ query_states = self.q_proj(hidden_states)
372
+ key_states = self.k_proj(hidden_states)
373
+ value_states = self.v_proj(hidden_states)
374
+
375
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
376
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
377
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
378
+
379
+ if position_embeddings is None:
380
+ logger.warning_once(
381
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
382
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
383
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
384
+ "removed and `position_embeddings` will be mandatory."
385
+ )
386
+ cos, sin = self.rotary_emb(value_states, position_ids)
387
+ else:
388
+ cos, sin = position_embeddings
389
+ query_states, key_states = apply_multimodal_rotary_pos_emb(
390
+ query_states, key_states, cos, sin, self.mrope_section
391
+ )
392
+
393
+ if past_key_value is not None:
394
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
395
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
396
+
397
+ # repeat k/v heads if n_kv_heads < n_heads
398
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
399
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
400
+
401
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
402
+ if attention_mask is not None: # no matter the length, we just slice it
403
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
404
+ attn_weights = attn_weights + causal_mask
405
+
406
+ # Fix precision issues in DreamVL float16 inference
407
+ # Replace inf values with zeros in attention weights to prevent NaN propagation
408
+ if query_states.dtype == torch.float16:
409
+ attn_weights = torch.where(torch.isinf(attn_weights), torch.zeros_like(attn_weights), attn_weights)
410
+
411
+ # upcast attention to fp32
412
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
413
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
414
+ attn_output = torch.matmul(attn_weights, value_states)
415
+
416
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
417
+ raise ValueError(
418
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
419
+ f" {attn_output.size()}"
420
+ )
421
+
422
+ attn_output = attn_output.transpose(1, 2).contiguous()
423
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
424
+
425
+ attn_output = self.o_proj(attn_output)
426
+
427
+ if not output_attentions:
428
+ attn_weights = None
429
+
430
+ return attn_output, attn_weights, past_key_value
431
+
432
+
433
+ class DreamVLFlashAttention2(DreamVLAttention):
434
+
435
+ def forward(
436
+ self,
437
+ hidden_states: torch.Tensor,
438
+ attention_mask: Optional[torch.Tensor] = None,
439
+ position_ids: Optional[torch.LongTensor] = None,
440
+ past_key_value: Optional[Cache] = None,
441
+ output_attentions: bool = False,
442
+ use_cache: bool = False,
443
+ cache_position: Optional[torch.LongTensor] = None,
444
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
445
+ ):
446
+ bsz, q_len, _ = hidden_states.size()
447
+
448
+ query_states = self.q_proj(hidden_states)
449
+ key_states = self.k_proj(hidden_states)
450
+ value_states = self.v_proj(hidden_states)
451
+
452
+ query_states = query_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
453
+ key_states = key_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
454
+ value_states = value_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
455
+
456
+ # Because the input can be padded, the absolute sequence length depends on the max position id.
457
+ if position_embeddings is None:
458
+ logger.warning_once(
459
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
460
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
461
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
462
+ "removed and `position_embeddings` will be mandatory."
463
+ )
464
+ cos, sin = self.rotary_emb(value_states, position_ids)
465
+ else:
466
+ cos, sin = position_embeddings
467
+ query_states, key_states = apply_multimodal_rotary_pos_emb(
468
+ query_states, key_states, cos, sin, self.mrope_section
469
+ )
470
+
471
+ # repeat k/v heads if n_kv_heads < n_heads
472
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
473
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
474
+ dropout_rate = 0.0 if not self.training else self.attention_dropout
475
+
476
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
477
+ # therefore the input hidden states gets silently casted in float32. Hence, we need
478
+ # cast them back in float16 just to be sure everything works as expected.
479
+ input_dtype = query_states.dtype
480
+ if input_dtype == torch.float32:
481
+ if torch.is_autocast_enabled():
482
+ target_dtype = torch.get_autocast_gpu_dtype()
483
+ # Handle the case where the model is quantized
484
+ elif hasattr(self.config, "_pre_quantization_dtype"):
485
+ target_dtype = self.config._pre_quantization_dtype
486
+ else:
487
+ target_dtype = self.q_proj.weight.dtype
488
+
489
+ logger.warning_once(
490
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
491
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
492
+ f" {target_dtype}."
493
+ )
494
+
495
+ query_states = query_states.to(target_dtype)
496
+ key_states = key_states.to(target_dtype)
497
+ value_states = value_states.to(target_dtype)
498
+
499
+ # Reashape to the expected shape for Flash Attention
500
+ query_states = query_states.transpose(1, 2)
501
+ key_states = key_states.transpose(1, 2)
502
+ value_states = value_states.transpose(1, 2)
503
+
504
+ if (
505
+ self.config.use_sliding_window
506
+ and getattr(self.config, "sliding_window", None) is not None
507
+ and self.layer_idx >= self.config.max_window_layers
508
+ ):
509
+ sliding_window = self.config.sliding_window
510
+ else:
511
+ sliding_window = None
512
+
513
+ attn_output = _flash_attention_forward(
514
+ query_states,
515
+ key_states,
516
+ value_states,
517
+ attention_mask,
518
+ q_len,
519
+ dropout=dropout_rate,
520
+ sliding_window=sliding_window,
521
+ is_causal=False
522
+ )
523
+
524
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
525
+ attn_output = self.o_proj(attn_output)
526
+
527
+ if not output_attentions:
528
+ attn_weights = None
529
+
530
+ return attn_output, attn_weights, past_key_value
531
+
532
+
533
+ class DreamVLSdpaAttention(DreamVLAttention):
534
+ """
535
+ DreamVL attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
536
+ `DreamVLAttention` as the weights of the module stays untouched. The only changes are on the forward pass to adapt to
537
+ SDPA API.
538
+ """
539
+
540
+ # Adapted from DreamVLAttention.forward
541
+ def forward(
542
+ self,
543
+ hidden_states: torch.Tensor,
544
+ attention_mask: Optional[torch.Tensor] = None,
545
+ position_ids: Optional[torch.LongTensor] = None,
546
+ past_key_value: Optional[Cache] = None,
547
+ output_attentions: bool = False,
548
+ use_cache: bool = False,
549
+ cache_position: Optional[torch.LongTensor] = None,
550
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
551
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
552
+ if output_attentions:
553
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
554
+ logger.warning_once(
555
+ "DreamVLModel is using DreamVLSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
556
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
557
+ )
558
+ return super().forward(
559
+ hidden_states=hidden_states,
560
+ attention_mask=attention_mask,
561
+ position_ids=position_ids,
562
+ past_key_value=past_key_value,
563
+ output_attentions=output_attentions,
564
+ use_cache=use_cache,
565
+ # cache_position=cache_position, # not used in Dream
566
+ )
567
+
568
+ bsz, q_len, _ = hidden_states.size()
569
+
570
+ query_states = self.q_proj(hidden_states)
571
+ key_states = self.k_proj(hidden_states)
572
+ value_states = self.v_proj(hidden_states)
573
+
574
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
575
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
576
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
577
+
578
+ if position_embeddings is None:
579
+ logger.warning_once(
580
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
581
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
582
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
583
+ "removed and `position_embeddings` will be mandatory."
584
+ )
585
+ cos, sin = self.rotary_emb(value_states, position_ids)
586
+ else:
587
+ cos, sin = position_embeddings
588
+ query_states, key_states = apply_multimodal_rotary_pos_emb(
589
+ query_states, key_states, cos, sin, self.mrope_section
590
+ )
591
+
592
+ if past_key_value is not None:
593
+ logger.warning_once(
594
+ f"In {self.__class__}, cache is used."
595
+ )
596
+ # print("cache is used")
597
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
598
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
599
+
600
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
601
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
602
+
603
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
604
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
605
+ if query_states.device.type == "cuda" and attention_mask is not None:
606
+ query_states = query_states.contiguous()
607
+ key_states = key_states.contiguous()
608
+ value_states = value_states.contiguous()
609
+
610
+ if isinstance(attention_mask, torch.Tensor) and len(attention_mask.shape) == 2:
611
+ # attention_mask is of shape [B, N], here broadcast to [B, 1, N, N]
612
+ attention_mask = torch.logical_and(
613
+ attention_mask.unsqueeze(1).unsqueeze(-2),
614
+ attention_mask.unsqueeze(1).unsqueeze(-1),
615
+ )
616
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
617
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
618
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
619
+ # is_causal = True if causal_mask is None and q_len > 1 else False # not used in Dream
620
+
621
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
622
+ query_states,
623
+ key_states,
624
+ value_states,
625
+ attn_mask=attention_mask if isinstance(attention_mask, torch.Tensor) else None,
626
+ dropout_p=self.attention_dropout if self.training else 0.0,
627
+ is_causal=False, # hard coded
628
+ )
629
+ if torch.__version__ < "2.5":
630
+ attn_output = torch.nan_to_num(attn_output, nan=0.0)
631
+
632
+ attn_output = attn_output.transpose(1, 2).contiguous()
633
+ attn_output = attn_output.view(bsz, q_len, self.hidden_size)
634
+
635
+ attn_output = self.o_proj(attn_output)
636
+
637
+ return attn_output, None, past_key_value
638
+
639
+
640
+ DreamVL_ATTENTION_CLASSES = {
641
+ "eager": DreamVLAttention,
642
+ "flash_attention_2": DreamVLFlashAttention2,
643
+ "sdpa": DreamVLSdpaAttention,
644
+ }
645
+
646
+
647
+ class DreamVLDecoderLayer(nn.Module):
648
+ def __init__(self, config: DreamVLConfig, layer_idx: int):
649
+ super().__init__()
650
+ self.hidden_size = config.hidden_size
651
+
652
+ if config.sliding_window and config._attn_implementation != "flash_attention_2":
653
+ logger.warning_once(
654
+ f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
655
+ "unexpected results may be encountered."
656
+ )
657
+ self.self_attn = DreamVL_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)
658
+ # self.self_attn = DreamVLSdpaAttention(config, layer_idx)
659
+
660
+ self.mlp = DreamVLMLP(config)
661
+ self.input_layernorm = DreamVLRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
662
+ self.post_attention_layernorm = DreamVLRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
663
+
664
+ def forward(
665
+ self,
666
+ hidden_states: torch.Tensor,
667
+ attention_mask: Optional[torch.Tensor] = None,
668
+ position_ids: Optional[torch.LongTensor] = None,
669
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
670
+ output_attentions: Optional[bool] = False,
671
+ use_cache: Optional[bool] = False,
672
+ cache_position: Optional[torch.LongTensor] = None,
673
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
674
+ **kwargs,
675
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
676
+ """
677
+ Args:
678
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
679
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
680
+ `(batch, sequence_length)` where padding elements are indicated by 0.
681
+ output_attentions (`bool`, *optional*):
682
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
683
+ returned tensors for more detail.
684
+ use_cache (`bool`, *optional*):
685
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
686
+ (see `past_key_values`).
687
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
688
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
689
+ Indices depicting the position of the input sequence tokens in the sequence.
690
+ position_embeddings (`Tuple[torch.FloatTensor, torch.FloatTensor]`, *optional*):
691
+ Tuple containing the cosine and sine positional embeddings of shape `(batch_size, seq_len, head_dim)`,
692
+ with `head_dim` being the embedding dimension of each attention head.
693
+ kwargs (`dict`, *optional*):
694
+ Arbitrary kwargs to be ignored, used for FSDP and other methods that injects code
695
+ into the model
696
+ """
697
+
698
+ residual = hidden_states
699
+
700
+ hidden_states = self.input_layernorm(hidden_states)
701
+
702
+ # Self Attention
703
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
704
+ hidden_states=hidden_states,
705
+ attention_mask=attention_mask,
706
+ position_ids=position_ids,
707
+ past_key_value=past_key_value,
708
+ output_attentions=output_attentions,
709
+ use_cache=use_cache,
710
+ cache_position=cache_position,
711
+ position_embeddings=position_embeddings,
712
+ )
713
+ hidden_states = residual + hidden_states
714
+
715
+ # Fully Connected
716
+ residual = hidden_states
717
+ hidden_states = self.post_attention_layernorm(hidden_states)
718
+ hidden_states = self.mlp(hidden_states)
719
+ hidden_states = residual + hidden_states
720
+
721
+ outputs = (hidden_states,)
722
+
723
+ if output_attentions:
724
+ outputs += (self_attn_weights,)
725
+
726
+ if use_cache:
727
+ outputs += (present_key_value,)
728
+
729
+ return outputs
730
+
731
+ ######## START VISION ########
732
+ class VisionRotaryEmbedding(nn.Module):
733
+ def __init__(self, dim: int, theta: float = 10000.0) -> None:
734
+ super().__init__()
735
+ inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float) / dim))
736
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
737
+
738
+ def forward(self, seqlen: int) -> torch.Tensor:
739
+ seq = torch.arange(seqlen, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
740
+ freqs = torch.outer(seq, self.inv_freq)
741
+ return freqs
742
+
743
+
744
+ class PatchEmbed(nn.Module):
745
+ def __init__(
746
+ self,
747
+ patch_size: int = 14,
748
+ temporal_patch_size: int = 2,
749
+ in_channels: int = 3,
750
+ embed_dim: int = 1152,
751
+ ) -> None:
752
+ super().__init__()
753
+ self.patch_size = patch_size
754
+ self.temporal_patch_size = temporal_patch_size
755
+ self.in_channels = in_channels
756
+ self.embed_dim = embed_dim
757
+
758
+ kernel_size = [temporal_patch_size, patch_size, patch_size]
759
+ self.proj = nn.Conv3d(in_channels, embed_dim, kernel_size=kernel_size, stride=kernel_size, bias=False)
760
+
761
+ def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
762
+ target_dtype = self.proj.weight.dtype
763
+ hidden_states = hidden_states.view(
764
+ -1, self.in_channels, self.temporal_patch_size, self.patch_size, self.patch_size
765
+ )
766
+ hidden_states = self.proj(hidden_states.to(dtype=target_dtype)).view(-1, self.embed_dim)
767
+ return hidden_states
768
+
769
+
770
+ class PatchMerger(nn.Module):
771
+ def __init__(self, dim: int, context_dim: int, spatial_merge_size: int = 2) -> None:
772
+ super().__init__()
773
+ self.hidden_size = context_dim * (spatial_merge_size**2)
774
+ self.ln_q = LayerNorm(context_dim, eps=1e-6)
775
+ self.mlp = nn.Sequential(
776
+ nn.Linear(self.hidden_size, self.hidden_size),
777
+ nn.GELU(),
778
+ nn.Linear(self.hidden_size, dim),
779
+ )
780
+
781
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
782
+ x = self.mlp(self.ln_q(x).view(-1, self.hidden_size))
783
+ return x
784
+
785
+ class VisionMlp(nn.Module):
786
+ def __init__(self, dim: int, hidden_dim: int, hidden_act: str) -> None:
787
+ super().__init__()
788
+ self.fc1 = nn.Linear(dim, hidden_dim)
789
+ self.act = ACT2FN[hidden_act]
790
+ self.fc2 = nn.Linear(hidden_dim, dim)
791
+
792
+ def forward(self, x) -> torch.Tensor:
793
+ return self.fc2(self.act(self.fc1(x)))
794
+
795
+ class VisionAttention(nn.Module):
796
+ def __init__(self, dim: int, num_heads: int = 16) -> None:
797
+ super().__init__()
798
+ self.num_heads = num_heads
799
+ self.head_dim = dim // num_heads
800
+ self.qkv = nn.Linear(dim, dim * 3, bias=True)
801
+ self.proj = nn.Linear(dim, dim)
802
+
803
+ def forward(
804
+ self,
805
+ hidden_states: torch.Tensor,
806
+ cu_seqlens: torch.Tensor,
807
+ rotary_pos_emb: Optional[torch.Tensor] = None,
808
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
809
+ ) -> torch.Tensor:
810
+ seq_length = hidden_states.shape[0]
811
+ q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
812
+ if position_embeddings is None:
813
+ logger.warning_once(
814
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
815
+ "through `rotary_pos_emb` (2D tensor of RoPE theta values), to using externally computed "
816
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.54 `rotary_pos_emb` will be "
817
+ "removed and `position_embeddings` will be mandatory."
818
+ )
819
+ emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
820
+ cos = emb.cos()
821
+ sin = emb.sin()
822
+ else:
823
+ cos, sin = position_embeddings
824
+ q, k = apply_rotary_pos_emb_vision(q, k, cos, sin)
825
+
826
+ attention_mask = torch.full(
827
+ [1, seq_length, seq_length], torch.finfo(q.dtype).min, device=q.device, dtype=q.dtype
828
+ )
829
+ for i in range(1, len(cu_seqlens)):
830
+ attention_mask[..., cu_seqlens[i - 1] : cu_seqlens[i], cu_seqlens[i - 1] : cu_seqlens[i]] = 0
831
+
832
+ q = q.transpose(0, 1)
833
+ k = k.transpose(0, 1)
834
+ v = v.transpose(0, 1)
835
+ attn_weights = torch.matmul(q, k.transpose(1, 2)) / math.sqrt(self.head_dim)
836
+ attn_weights = attn_weights + attention_mask
837
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(q.dtype)
838
+ attn_output = torch.matmul(attn_weights, v)
839
+ attn_output = attn_output.transpose(0, 1)
840
+ attn_output = attn_output.reshape(seq_length, -1)
841
+ attn_output = self.proj(attn_output)
842
+ return attn_output
843
+
844
+ class VisionFlashAttention2(nn.Module):
845
+ def __init__(self, dim: int, num_heads: int = 16) -> None:
846
+ super().__init__()
847
+ self.num_heads = num_heads
848
+ self.qkv = nn.Linear(dim, dim * 3, bias=True)
849
+ self.proj = nn.Linear(dim, dim)
850
+
851
+ def forward(
852
+ self,
853
+ hidden_states: torch.Tensor,
854
+ cu_seqlens: torch.Tensor,
855
+ rotary_pos_emb: Optional[torch.Tensor] = None,
856
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
857
+ ) -> torch.Tensor:
858
+ seq_length = hidden_states.shape[0]
859
+ q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
860
+ if position_embeddings is None:
861
+ logger.warning_once(
862
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
863
+ "through `rotary_pos_emb` (2D tensor of RoPE theta values), to using externally computed "
864
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.54 `rotary_pos_emb` will be "
865
+ "removed and `position_embeddings` will be mandatory."
866
+ )
867
+ emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
868
+ cos = emb.cos()
869
+ sin = emb.sin()
870
+ else:
871
+ cos, sin = position_embeddings
872
+ q, k = apply_rotary_pos_emb_vision(q, k, cos, sin)
873
+
874
+ max_seqlen = (cu_seqlens[1:] - cu_seqlens[:-1]).max().item()
875
+ attn_output = flash_attn_varlen_func(q, k, v, cu_seqlens, cu_seqlens, max_seqlen, max_seqlen).reshape(
876
+ seq_length, -1
877
+ )
878
+ attn_output = self.proj(attn_output)
879
+ return attn_output
880
+
881
+ class VisionSdpaAttention(nn.Module):
882
+ def __init__(self, dim: int, num_heads: int = 16) -> None:
883
+ super().__init__()
884
+ self.num_heads = num_heads
885
+ self.qkv = nn.Linear(dim, dim * 3, bias=True)
886
+ self.proj = nn.Linear(dim, dim)
887
+
888
+ def forward(
889
+ self,
890
+ hidden_states: torch.Tensor,
891
+ cu_seqlens: torch.Tensor,
892
+ rotary_pos_emb: Optional[torch.Tensor] = None,
893
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
894
+ ) -> torch.Tensor:
895
+ seq_length = hidden_states.shape[0]
896
+ q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
897
+ if position_embeddings is None:
898
+ logger.warning_once(
899
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
900
+ "through `rotary_pos_emb` (2D tensor of RoPE theta values), to using externally computed "
901
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.54 `rotary_pos_emb` will be "
902
+ "removed and `position_embeddings` will be mandatory."
903
+ )
904
+ emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
905
+ cos = emb.cos()
906
+ sin = emb.sin()
907
+ else:
908
+ cos, sin = position_embeddings
909
+ q, k = apply_rotary_pos_emb_vision(q, k, cos, sin)
910
+
911
+ attention_mask = torch.zeros([1, seq_length, seq_length], device=q.device, dtype=torch.bool)
912
+ for i in range(1, len(cu_seqlens)):
913
+ attention_mask[..., cu_seqlens[i - 1] : cu_seqlens[i], cu_seqlens[i - 1] : cu_seqlens[i]] = True
914
+ q = q.transpose(0, 1)
915
+ k = k.transpose(0, 1)
916
+ v = v.transpose(0, 1)
917
+ attn_output = F.scaled_dot_product_attention(
918
+ q.unsqueeze(0), k.unsqueeze(0), v.unsqueeze(0), attention_mask, dropout_p=0.0
919
+ )
920
+ attn_output = attn_output.squeeze(0).transpose(0, 1)
921
+ attn_output = attn_output.reshape(seq_length, -1)
922
+ attn_output = self.proj(attn_output)
923
+ return attn_output
924
+
925
+
926
+ VISION_ATTENTION_CLASSES = {
927
+ "eager": VisionAttention,
928
+ "flash_attention_2": VisionFlashAttention2,
929
+ "sdpa": VisionSdpaAttention,
930
+ }
931
+
932
+ class VisionBlock(nn.Module):
933
+ def __init__(self, config, attn_implementation: str = "sdpa") -> None:
934
+ super().__init__()
935
+ self.norm1 = LayerNorm(config.embed_dim, eps=1e-6)
936
+ self.norm2 = LayerNorm(config.embed_dim, eps=1e-6)
937
+ mlp_hidden_dim = int(config.embed_dim * config.mlp_ratio)
938
+
939
+ self.attn = VISION_ATTENTION_CLASSES[attn_implementation](
940
+ config.embed_dim, num_heads=config.num_heads
941
+ )
942
+ self.mlp = VisionMlp(dim=config.embed_dim, hidden_dim=mlp_hidden_dim, hidden_act=config.hidden_act)
943
+
944
+ def forward(
945
+ self,
946
+ hidden_states: torch.Tensor,
947
+ cu_seqlens: torch.Tensor,
948
+ rotary_pos_emb: Optional[torch.Tensor] = None,
949
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None,
950
+ ) -> torch.Tensor:
951
+ hidden_states = hidden_states + self.attn(
952
+ self.norm1(hidden_states),
953
+ cu_seqlens=cu_seqlens,
954
+ rotary_pos_emb=rotary_pos_emb,
955
+ position_embeddings=position_embeddings,
956
+ )
957
+ hidden_states = hidden_states + self.mlp(self.norm2(hidden_states))
958
+ return hidden_states
959
+
960
+
961
+ ######## END VISION ########
962
+
963
+ DreamVL_START_DOCSTRING = r"""
964
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
965
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
966
+ etc.)
967
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
968
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
969
+ and behavior.
970
+ Parameters:
971
+ config ([`DreamVLConfig`]):
972
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
973
+ load the weights associated with the model, only the configuration. Check out the
974
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
975
+ """
976
+
977
+
978
+ @add_start_docstrings(
979
+ "The bare DreamVL Model outputting raw hidden-states without any specific head on top.",
980
+ DreamVL_START_DOCSTRING,
981
+ )
982
+ class DreamVLPreTrainedModel(PreTrainedModel):
983
+ config_class = DreamVLConfig
984
+ base_model_prefix = "model"
985
+ supports_gradient_checkpointing = True
986
+ _no_split_modules = ["DreamVLDecoderLayer", "DreamVLVisionBlock"]
987
+ _skip_keys_device_placement = "past_key_values"
988
+ _supports_flash_attn_2 = True
989
+ _supports_sdpa = True
990
+ _supports_cache_class = True
991
+ _supports_quantized_cache = True
992
+ _supports_static_cache = True
993
+
994
+ def _init_weights(self, module):
995
+ std = self.config.initializer_range
996
+ if isinstance(module, (nn.Linear, nn.Conv3d)):
997
+ module.weight.data.normal_(mean=0.0, std=std)
998
+ if module.bias is not None:
999
+ module.bias.data.zero_()
1000
+ elif isinstance(module, nn.Embedding):
1001
+ module.weight.data.normal_(mean=0.0, std=std)
1002
+ if module.padding_idx is not None:
1003
+ module.weight.data[module.padding_idx].zero_()
1004
+ elif isinstance(module, DreamVLRMSNorm):
1005
+ module.weight.data.fill_(1.0)
1006
+
1007
+ @classmethod
1008
+ def from_pretrained(
1009
+ cls,
1010
+ pretrained_model_name_or_path: Optional[Union[str, os.PathLike]],
1011
+ *model_args,
1012
+ config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None,
1013
+ cache_dir: Optional[Union[str, os.PathLike]] = None,
1014
+ ignore_mismatched_sizes: bool = False,
1015
+ force_download: bool = False,
1016
+ local_files_only: bool = False,
1017
+ token: Optional[Union[str, bool]] = None,
1018
+ revision: str = "main",
1019
+ use_safetensors: Optional[bool] = None,
1020
+ weights_only: bool = True,
1021
+ **kwargs,
1022
+ ):
1023
+ _model = super().from_pretrained(
1024
+ pretrained_model_name_or_path,
1025
+ *model_args,
1026
+ config=config,
1027
+ cache_dir=cache_dir,
1028
+ ignore_mismatched_sizes=ignore_mismatched_sizes,
1029
+ force_download=force_download,
1030
+ local_files_only=local_files_only,
1031
+ token=token,
1032
+ revision=revision,
1033
+ use_safetensors=use_safetensors,
1034
+ weights_only=weights_only,
1035
+ **kwargs,
1036
+ )
1037
+ # NOTE(Lin): we need to override the generation config
1038
+ # because the generation config loaded in `from_pretrained`
1039
+ # does not include all the attributes of DreamVLGenerationConfig
1040
+ resume_download = kwargs.get("resume_download", None)
1041
+ proxies = kwargs.get("proxies", None)
1042
+ subfolder = kwargs.get("subfolder", "")
1043
+ from_auto_class = kwargs.get("_from_auto", False)
1044
+ from_pipeline = kwargs.get("_from_pipeline", None)
1045
+ _model.generation_config = DreamVLGenerationConfig.from_pretrained(
1046
+ pretrained_model_name_or_path,
1047
+ cache_dir=cache_dir,
1048
+ force_download=force_download,
1049
+ resume_download=resume_download,
1050
+ proxies=proxies,
1051
+ local_files_only=local_files_only,
1052
+ token=token,
1053
+ revision=revision,
1054
+ subfolder=subfolder,
1055
+ _from_auto=from_auto_class,
1056
+ _from_pipeline=from_pipeline,
1057
+ )
1058
+ return _model
1059
+
1060
+
1061
+ class DreamVLVisionTransformerPretrainedModel(DreamVLPreTrainedModel):
1062
+ config_class = DreamVLVisionConfig
1063
+ _no_split_modules = ["DreamVLVisionBlock"]
1064
+
1065
+ def __init__(self, config) -> None:
1066
+ super().__init__(config)
1067
+ self.spatial_merge_size = config.spatial_merge_size
1068
+
1069
+ self.patch_embed = PatchEmbed(
1070
+ patch_size=config.patch_size,
1071
+ temporal_patch_size=config.temporal_patch_size,
1072
+ in_channels=config.in_channels,
1073
+ embed_dim=config.embed_dim,
1074
+ )
1075
+
1076
+ head_dim = config.embed_dim // config.num_heads
1077
+ self.rotary_pos_emb = VisionRotaryEmbedding(head_dim // 2)
1078
+
1079
+ self.blocks = nn.ModuleList(
1080
+ [VisionBlock(config, config._attn_implementation) for _ in range(config.depth)]
1081
+ )
1082
+ self.merger = PatchMerger(
1083
+ dim=config.hidden_size, context_dim=config.embed_dim, spatial_merge_size=config.spatial_merge_size
1084
+ )
1085
+ self.gradient_checkpointing = False
1086
+
1087
+ def rot_pos_emb(self, grid_thw):
1088
+ pos_ids = []
1089
+ for t, h, w in grid_thw:
1090
+ hpos_ids = torch.arange(h).unsqueeze(1).expand(-1, w)
1091
+ hpos_ids = hpos_ids.reshape(
1092
+ h // self.spatial_merge_size,
1093
+ self.spatial_merge_size,
1094
+ w // self.spatial_merge_size,
1095
+ self.spatial_merge_size,
1096
+ )
1097
+ hpos_ids = hpos_ids.permute(0, 2, 1, 3)
1098
+ hpos_ids = hpos_ids.flatten()
1099
+
1100
+ wpos_ids = torch.arange(w).unsqueeze(0).expand(h, -1)
1101
+ wpos_ids = wpos_ids.reshape(
1102
+ h // self.spatial_merge_size,
1103
+ self.spatial_merge_size,
1104
+ w // self.spatial_merge_size,
1105
+ self.spatial_merge_size,
1106
+ )
1107
+ wpos_ids = wpos_ids.permute(0, 2, 1, 3)
1108
+ wpos_ids = wpos_ids.flatten()
1109
+ pos_ids.append(torch.stack([hpos_ids, wpos_ids], dim=-1).repeat(t, 1))
1110
+ pos_ids = torch.cat(pos_ids, dim=0)
1111
+ max_grid_size = grid_thw[:, 1:].max()
1112
+ rotary_pos_emb_full = self.rotary_pos_emb(max_grid_size)
1113
+ rotary_pos_emb = rotary_pos_emb_full[pos_ids].flatten(1)
1114
+ return rotary_pos_emb
1115
+
1116
+ def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor) -> torch.Tensor:
1117
+ r"""
1118
+ grid_thw (`torch.LongTensor` of shape `(num_images, 3)`):
1119
+ The temporal, height and width dimensions of feature shape for each image. Each row contains [t, h, w] values.
1120
+ """
1121
+ hidden_states = self.patch_embed(hidden_states)
1122
+ rotary_pos_emb = self.rot_pos_emb(grid_thw)
1123
+ emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
1124
+ position_embeddings = (emb.cos(), emb.sin())
1125
+
1126
+ cu_seqlens = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(
1127
+ dim=0,
1128
+ # Select dtype based on the following factors:
1129
+ # - FA2 requires that cu_seqlens_q must have dtype int32
1130
+ # - torch.onnx.export requires that cu_seqlens_q must have same dtype as grid_thw
1131
+ # See https://github.com/huggingface/transformers/pull/34852 for more information
1132
+ dtype=grid_thw.dtype if torch.jit.is_tracing() else torch.int32,
1133
+ )
1134
+ cu_seqlens = F.pad(cu_seqlens, (1, 0), value=0)
1135
+
1136
+ for blk in self.blocks:
1137
+ if self.gradient_checkpointing and self.training:
1138
+ hidden_states = self._gradient_checkpointing_func(
1139
+ blk.__call__, hidden_states, cu_seqlens, None, position_embeddings
1140
+ )
1141
+ else:
1142
+ hidden_states = blk(hidden_states, cu_seqlens=cu_seqlens, position_embeddings=position_embeddings)
1143
+
1144
+ return self.merger(hidden_states)
1145
+
1146
+ # Copied from transformers.models.llava.modeling_llava.LlavaMultiModalProjector with Llava->DreamVL
1147
+ class DreamVLMultiModalProjector(nn.Module):
1148
+ def __init__(self, config: DreamVLConfig):
1149
+ super().__init__()
1150
+
1151
+ self.linear_1 = nn.Linear(config.vision_config.hidden_size, config.hidden_size, bias=True)
1152
+ self.act = ACT2FN[config.projector_hidden_act]
1153
+ self.linear_2 = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
1154
+
1155
+ def forward(self, image_features):
1156
+ hidden_states = self.linear_1(image_features)
1157
+ hidden_states = self.act(hidden_states)
1158
+ hidden_states = self.linear_2(hidden_states)
1159
+ return hidden_states
1160
+
1161
+ @add_start_docstrings(
1162
+ "The bare DreamVL Model outputting raw hidden-states without any specific head on top.",
1163
+ DreamVL_START_DOCSTRING,
1164
+ )
1165
+ class DreamVLBaseModel(DreamVLPreTrainedModel):
1166
+ def __init__(self, config: DreamVLConfig):
1167
+ super().__init__(config)
1168
+ self.padding_idx = config.pad_token_id
1169
+ self.vocab_size = config.vocab_size
1170
+
1171
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
1172
+ self.layers = nn.ModuleList(
1173
+ [DreamVLDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
1174
+ )
1175
+ self._attn_implementation = config._attn_implementation
1176
+ self.norm = DreamVLRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
1177
+ self.rotary_emb = DreamVLRotaryEmbedding(config=config)
1178
+
1179
+ self.gradient_checkpointing = False
1180
+ # Initialize weights and apply final processing
1181
+ self.post_init()
1182
+
1183
+ def get_input_embeddings(self):
1184
+ return self.embed_tokens
1185
+
1186
+ def set_input_embeddings(self, value):
1187
+ self.embed_tokens = value
1188
+
1189
+ def forward(
1190
+ self,
1191
+ input_ids: torch.LongTensor = None,
1192
+ attention_mask: Optional[torch.Tensor] = None,
1193
+ position_ids: Optional[torch.LongTensor] = None,
1194
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1195
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1196
+ use_cache: Optional[bool] = None,
1197
+ output_attentions: Optional[bool] = None,
1198
+ output_hidden_states: Optional[bool] = None,
1199
+ return_dict: Optional[bool] = None,
1200
+ cache_position: Optional[torch.LongTensor] = None,
1201
+ ) -> Union[Tuple, BaseModelOutput]:
1202
+
1203
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1204
+ output_hidden_states = (
1205
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1206
+ )
1207
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1208
+
1209
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1210
+
1211
+ if (input_ids is None) ^ (inputs_embeds is not None):
1212
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
1213
+
1214
+ if self.gradient_checkpointing and self.training:
1215
+ if use_cache:
1216
+ use_cache = False
1217
+
1218
+ if inputs_embeds is None:
1219
+ inputs_embeds = self.embed_tokens(input_ids)
1220
+
1221
+ if use_cache and past_key_values is None:
1222
+ logger.warning_once(
1223
+ "This should not be triggered, in either training or inference, but if it is, please report it to us."
1224
+ )
1225
+ past_key_values = DynamicCache()
1226
+
1227
+ if cache_position is None:
1228
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
1229
+ cache_position = torch.arange(
1230
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
1231
+ )
1232
+
1233
+ # the hard coded `3` is for temporal, height and width.
1234
+ if position_ids is None:
1235
+ logger.warning_once(
1236
+ "This should not be triggered, in either training or inference, but if it is, please report it to us."
1237
+ )
1238
+ position_ids = cache_position.view(1, 1, -1).expand(3, inputs_embeds.shape[0], -1)
1239
+ elif position_ids.dim() == 2:
1240
+ logger.warning_once(
1241
+ "This should not be triggered, in either training or inference, but if it is, please report it to us."
1242
+ )
1243
+ position_ids = position_ids[None, ...].expand(3, position_ids.shape[0], -1)
1244
+
1245
+ hidden_states = inputs_embeds
1246
+
1247
+ # create position embeddings to be shared across the decoder layers
1248
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
1249
+
1250
+ # decoder layers
1251
+ all_hidden_states = () if output_hidden_states else None
1252
+ all_self_attns = () if output_attentions else None
1253
+
1254
+ for decoder_layer in self.layers:
1255
+ if output_hidden_states:
1256
+ all_hidden_states += (hidden_states,)
1257
+
1258
+ if self.gradient_checkpointing and self.training:
1259
+ layer_outputs = self._gradient_checkpointing_func(
1260
+ decoder_layer.__call__,
1261
+ hidden_states,
1262
+ attention_mask,
1263
+ position_ids,
1264
+ past_key_values,
1265
+ output_attentions,
1266
+ use_cache,
1267
+ cache_position,
1268
+ position_embeddings,
1269
+ )
1270
+ else:
1271
+ layer_outputs = decoder_layer(
1272
+ hidden_states,
1273
+ attention_mask=attention_mask,
1274
+ position_ids=position_ids,
1275
+ past_key_value=past_key_values,
1276
+ output_attentions=output_attentions,
1277
+ use_cache=use_cache,
1278
+ cache_position=cache_position,
1279
+ position_embeddings=position_embeddings,
1280
+ )
1281
+
1282
+ hidden_states = layer_outputs[0]
1283
+
1284
+ if output_attentions:
1285
+ all_self_attns += (layer_outputs[1],)
1286
+
1287
+ hidden_states = self.norm(hidden_states)
1288
+
1289
+ # add hidden states from the last decoder layer
1290
+ if output_hidden_states:
1291
+ all_hidden_states += (hidden_states,)
1292
+
1293
+ if not return_dict:
1294
+ return tuple(v for v in [hidden_states, all_hidden_states, all_self_attns] if v is not None)
1295
+ return BaseModelOutputWithPast(
1296
+ last_hidden_state=hidden_states,
1297
+ past_key_values=past_key_values,
1298
+ hidden_states=all_hidden_states,
1299
+ attentions=all_self_attns,
1300
+ )
1301
+
1302
+ DreamVL_INPUTS_DOCSTRING = r"""
1303
+ Args:
1304
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
1305
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
1306
+ it.
1307
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1308
+ [`PreTrainedTokenizer.__call__`] for details.
1309
+ [What are input IDs?](../glossary#input-ids)
1310
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1311
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1312
+ - 1 for tokens that are **not masked**,
1313
+ - 0 for tokens that are **masked**.
1314
+ [What are attention masks?](../glossary#attention-mask)
1315
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1316
+ [`PreTrainedTokenizer.__call__`] for details.
1317
+ If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
1318
+ `past_key_values`).
1319
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
1320
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
1321
+ information on the default strategy.
1322
+ - 1 indicates the head is **not masked**,
1323
+ - 0 indicates the head is **masked**.
1324
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1325
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
1326
+ config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
1327
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
1328
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
1329
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
1330
+ `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
1331
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
1332
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
1333
+ If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
1334
+ don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
1335
+ `decoder_input_ids` of shape `(batch_size, sequence_length)`.
1336
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
1337
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
1338
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
1339
+ model's internal embedding lookup matrix.
1340
+ use_cache (`bool`, *optional*):
1341
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
1342
+ `past_key_values`).
1343
+ output_attentions (`bool`, *optional*):
1344
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
1345
+ tensors for more detail.
1346
+ output_hidden_states (`bool`, *optional*):
1347
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1348
+ more detail.
1349
+ return_dict (`bool`, *optional*):
1350
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1351
+ pixel_values (`torch.FloatTensor` of shape `(seq_length, num_channels * image_size * image_size)):
1352
+ The tensors corresponding to the input images. Pixel values can be obtained using
1353
+ [`AutoImageProcessor`]. See [`DreamVLImageProcessor.__call__`] for details. [`DreamVLProcessor`] uses
1354
+ [`DreamVLImageProcessor`] for processing images.
1355
+ pixel_values_videos (`torch.FloatTensor` of shape `(seq_length, num_channels * temporal_size * image_size * image_size)):
1356
+ The tensors corresponding to the input videos. Pixel values can be obtained using
1357
+ [`AutoImageProcessor`]. See [`DreamVLImageProcessor.__call__`] for details. [`DreamVLProcessor`] uses
1358
+ [`DreamVLImageProcessor`] for processing videos.
1359
+ image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
1360
+ The temporal, height and width of feature shape of each image in LLM.
1361
+ video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
1362
+ The temporal, height and width of feature shape of each video in LLM.
1363
+ rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
1364
+ The rope index difference between sequence length and multimodal rope.
1365
+ """
1366
+
1367
+
1368
+ class DreamVLModel(DreamVLGenerationMixin, DreamVLPreTrainedModel):
1369
+ _tied_weights_keys = ["lm_head.weight"]
1370
+
1371
+ def __init__(self, config):
1372
+ super().__init__(config)
1373
+ self.visual = DreamVLVisionTransformerPretrainedModel._from_config(config.vision_config)
1374
+ self.model = DreamVLBaseModel(config)
1375
+ self.projector = DreamVLMultiModalProjector(config)
1376
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1377
+ # Initialize weights and apply final processing
1378
+ self.post_init()
1379
+
1380
+ def reset_rope_parameters(self):
1381
+ self.model.rotary_emb.reset_parameters()
1382
+ for layer in self.model.layers:
1383
+ layer.self_attn.rotary_emb.reset_parameters()
1384
+
1385
+ def get_input_embeddings(self):
1386
+ return self.model.embed_tokens
1387
+
1388
+ def set_input_embeddings(self, value):
1389
+ self.model.embed_tokens = value
1390
+
1391
+ def get_output_embeddings(self):
1392
+ return self.lm_head
1393
+
1394
+ def set_output_embeddings(self, new_embeddings):
1395
+ self.lm_head = new_embeddings
1396
+
1397
+ def set_decoder(self, decoder):
1398
+ self.model = decoder
1399
+
1400
+ def get_decoder(self):
1401
+ return self.model
1402
+
1403
+ def get_rope_index(
1404
+ self,
1405
+ input_ids: torch.LongTensor,
1406
+ image_grid_thw: Optional[torch.LongTensor] = None,
1407
+ video_grid_thw: Optional[torch.LongTensor] = None,
1408
+ attention_mask: Optional[torch.Tensor] = None,
1409
+ ) -> Tuple[torch.Tensor, torch.Tensor]:
1410
+ """
1411
+ Calculate the 3D rope index based on image and video's temporal, height and width in LLM.
1412
+ Explanation:
1413
+ Each embedding sequence contains vision embedding and text embedding or just contains text embedding.
1414
+ For pure text embedding sequence, the rotary position embedding has no difference with mordern LLMs.
1415
+ Examples:
1416
+ input_ids: [T T T T T], here T is for text.
1417
+ temporal position_ids: [0, 1, 2, 3, 4]
1418
+ height position_ids: [0, 1, 2, 3, 4]
1419
+ width position_ids: [0, 1, 2, 3, 4]
1420
+ For vision and text embedding sequence, we calculate 3D rotary position embedding for vision part
1421
+ and 1D rotary position embeddin for text part.
1422
+ Examples:
1423
+ Assume we have a video input with 3 temporal patches, 2 height patches and 2 width patches.
1424
+ input_ids: [V V V V V V V V V V V V T T T T T], here V is for vision.
1425
+ vision temporal position_ids: [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
1426
+ vision height position_ids: [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
1427
+ vision width position_ids: [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
1428
+ text temporal position_ids: [3, 4, 5, 6, 7]
1429
+ text height position_ids: [3, 4, 5, 6, 7]
1430
+ text width position_ids: [3, 4, 5, 6, 7]
1431
+ Here we calculate the text start position_ids as the max vision position_ids plus 1.
1432
+ Args:
1433
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
1434
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
1435
+ it.
1436
+ image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
1437
+ The temporal, height and width of feature shape of each image in LLM.
1438
+ video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
1439
+ The temporal, height and width of feature shape of each video in LLM.
1440
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1441
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1442
+ - 1 for tokens that are **not masked**,
1443
+ - 0 for tokens that are **masked**.
1444
+ Returns:
1445
+ position_ids (`torch.LongTensor` of shape `(3, batch_size, sequence_length)`)
1446
+ mrope_position_deltas (`torch.Tensor` of shape `(batch_size)`)
1447
+ """
1448
+ spatial_merge_size = self.config.vision_config.spatial_merge_size
1449
+ image_token_id = self.config.image_token_id
1450
+ video_token_id = self.config.video_token_id
1451
+ vision_start_token_id = self.config.vision_start_token_id
1452
+ mrope_position_deltas = []
1453
+ if image_grid_thw is not None or video_grid_thw is not None:
1454
+ total_input_ids = input_ids
1455
+ if attention_mask is None:
1456
+ attention_mask = torch.ones_like(total_input_ids)
1457
+ position_ids = torch.ones(
1458
+ 3, input_ids.shape[0], input_ids.shape[1], dtype=input_ids.dtype, device=input_ids.device
1459
+ )
1460
+ image_index, video_index = 0, 0
1461
+ for i, input_ids in enumerate(total_input_ids):
1462
+ input_ids = input_ids[attention_mask[i] == 1]
1463
+ image_nums, video_nums = 0, 0
1464
+ vision_start_indices = torch.argwhere(input_ids == vision_start_token_id).squeeze(1)
1465
+ vision_tokens = input_ids[vision_start_indices + 1]
1466
+ image_nums = (vision_tokens == image_token_id).sum()
1467
+ video_nums = (vision_tokens == video_token_id).sum()
1468
+ input_tokens = input_ids.tolist()
1469
+ llm_pos_ids_list: list = []
1470
+ st = 0
1471
+ remain_images, remain_videos = image_nums, video_nums
1472
+ for _ in range(image_nums + video_nums):
1473
+ if image_token_id in input_tokens and remain_images > 0:
1474
+ ed_image = input_tokens.index(image_token_id, st)
1475
+ else:
1476
+ ed_image = len(input_tokens) + 1
1477
+ if video_token_id in input_tokens and remain_videos > 0:
1478
+ ed_video = input_tokens.index(video_token_id, st)
1479
+ else:
1480
+ ed_video = len(input_tokens) + 1
1481
+ if ed_image < ed_video:
1482
+ t, h, w = (
1483
+ image_grid_thw[image_index][0],
1484
+ image_grid_thw[image_index][1],
1485
+ image_grid_thw[image_index][2],
1486
+ )
1487
+ image_index += 1
1488
+ remain_images -= 1
1489
+ ed = ed_image
1490
+ else:
1491
+ t, h, w = (
1492
+ video_grid_thw[video_index][0],
1493
+ video_grid_thw[video_index][1],
1494
+ video_grid_thw[video_index][2],
1495
+ )
1496
+ video_index += 1
1497
+ remain_videos -= 1
1498
+ ed = ed_video
1499
+ llm_grid_t, llm_grid_h, llm_grid_w = (
1500
+ t.item(),
1501
+ h.item() // spatial_merge_size,
1502
+ w.item() // spatial_merge_size,
1503
+ )
1504
+ text_len = ed - st
1505
+
1506
+ st_idx = llm_pos_ids_list[-1].max() + 1 if len(llm_pos_ids_list) > 0 else 0
1507
+ llm_pos_ids_list.append(torch.arange(text_len).view(1, -1).expand(3, -1) + st_idx)
1508
+
1509
+ t_index = torch.arange(llm_grid_t).view(-1, 1).expand(-1, llm_grid_h * llm_grid_w).flatten()
1510
+ h_index = torch.arange(llm_grid_h).view(1, -1, 1).expand(llm_grid_t, -1, llm_grid_w).flatten()
1511
+ w_index = torch.arange(llm_grid_w).view(1, 1, -1).expand(llm_grid_t, llm_grid_h, -1).flatten()
1512
+ llm_pos_ids_list.append(torch.stack([t_index, h_index, w_index]) + text_len + st_idx)
1513
+ st = ed + llm_grid_t * llm_grid_h * llm_grid_w
1514
+
1515
+ if st < len(input_tokens):
1516
+ st_idx = llm_pos_ids_list[-1].max() + 1 if len(llm_pos_ids_list) > 0 else 0
1517
+ text_len = len(input_tokens) - st
1518
+ llm_pos_ids_list.append(torch.arange(text_len).view(1, -1).expand(3, -1) + st_idx)
1519
+
1520
+ llm_positions = torch.cat(llm_pos_ids_list, dim=1).reshape(3, -1)
1521
+ position_ids[..., i, attention_mask[i] == 1] = llm_positions.to(position_ids.device)
1522
+ mrope_position_deltas.append(llm_positions.max() + 1 - len(total_input_ids[i]))
1523
+ mrope_position_deltas = torch.tensor(mrope_position_deltas, device=input_ids.device).unsqueeze(1)
1524
+ return position_ids, mrope_position_deltas
1525
+ else:
1526
+ if attention_mask is not None:
1527
+ position_ids = attention_mask.long().cumsum(-1) - 1
1528
+ position_ids.masked_fill_(attention_mask == 0, 1)
1529
+ position_ids = position_ids.unsqueeze(0).expand(3, -1, -1).to(input_ids.device)
1530
+ max_position_ids = position_ids.max(0, keepdim=False)[0].max(-1, keepdim=True)[0]
1531
+ mrope_position_deltas = max_position_ids + 1 - attention_mask.shape[-1]
1532
+ else:
1533
+ position_ids = (
1534
+ torch.arange(input_ids.shape[1], device=input_ids.device)
1535
+ .view(1, 1, -1)
1536
+ .expand(3, input_ids.shape[0], -1)
1537
+ )
1538
+ mrope_position_deltas = torch.zeros(
1539
+ [input_ids.shape[0], 1],
1540
+ device=input_ids.device,
1541
+ dtype=input_ids.dtype,
1542
+ )
1543
+
1544
+ return position_ids, mrope_position_deltas
1545
+
1546
+ def get_video_features(
1547
+ self, pixel_values_videos: torch.FloatTensor, video_grid_thw: Optional[torch.LongTensor] = None
1548
+ ):
1549
+ """
1550
+ Encodes videos into continuous embeddings that can be forwarded to the language model.
1551
+
1552
+ Args:
1553
+ pixel_values_videos (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
1554
+ The tensors corresponding to the input videos.
1555
+ video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
1556
+ The temporal, height and width of feature shape of each video in LLM.
1557
+ """
1558
+ pixel_values_videos = pixel_values_videos.type(self.visual.dtype)
1559
+ video_embeds = self.visual(pixel_values_videos, grid_thw=video_grid_thw)
1560
+ split_sizes = (video_grid_thw.prod(-1) // self.visual.spatial_merge_size**2).tolist()
1561
+ video_embeds = torch.split(video_embeds, split_sizes)
1562
+ return video_embeds
1563
+
1564
+ def get_image_features(self, pixel_values: torch.FloatTensor, image_grid_thw: Optional[torch.LongTensor] = None):
1565
+ """
1566
+ Encodes images into continuous embeddings that can be forwarded to the language model.
1567
+
1568
+ Args:
1569
+ pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`):
1570
+ The tensors corresponding to the input images.
1571
+ image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
1572
+ The temporal, height and width of feature shape of each image in LLM.
1573
+ """
1574
+ pixel_values = pixel_values.type(self.visual.dtype)
1575
+ image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
1576
+ split_sizes = (image_grid_thw.prod(-1) // self.visual.spatial_merge_size**2).tolist()
1577
+ image_embeds = torch.split(image_embeds, split_sizes)
1578
+ return image_embeds
1579
+
1580
+ @add_start_docstrings_to_model_forward(DreamVL_INPUTS_DOCSTRING)
1581
+ @replace_return_docstrings(output_type=DreamVLModelOutput, config_class=_CONFIG_FOR_DOC)
1582
+ def forward(
1583
+ self,
1584
+ input_ids: torch.LongTensor = None,
1585
+ attention_mask: Optional[torch.Tensor] = None,
1586
+ position_ids: Optional[torch.LongTensor] = None,
1587
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1588
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1589
+ labels: Optional[torch.LongTensor] = None,
1590
+ use_cache: Optional[bool] = None,
1591
+ output_attentions: Optional[bool] = None,
1592
+ output_hidden_states: Optional[bool] = None,
1593
+ return_dict: Optional[bool] = None,
1594
+ pixel_values: Optional[torch.Tensor] = None,
1595
+ pixel_values_videos: Optional[torch.FloatTensor] = None,
1596
+ image_grid_thw: Optional[torch.LongTensor] = None,
1597
+ video_grid_thw: Optional[torch.LongTensor] = None,
1598
+ rope_deltas: Optional[torch.LongTensor] = None,
1599
+ cache_position: Optional[torch.LongTensor] = None,
1600
+ num_logits_to_keep: int = 0,
1601
+ **loss_kwargs,
1602
+ ) -> Union[Tuple, DreamVLModelOutput]:
1603
+ r"""
1604
+ Args:
1605
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1606
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
1607
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1608
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1609
+ Returns:
1610
+ Example:
1611
+ ```python
1612
+ >>> from PIL import Image
1613
+ >>> import requests
1614
+ >>> from transformers import AutoProcessor, DreamVLForConditionalGeneration
1615
+ >>> model = DreamVLForConditionalGeneration.from_pretrained(" ")
1616
+ >>> processor = AutoProcessor.from_pretrained(" ")
1617
+ >>> messages = [
1618
+ {
1619
+ "role": "user",
1620
+ "content": [
1621
+ {"type": "image"},
1622
+ {"type": "text", "text": "What is shown in this image?"},
1623
+ ],
1624
+ },
1625
+ ]
1626
+ >>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
1627
+ >>> image = Image.open(requests.get(url, stream=True).raw)
1628
+ >>> text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
1629
+ >>> inputs = processor(text=[text], images=[image], vision_infos=[vision_infos])
1630
+ >>> # Generate
1631
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1632
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1633
+ "The image shows a street scene with a red stop sign in the foreground. In the background, there is a large red gate with Chinese characters ..."
1634
+ ```"""
1635
+
1636
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1637
+ output_hidden_states = (
1638
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1639
+ )
1640
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1641
+
1642
+ if inputs_embeds is None:
1643
+ inputs_embeds = self.get_input_embeddings()(input_ids)
1644
+ if pixel_values is not None:
1645
+ image_embeds = self.get_image_features(pixel_values, image_grid_thw)
1646
+ image_embeds = torch.cat(image_embeds, dim=0)
1647
+ n_image_tokens = (input_ids == self.config.image_token_id).sum()
1648
+ n_image_features = image_embeds.shape[0]
1649
+ if not is_torchdynamo_compiling() and n_image_tokens != n_image_features:
1650
+ raise ValueError(
1651
+ f"Image features and image tokens do not match: tokens: {n_image_tokens}, features {n_image_features}"
1652
+ )
1653
+
1654
+ mask = input_ids == self.config.image_token_id
1655
+ mask_unsqueezed = mask.unsqueeze(-1)
1656
+ mask_expanded = mask_unsqueezed.expand_as(inputs_embeds)
1657
+
1658
+ image_mask = mask_expanded.to(inputs_embeds.device)
1659
+ image_embeds = image_embeds.to(inputs_embeds.device, inputs_embeds.dtype)
1660
+ image_embeds_projected = self.projector(image_embeds)
1661
+
1662
+ inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds_projected)
1663
+
1664
+ if pixel_values_videos is not None:
1665
+ video_embeds = self.get_video_features(pixel_values_videos, video_grid_thw)
1666
+ video_embeds = torch.cat(video_embeds, dim=0)
1667
+ n_video_tokens = (input_ids == self.config.video_token_id).sum()
1668
+ n_video_features = video_embeds.shape[0]
1669
+ if not is_torchdynamo_compiling() and n_video_tokens != n_video_features:
1670
+ raise ValueError(
1671
+ f"Video features and video tokens do not match: tokens: {n_video_tokens}, features {n_video_features}"
1672
+ )
1673
+
1674
+ mask = input_ids == self.config.video_token_id
1675
+ mask_unsqueezed = mask.unsqueeze(-1)
1676
+ mask_expanded = mask_unsqueezed.expand_as(inputs_embeds)
1677
+
1678
+ video_mask = mask_expanded.to(inputs_embeds.device)
1679
+ video_embeds = video_embeds.to(inputs_embeds.device, inputs_embeds.dtype)
1680
+ video_embeds_projected = self.projector(video_embeds)
1681
+
1682
+ inputs_embeds = inputs_embeds.masked_scatter(video_mask, video_embeds_projected)
1683
+
1684
+ outputs = self.model(
1685
+ attention_mask=attention_mask,
1686
+ position_ids=position_ids,
1687
+ past_key_values=past_key_values,
1688
+ inputs_embeds=inputs_embeds,
1689
+ use_cache=use_cache,
1690
+ output_attentions=output_attentions,
1691
+ output_hidden_states=output_hidden_states,
1692
+ return_dict=return_dict,
1693
+ cache_position=cache_position,
1694
+ )
1695
+
1696
+ hidden_states = outputs[0]
1697
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
1698
+ logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :])
1699
+
1700
+ loss = None
1701
+ if labels is not None:
1702
+ loss = self.loss_function(logits, labels, self.vocab_size, **loss_kwargs)
1703
+
1704
+ if not return_dict:
1705
+ output = (logits,) + outputs[1:]
1706
+ return (loss,) + output if loss is not None else output
1707
+
1708
+ return DreamVLModelOutput(
1709
+ logits=logits,
1710
+ loss=loss,
1711
+ past_key_values=outputs.past_key_values,
1712
+ hidden_states=outputs.hidden_states,
1713
+ attentions=outputs.attentions,
1714
+ rope_deltas=rope_deltas,
1715
+ inputs_embeds=inputs_embeds
1716
+ )
1717
+
1718
+ def prepare_inputs_for_generation(
1719
+ self,
1720
+ input_ids,
1721
+ past_key_values=None,
1722
+ attention_mask=None,
1723
+ inputs_embeds=None,
1724
+ cache_position=None,
1725
+ position_ids=None,
1726
+ use_cache=True,
1727
+ pixel_values=None,
1728
+ pixel_values_videos=None,
1729
+ image_grid_thw=None,
1730
+ video_grid_thw=None,
1731
+ rope_deltas = None,
1732
+ **kwargs,
1733
+ ):
1734
+ # never remove input ids
1735
+
1736
+ if use_cache:
1737
+ if past_key_values is None:
1738
+ raise ValueError(
1739
+ "If `use_cache=True`, `past_key_values` must be provided. Please make sure to pass `past_key_values` to the model."
1740
+ )
1741
+ else:
1742
+ pass
1743
+ else:
1744
+ past_key_values = None
1745
+
1746
+ if use_cache:
1747
+ if cache_position is None:
1748
+ raise ValueError(
1749
+ "If `use_cache=True`, `cache_position` must be provided. Please make sure to pass `cache_position` to the model."
1750
+ )
1751
+ else:
1752
+ pass
1753
+ else:
1754
+ cache_position = None
1755
+
1756
+ if use_cache:
1757
+ if input_ids.shape[1] != cache_position.shape[0]:
1758
+ input_ids = input_ids[:, cache_position]
1759
+ else:
1760
+ pass
1761
+ else:
1762
+ pass
1763
+
1764
+ if position_ids is None:
1765
+ if not use_cache:
1766
+ position_ids, rope_deltas = self.get_rope_index(
1767
+ input_ids, image_grid_thw, video_grid_thw, attention_mask
1768
+ )
1769
+ else:
1770
+ if cache_position[0] == 0:
1771
+ position_ids, rope_deltas = self.get_rope_index(
1772
+ input_ids, image_grid_thw, video_grid_thw, attention_mask
1773
+ )
1774
+ else:
1775
+ batch_size, seq_length = input_ids.shape
1776
+ delta = (
1777
+ cache_position[0] + rope_deltas if cache_position is not None and rope_deltas is not None else 0
1778
+ )
1779
+ position_ids = torch.arange(seq_length, device=input_ids.device)
1780
+ position_ids = position_ids.view(1, -1).expand(batch_size, -1)
1781
+ position_ids = position_ids.add(delta)
1782
+ position_ids = position_ids.unsqueeze(0).expand(3, -1, -1)
1783
+
1784
+ else:
1785
+ raise NotImplementedError(
1786
+ "position_ids is not None, please check the code in prepare_inputs_for_generation"
1787
+ )
1788
+
1789
+ if use_cache:
1790
+ if cache_position[0] != 0:
1791
+ pixel_values = None
1792
+ pixel_values_videos = None
1793
+ logger.debug(f"after prefill, the pixel_values and pixel_values_videos are None.")
1794
+ else:
1795
+ pass
1796
+
1797
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1798
+ # if inputs_embeds is not None:
1799
+ # raise NotImplementedError(
1800
+ # "inputs_embeds is not None, please check the code in prepare_inputs_for_generation"
1801
+ # )
1802
+ # else:
1803
+ # model_inputs = {"input_ids": input_ids, "inputs_embeds": None}
1804
+
1805
+ model_inputs = {
1806
+ "input_ids": input_ids,
1807
+ "inputs_embeds": inputs_embeds,
1808
+ "position_ids": position_ids,
1809
+ "past_key_values": past_key_values,
1810
+ "use_cache": use_cache,
1811
+ "attention_mask": attention_mask,
1812
+ "pixel_values": pixel_values,
1813
+ "pixel_values_videos": pixel_values_videos,
1814
+ "image_grid_thw": image_grid_thw,
1815
+ "video_grid_thw": video_grid_thw,
1816
+ "cache_position": cache_position,
1817
+ "rope_deltas": rope_deltas,
1818
+ }
1819
+
1820
+ return model_inputs
1821
+
1822
+
1823
+ class DreamVLAForActionPrediction(DreamVLModel):
1824
+ config_class: PretrainedConfig = DreamVLAConfig
1825
+
1826
+ def __init__(self, config: DreamVLAConfig) -> None:
1827
+ super().__init__(config)
1828
+ self.norm_stats = config.norm_stats
1829
+
1830
+ # Compute action bins
1831
+ self.bins = np.linspace(-1, 1, config.n_action_bins)
1832
+ self.bin_centers = (self.bins[:-1] + self.bins[1:]) / 2.0
1833
+
1834
+ def predict_action(
1835
+ self, input_ids: Optional[torch.LongTensor] = None, unnorm_key: Optional[str] = None,
1836
+ vocab_size = None, action_chunk=1, action_sep=False, **kwargs: str
1837
+ ) -> np.ndarray:
1838
+ """Thin wrapper around .diffusion_generate() that decodes predicted actions and unnormalizes them."""
1839
+ # Run VLA inference
1840
+ action_dim = self.get_action_dim(unnorm_key)
1841
+ if 'max_new_tokens' not in kwargs:
1842
+ logger.info(f"max_new_tokens is not set, generate one action (max_new_tokens={action_dim}) by default.")
1843
+ kwargs['max_new_tokens'] = action_dim
1844
+ elif kwargs['max_new_tokens'] < action_dim:
1845
+ logger.warning(f"max_new_tokens is too small, reset to one action (max_new_tokens={action_dim}).")
1846
+ kwargs['max_new_tokens'] = action_dim
1847
+
1848
+ # restrict predict token to be action tokens
1849
+ # action_start_idx = self.vocab_size - self.bin_centers.shape[0]
1850
+ # def action_logits_hook(step, x, logits):
1851
+ # logits[:,:,:action_start_idx] -= torch.inf
1852
+ # return logits
1853
+ generated_ids = self.diffusion_generate(input_ids, **kwargs)
1854
+ pred_action_dim = action_dim + (1 if action_sep else 0)
1855
+ # Extract predicted action tokens and translate into (normalized) continuous actions, assume batch size = 1
1856
+ predicted_action_token_ids = generated_ids[0, input_ids.shape[1]:].cpu().numpy()
1857
+ discretized_actions = vocab_size - predicted_action_token_ids[:pred_action_dim*action_chunk]
1858
+ discretized_actions = np.clip(discretized_actions - 1, a_min=0, a_max=self.bin_centers.shape[0] - 1)
1859
+ normalized_actions = self.bin_centers[discretized_actions]
1860
+ normalized_actions = normalized_actions.reshape(-1, pred_action_dim)[:, :action_dim] # [action_chunk, action_dim]
1861
+
1862
+ # Unnormalize actions
1863
+ action_norm_stats = self.get_action_stats(unnorm_key)
1864
+ mask = action_norm_stats.get("mask", np.ones_like(action_norm_stats["q01"], dtype=bool))
1865
+ mask = np.array(mask).reshape(1, -1)
1866
+ action_high, action_low = np.array(action_norm_stats["q99"]).reshape(1, -1), np.array(action_norm_stats["q01"]).reshape(1, -1)
1867
+ actions = np.where(
1868
+ mask,
1869
+ 0.5 * (normalized_actions + 1) * (action_high - action_low) + action_low,
1870
+ normalized_actions,
1871
+ )
1872
+
1873
+ return actions
1874
+
1875
+ @staticmethod
1876
+ def _check_unnorm_key(norm_stats: Dict[str, Dict[str, Any]], unnorm_key: Optional[str]) -> str:
1877
+ if unnorm_key is None:
1878
+ assert len(norm_stats) == 1, (
1879
+ f"Your model was trained on more than one dataset, "
1880
+ f"please pass a `unnorm_key` from the following options to choose the statistics "
1881
+ f"used for un-normalizing actions: {norm_stats.keys()}"
1882
+ )
1883
+ unnorm_key = next(iter(norm_stats.keys()))
1884
+
1885
+ assert unnorm_key in norm_stats, (
1886
+ f"The `unnorm_key` you chose is not in the set of available dataset statistics, "
1887
+ f"please choose from: {norm_stats.keys()}"
1888
+ )
1889
+ return unnorm_key
1890
+
1891
+ def get_action_dim(self, unnorm_key: Optional[str] = None) -> int:
1892
+ """Get the dimensionality of the policy's action space."""
1893
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1894
+ return len(self.norm_stats[unnorm_key]["action"]["q01"])
1895
+
1896
+ def get_action_stats(self, unnorm_key: Optional[str] = None) -> Dict[str, Any]:
1897
+ """Get all the logged statistics for the given dataset."""
1898
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1899
+ return self.norm_stats[unnorm_key]["action"]
preprocessor_config.json ADDED
@@ -0,0 +1,33 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "image_processing_dreamvl.DreamVLImageProcessor",
4
+ "AutoProcessor": "processing_dreamvl.DreamVLProcessor"
5
+ },
6
+ "do_convert_rgb": true,
7
+ "do_normalize": true,
8
+ "do_rescale": true,
9
+ "do_resize": true,
10
+ "image_mean": [
11
+ 0.48145466,
12
+ 0.4578275,
13
+ 0.40821073
14
+ ],
15
+ "image_processor_type": "DreamVLImageProcessor",
16
+ "image_std": [
17
+ 0.26862954,
18
+ 0.26130258,
19
+ 0.27577711
20
+ ],
21
+ "max_pixels": 3211264,
22
+ "merge_size": 2,
23
+ "min_pixels": 3136,
24
+ "patch_size": 14,
25
+ "processor_class": "DreamVLProcessor",
26
+ "resample": 3,
27
+ "rescale_factor": 0.00392156862745098,
28
+ "size": {
29
+ "max_pixels": 3211264,
30
+ "min_pixels": 3136
31
+ },
32
+ "temporal_patch_size": 2
33
+ }
processing_dreamvl.py ADDED
@@ -0,0 +1,183 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """
21
+ Processor class for Dream-VL.
22
+ """
23
+
24
+ from typing import List, Union
25
+
26
+ try:
27
+ from typing import Unpack
28
+ except ImportError:
29
+ from typing_extensions import Unpack
30
+
31
+ from transformers.feature_extraction_utils import BatchFeature
32
+ from transformers.image_utils import ImageInput, VideoInput
33
+ from transformers.processing_utils import (
34
+ ProcessingKwargs,
35
+ ProcessorMixin,
36
+ )
37
+ from transformers.tokenization_utils_base import PreTokenizedInput, TextInput
38
+ from transformers.utils import logging
39
+
40
+ logger = logging.get_logger(__name__)
41
+
42
+
43
+ class DreamVLProcessorKwargs(ProcessingKwargs, total=False):
44
+ _defaults = {
45
+ "text_kwargs": {
46
+ "padding": False,
47
+ },
48
+ }
49
+
50
+
51
+ class DreamVLProcessor(ProcessorMixin):
52
+ r"""
53
+ Constructs a Dream-VL processor which wraps a Dream-VL image processor and a Dream tokenizer into a single processor.
54
+ [`DreamVLProcessor`] offers all the functionalities of [`DreamVLImageProcessor`] and [`DreamTokenizer`]. See the
55
+ [`~DreamVLProcessor.__call__`] and [`~DreamVLProcessor.decode`] for more information.
56
+ Args:
57
+ image_processor ([`DreamVLImageProcessor`], *optional*):
58
+ The image processor is a required input.
59
+ tokenizer ([`DreamTokenizer`], *optional*):
60
+ The tokenizer is a required input.
61
+ chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
62
+ in a chat into a tokenizable string.
63
+ """
64
+
65
+ attributes = ["image_processor", "tokenizer"]
66
+ valid_kwargs = ["chat_template"]
67
+ image_processor_class = "AutoImageProcessor"
68
+ tokenizer_class = ("AutoTokenizer")
69
+
70
+ def __init__(self, image_processor=None, tokenizer=None, chat_template=None, **kwargs):
71
+ super().__init__(image_processor, tokenizer, chat_template=chat_template)
72
+
73
+ def __call__(
74
+ self,
75
+ images: ImageInput = None,
76
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
77
+ videos: VideoInput = None,
78
+ **kwargs: Unpack[DreamVLProcessorKwargs],
79
+ ) -> BatchFeature:
80
+ """
81
+ Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text`
82
+ and `kwargs` arguments to DreamTokenizer's [`~DreamTokenizer.__call__`] if `text` is not `None` to encode
83
+ the text. To prepare the vision inputs, this method forwards the `vision_infos` and `kwrags` arguments to
84
+ DreamVLImageProcessor's [`~DreamVLImageProcessor.__call__`] if `vision_infos` is not `None`.
85
+
86
+ Args:
87
+ images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
88
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
89
+ tensor. Both channels-first and channels-last formats are supported.
90
+ text (`str`, `List[str]`, `List[List[str]]`):
91
+ The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
92
+ (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
93
+ `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
94
+ videos (`np.ndarray`, `torch.Tensor`, `List[np.ndarray]`, `List[torch.Tensor]`):
95
+ The image or batch of videos to be prepared. Each video can be a 4D NumPy array or PyTorch
96
+ tensor, or a nested list of 3D frames. Both channels-first and channels-last formats are supported.
97
+ return_tensors (`str` or [`~utils.TensorType`], *optional*):
98
+ If set, will return tensors of a particular framework. Acceptable values are:
99
+ - `'tf'`: Return TensorFlow `tf.constant` objects.
100
+ - `'pt'`: Return PyTorch `torch.Tensor` objects.
101
+ - `'np'`: Return NumPy `np.ndarray` objects.
102
+ - `'jax'`: Return JAX `jnp.ndarray` objects.
103
+
104
+ Returns:
105
+ [`BatchFeature`]: A [`BatchFeature`] with the following fields:
106
+
107
+ - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`.
108
+ - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
109
+ `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
110
+ `None`).
111
+ - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
112
+ - **pixel_values_videos** -- Pixel values of videos to be fed to a model. Returned when `videos` is not `None`.
113
+ - **image_grid_thw** -- List of image 3D grid in LLM. Returned when `images` is not `None`.
114
+ - **video_grid_thw** -- List of video 3D grid in LLM. Returned when `videos` is not `None`.
115
+ """
116
+ output_kwargs = self._merge_kwargs(
117
+ DreamVLProcessorKwargs,
118
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
119
+ **kwargs,
120
+ )
121
+ if images is not None:
122
+ image_inputs = self.image_processor(images=images, videos=None, **output_kwargs["images_kwargs"])
123
+ image_grid_thw = image_inputs["image_grid_thw"]
124
+ else:
125
+ image_inputs = {}
126
+ image_grid_thw = None
127
+
128
+ if videos is not None:
129
+ videos_inputs = self.image_processor(images=None, videos=videos, **output_kwargs["videos_kwargs"])
130
+ video_grid_thw = videos_inputs["video_grid_thw"]
131
+ else:
132
+ videos_inputs = {}
133
+ video_grid_thw = None
134
+
135
+ if not isinstance(text, list):
136
+ text = [text]
137
+
138
+ if image_grid_thw is not None:
139
+ merge_length = self.image_processor.merge_size ** 2
140
+ index = 0
141
+ for i in range(len(text)):
142
+ while "<|image_pad|>" in text[i]:
143
+ text[i] = text[i].replace(
144
+ "<|image_pad|>", "<|placeholder|>" * (image_grid_thw[index].prod() // merge_length), 1
145
+ )
146
+ index += 1
147
+ text[i] = text[i].replace("<|placeholder|>", "<|image_pad|>")
148
+
149
+ if video_grid_thw is not None:
150
+ merge_length = self.image_processor.merge_size ** 2
151
+ index = 0
152
+ for i in range(len(text)):
153
+ while "<|video_pad|>" in text[i]:
154
+ text[i] = text[i].replace(
155
+ "<|video_pad|>", "<|placeholder|>" * (video_grid_thw[index].prod() // merge_length), 1
156
+ )
157
+ index += 1
158
+ text[i] = text[i].replace("<|placeholder|>", "<|video_pad|>")
159
+
160
+ _ = output_kwargs["text_kwargs"].pop("padding_side", None)
161
+ text_inputs = self.tokenizer(text, **output_kwargs["text_kwargs"])
162
+
163
+ return BatchFeature(data={**text_inputs, **image_inputs, **videos_inputs})
164
+
165
+ def batch_decode(self, *args, **kwargs):
166
+ """
167
+ This method forwards all its arguments to DreamTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
168
+ refer to the docstring of this method for more information.
169
+ """
170
+ return self.tokenizer.batch_decode(*args, **kwargs)
171
+
172
+ def decode(self, *args, **kwargs):
173
+ """
174
+ This method forwards all its arguments to DreamTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to
175
+ the docstring of this method for more information.
176
+ """
177
+ return self.tokenizer.decode(*args, **kwargs)
178
+
179
+ @property
180
+ def model_input_names(self):
181
+ tokenizer_input_names = self.tokenizer.model_input_names
182
+ image_processor_input_names = self.image_processor.model_input_names
183
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
processor_config.json ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_dreamvl.DreamVLProcessor"
4
+ },
5
+ "processor_class": "DreamVLProcessor"
6
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|beginoftext|>",
4
+ "<|mask|>"
5
+ ],
6
+ "bos_token": {
7
+ "content": "<|beginoftext|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false
12
+ },
13
+ "eos_token": {
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false
19
+ },
20
+ "mask_token": {
21
+ "content": "<|mask|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false
26
+ },
27
+ "pad_token": {
28
+ "content": "<|endoftext|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false
33
+ }
34
+ }
tokenization_dream.py ADDED
@@ -0,0 +1,331 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # coding=utf-8
2
+ # Copyright 2024 The Dream team, HKUNLP Group and The HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on Qwen's implementations in this library.
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """Tokenization classes for Dream."""
17
+
18
+ import json
19
+ import os
20
+ import unicodedata
21
+ from functools import lru_cache
22
+ from typing import Optional, Tuple
23
+
24
+ import regex as re
25
+
26
+ from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
27
+ from transformers.utils import logging
28
+
29
+
30
+ logger = logging.get_logger(__name__)
31
+
32
+ VOCAB_FILES_NAMES = {
33
+ "vocab_file": "vocab.json",
34
+ "merges_file": "merges.txt",
35
+ }
36
+
37
+
38
+ MAX_MODEL_INPUT_SIZES = {"dream/dream-tokenizer": 32768}
39
+
40
+ PRETOKENIZE_REGEX = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
41
+
42
+
43
+ @lru_cache()
44
+ # Copied from transformers.models.gpt2.tokenization_gpt2.bytes_to_unicode
45
+ def bytes_to_unicode():
46
+ """
47
+ Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control
48
+ characters the bpe code barfs on.
49
+ The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
50
+ if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
51
+ decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
52
+ tables between utf-8 bytes and unicode strings.
53
+ """
54
+ bs = (
55
+ list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
56
+ )
57
+ cs = bs[:]
58
+ n = 0
59
+ for b in range(2**8):
60
+ if b not in bs:
61
+ bs.append(b)
62
+ cs.append(2**8 + n)
63
+ n += 1
64
+ cs = [chr(n) for n in cs]
65
+ return dict(zip(bs, cs))
66
+
67
+
68
+ # Copied from transformers.models.gpt2.tokenization_gpt2.get_pairs
69
+ def get_pairs(word):
70
+ """
71
+ Return set of symbol pairs in a word.
72
+ Word is represented as tuple of symbols (symbols being variable-length strings).
73
+ """
74
+ pairs = set()
75
+ prev_char = word[0]
76
+ for char in word[1:]:
77
+ pairs.add((prev_char, char))
78
+ prev_char = char
79
+ return pairs
80
+
81
+
82
+ class DreamTokenizer(PreTrainedTokenizer):
83
+ """
84
+ Construct a Dream tokenizer. Based on byte-level Byte-Pair-Encoding.
85
+ Same with GPT2Tokenizer, this tokenizer has been trained to treat spaces like parts of the tokens so a word will
86
+ be encoded differently whether it is at the beginning of the sentence (without space) or not:
87
+ ```python
88
+ >>> from transformers import AutoTokenizer
89
+ >>> tokenizer = AutoTokenizer.from_pretrained("Dream-org/Dream-v0-Base-7B", trust_remote_code=True)
90
+ >>> tokenizer("Hello world")["input_ids"]
91
+ [9707, 1879]
92
+ >>> tokenizer(" Hello world")["input_ids"]
93
+ [21927, 1879]
94
+ ```
95
+ This is expected.
96
+ You should not use GPT2Tokenizer instead, because of the different pretokenization rules.
97
+ This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
98
+ this superclass for more information regarding those methods.
99
+ Args:
100
+ vocab_file (`str`):
101
+ Path to the vocabulary file.
102
+ merges_file (`str`):
103
+ Path to the merges file.
104
+ errors (`str`, *optional*, defaults to `"replace"`):
105
+ Paradigm to follow when decoding bytes to UTF-8. See
106
+ [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
107
+ unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
108
+ The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
109
+ token instead.
110
+ bos_token (`str`, *optional*):
111
+ The beginning of sequence token. Not applicable for this tokenizer.
112
+ eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
113
+ The end of sequence token.
114
+ pad_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
115
+ The token used for padding, for example when batching sequences of different lengths.
116
+ clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
117
+ Whether or not the model should cleanup the spaces that were added when splitting the input text during the
118
+ tokenization process. Not applicable to this tokenizer, since tokenization does not add spaces.
119
+ split_special_tokens (`bool`, *optional*, defaults to `False`):
120
+ Whether or not the special tokens should be split during the tokenization process. The default behavior is
121
+ to not split special tokens. This means that if `<|endoftext|>` is the `eos_token`, then `tokenizer.tokenize("<|endoftext|>") =
122
+ ['<|endoftext|>`]. Otherwise, if `split_special_tokens=True`, then `tokenizer.tokenize("<|endoftext|>")` will be give `['<',
123
+ '|', 'endo', 'ft', 'ext', '|', '>']`. This argument is only supported for `slow` tokenizers for the moment.
124
+ """
125
+
126
+ vocab_files_names = VOCAB_FILES_NAMES
127
+ model_input_names = ["input_ids", "attention_mask"]
128
+
129
+ def __init__(
130
+ self,
131
+ vocab_file,
132
+ merges_file,
133
+ errors="replace",
134
+ unk_token="<|endoftext|>",
135
+ bos_token=None,
136
+ eos_token="<|endoftext|>",
137
+ pad_token="<|endoftext|>",
138
+ clean_up_tokenization_spaces=False,
139
+ split_special_tokens=False,
140
+ **kwargs,
141
+ ):
142
+ # Dream vocab does not contain control tokens; added tokens need to be special
143
+ bos_token = (
144
+ AddedToken(bos_token, lstrip=False, rstrip=False, special=True, normalized=False)
145
+ if isinstance(bos_token, str)
146
+ else bos_token
147
+ )
148
+ eos_token = (
149
+ AddedToken(eos_token, lstrip=False, rstrip=False, special=True, normalized=False)
150
+ if isinstance(eos_token, str)
151
+ else eos_token
152
+ )
153
+ unk_token = (
154
+ AddedToken(unk_token, lstrip=False, rstrip=False, special=True, normalized=False)
155
+ if isinstance(unk_token, str)
156
+ else unk_token
157
+ )
158
+ pad_token = (
159
+ AddedToken(pad_token, lstrip=False, rstrip=False, special=True, normalized=False)
160
+ if isinstance(pad_token, str)
161
+ else pad_token
162
+ )
163
+
164
+ with open(vocab_file, encoding="utf-8") as vocab_handle:
165
+ self.encoder = json.load(vocab_handle)
166
+ self.decoder = {v: k for k, v in self.encoder.items()}
167
+ self.errors = errors # how to handle errors in decoding
168
+ self.byte_encoder = bytes_to_unicode()
169
+ self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
170
+ bpe_merges = []
171
+ with open(merges_file, encoding="utf-8") as merges_handle:
172
+ for i, line in enumerate(merges_handle):
173
+ line = line.strip()
174
+ if (i == 0 and line.startswith("#version:")) or not line:
175
+ continue
176
+ bpe_merges.append(tuple(line.split()))
177
+ self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
178
+ # NOTE: the cache can grow without bound and will get really large for long running processes
179
+ # (esp. for texts of language that do not use space between word, e.g. Chinese); technically
180
+ # not a memory leak but appears as one.
181
+ # GPT2Tokenizer has the same problem, so let's be consistent.
182
+ self.cache = {}
183
+
184
+ self.pat = re.compile(PRETOKENIZE_REGEX)
185
+
186
+ if kwargs.get("add_prefix_space", False):
187
+ logger.warning_once(
188
+ f"{self.__class__.__name} does not support `add_prefix_space`, setting it to True has no effect."
189
+ )
190
+
191
+ super().__init__(
192
+ errors=errors,
193
+ bos_token=bos_token,
194
+ eos_token=eos_token,
195
+ pad_token=pad_token,
196
+ unk_token=unk_token,
197
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
198
+ split_special_tokens=split_special_tokens,
199
+ **kwargs,
200
+ )
201
+
202
+ @property
203
+ def vocab_size(self) -> int:
204
+ return len(self.encoder)
205
+
206
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.get_vocab
207
+ def get_vocab(self):
208
+ return dict(self.encoder, **self.added_tokens_encoder)
209
+
210
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.bpe
211
+ def bpe(self, token):
212
+ if token in self.cache:
213
+ return self.cache[token]
214
+ word = tuple(token)
215
+ pairs = get_pairs(word)
216
+
217
+ if not pairs:
218
+ return token
219
+
220
+ while True:
221
+ bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
222
+ if bigram not in self.bpe_ranks:
223
+ break
224
+ first, second = bigram
225
+ new_word = []
226
+ i = 0
227
+ while i < len(word):
228
+ try:
229
+ j = word.index(first, i)
230
+ except ValueError:
231
+ new_word.extend(word[i:])
232
+ break
233
+ else:
234
+ new_word.extend(word[i:j])
235
+ i = j
236
+
237
+ if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
238
+ new_word.append(first + second)
239
+ i += 2
240
+ else:
241
+ new_word.append(word[i])
242
+ i += 1
243
+ new_word = tuple(new_word)
244
+ word = new_word
245
+ if len(word) == 1:
246
+ break
247
+ else:
248
+ pairs = get_pairs(word)
249
+ word = " ".join(word)
250
+ self.cache[token] = word
251
+ return word
252
+
253
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer._tokenize
254
+ def _tokenize(self, text):
255
+ """Tokenize a string."""
256
+ bpe_tokens = []
257
+ for token in re.findall(self.pat, text):
258
+ token = "".join(
259
+ self.byte_encoder[b] for b in token.encode("utf-8")
260
+ ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
261
+ bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
262
+ return bpe_tokens
263
+
264
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer._convert_token_to_id
265
+ def _convert_token_to_id(self, token):
266
+ """Converts a token (str) in an id using the vocab."""
267
+ return self.encoder.get(token, self.encoder.get(self.unk_token))
268
+
269
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer._convert_id_to_token
270
+ def _convert_id_to_token(self, index):
271
+ """Converts an index (integer) in a token (str) using the vocab."""
272
+ return self.decoder.get(index)
273
+
274
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.convert_tokens_to_string
275
+ def convert_tokens_to_string(self, tokens):
276
+ """Converts a sequence of tokens (string) in a single string."""
277
+ text = "".join(tokens)
278
+ text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
279
+ return text
280
+
281
+ def decode(
282
+ self,
283
+ token_ids,
284
+ skip_special_tokens: bool = False,
285
+ clean_up_tokenization_spaces: Optional[bool] = False,
286
+ spaces_between_special_tokens: bool = False,
287
+ **kwargs,
288
+ ) -> str:
289
+ # `spaces_between_special_tokens` defaults to True for _decode in slow tokenizers
290
+ # and cannot be configured elsewhere, but it should default to False for DreamTokenizer
291
+ return super().decode(
292
+ token_ids,
293
+ skip_special_tokens=skip_special_tokens,
294
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
295
+ spaces_between_special_tokens=spaces_between_special_tokens,
296
+ **kwargs,
297
+ )
298
+
299
+ # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.save_vocabulary
300
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
301
+ if not os.path.isdir(save_directory):
302
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
303
+ return
304
+ vocab_file = os.path.join(
305
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
306
+ )
307
+ merge_file = os.path.join(
308
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
309
+ )
310
+
311
+ with open(vocab_file, "w", encoding="utf-8") as f:
312
+ f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
313
+
314
+ index = 0
315
+ with open(merge_file, "w", encoding="utf-8") as writer:
316
+ writer.write("#version: 0.2\n")
317
+ for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
318
+ if index != token_index:
319
+ logger.warning(
320
+ f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
321
+ " Please check that the tokenizer is not corrupted!"
322
+ )
323
+ index = token_index
324
+ writer.write(" ".join(bpe_tokens) + "\n")
325
+ index += 1
326
+
327
+ return vocab_file, merge_file
328
+
329
+ def prepare_for_tokenization(self, text, **kwargs):
330
+ text = unicodedata.normalize("NFC", text)
331
+ return (text, kwargs)
tokenizer_config.json ADDED
@@ -0,0 +1,2271 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<|beginoftext|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ },
189
+ "151666": {
190
+ "content": "<|mask|>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": true
196
+ },
197
+ "151667": {
198
+ "content": "<|action_0|>",
199
+ "lstrip": false,
200
+ "normalized": true,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "<|action_1|>",
207
+ "lstrip": false,
208
+ "normalized": true,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ },
213
+ "151669": {
214
+ "content": "<|action_2|>",
215
+ "lstrip": false,
216
+ "normalized": true,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": false
220
+ },
221
+ "151670": {
222
+ "content": "<|action_3|>",
223
+ "lstrip": false,
224
+ "normalized": true,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": false
228
+ },
229
+ "151671": {
230
+ "content": "<|action_4|>",
231
+ "lstrip": false,
232
+ "normalized": true,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": false
236
+ },
237
+ "151672": {
238
+ "content": "<|action_5|>",
239
+ "lstrip": false,
240
+ "normalized": true,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": false
244
+ },
245
+ "151673": {
246
+ "content": "<|action_6|>",
247
+ "lstrip": false,
248
+ "normalized": true,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": false
252
+ },
253
+ "151674": {
254
+ "content": "<|action_7|>",
255
+ "lstrip": false,
256
+ "normalized": true,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": false
260
+ },
261
+ "151675": {
262
+ "content": "<|action_8|>",
263
+ "lstrip": false,
264
+ "normalized": true,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": false
268
+ },
269
+ "151676": {
270
+ "content": "<|action_9|>",
271
+ "lstrip": false,
272
+ "normalized": true,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": false
276
+ },
277
+ "151677": {
278
+ "content": "<|action_10|>",
279
+ "lstrip": false,
280
+ "normalized": true,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": false
284
+ },
285
+ "151678": {
286
+ "content": "<|action_11|>",
287
+ "lstrip": false,
288
+ "normalized": true,
289
+ "rstrip": false,
290
+ "single_word": false,
291
+ "special": false
292
+ },
293
+ "151679": {
294
+ "content": "<|action_12|>",
295
+ "lstrip": false,
296
+ "normalized": true,
297
+ "rstrip": false,
298
+ "single_word": false,
299
+ "special": false
300
+ },
301
+ "151680": {
302
+ "content": "<|action_13|>",
303
+ "lstrip": false,
304
+ "normalized": true,
305
+ "rstrip": false,
306
+ "single_word": false,
307
+ "special": false
308
+ },
309
+ "151681": {
310
+ "content": "<|action_14|>",
311
+ "lstrip": false,
312
+ "normalized": true,
313
+ "rstrip": false,
314
+ "single_word": false,
315
+ "special": false
316
+ },
317
+ "151682": {
318
+ "content": "<|action_15|>",
319
+ "lstrip": false,
320
+ "normalized": true,
321
+ "rstrip": false,
322
+ "single_word": false,
323
+ "special": false
324
+ },
325
+ "151683": {
326
+ "content": "<|action_16|>",
327
+ "lstrip": false,
328
+ "normalized": true,
329
+ "rstrip": false,
330
+ "single_word": false,
331
+ "special": false
332
+ },
333
+ "151684": {
334
+ "content": "<|action_17|>",
335
+ "lstrip": false,
336
+ "normalized": true,
337
+ "rstrip": false,
338
+ "single_word": false,
339
+ "special": false
340
+ },
341
+ "151685": {
342
+ "content": "<|action_18|>",
343
+ "lstrip": false,
344
+ "normalized": true,
345
+ "rstrip": false,
346
+ "single_word": false,
347
+ "special": false
348
+ },
349
+ "151686": {
350
+ "content": "<|action_19|>",
351
+ "lstrip": false,
352
+ "normalized": true,
353
+ "rstrip": false,
354
+ "single_word": false,
355
+ "special": false
356
+ },
357
+ "151687": {
358
+ "content": "<|action_20|>",
359
+ "lstrip": false,
360
+ "normalized": true,
361
+ "rstrip": false,
362
+ "single_word": false,
363
+ "special": false
364
+ },
365
+ "151688": {
366
+ "content": "<|action_21|>",
367
+ "lstrip": false,
368
+ "normalized": true,
369
+ "rstrip": false,
370
+ "single_word": false,
371
+ "special": false
372
+ },
373
+ "151689": {
374
+ "content": "<|action_22|>",
375
+ "lstrip": false,
376
+ "normalized": true,
377
+ "rstrip": false,
378
+ "single_word": false,
379
+ "special": false
380
+ },
381
+ "151690": {
382
+ "content": "<|action_23|>",
383
+ "lstrip": false,
384
+ "normalized": true,
385
+ "rstrip": false,
386
+ "single_word": false,
387
+ "special": false
388
+ },
389
+ "151691": {
390
+ "content": "<|action_24|>",
391
+ "lstrip": false,
392
+ "normalized": true,
393
+ "rstrip": false,
394
+ "single_word": false,
395
+ "special": false
396
+ },
397
+ "151692": {
398
+ "content": "<|action_25|>",
399
+ "lstrip": false,
400
+ "normalized": true,
401
+ "rstrip": false,
402
+ "single_word": false,
403
+ "special": false
404
+ },
405
+ "151693": {
406
+ "content": "<|action_26|>",
407
+ "lstrip": false,
408
+ "normalized": true,
409
+ "rstrip": false,
410
+ "single_word": false,
411
+ "special": false
412
+ },
413
+ "151694": {
414
+ "content": "<|action_27|>",
415
+ "lstrip": false,
416
+ "normalized": true,
417
+ "rstrip": false,
418
+ "single_word": false,
419
+ "special": false
420
+ },
421
+ "151695": {
422
+ "content": "<|action_28|>",
423
+ "lstrip": false,
424
+ "normalized": true,
425
+ "rstrip": false,
426
+ "single_word": false,
427
+ "special": false
428
+ },
429
+ "151696": {
430
+ "content": "<|action_29|>",
431
+ "lstrip": false,
432
+ "normalized": true,
433
+ "rstrip": false,
434
+ "single_word": false,
435
+ "special": false
436
+ },
437
+ "151697": {
438
+ "content": "<|action_30|>",
439
+ "lstrip": false,
440
+ "normalized": true,
441
+ "rstrip": false,
442
+ "single_word": false,
443
+ "special": false
444
+ },
445
+ "151698": {
446
+ "content": "<|action_31|>",
447
+ "lstrip": false,
448
+ "normalized": true,
449
+ "rstrip": false,
450
+ "single_word": false,
451
+ "special": false
452
+ },
453
+ "151699": {
454
+ "content": "<|action_32|>",
455
+ "lstrip": false,
456
+ "normalized": true,
457
+ "rstrip": false,
458
+ "single_word": false,
459
+ "special": false
460
+ },
461
+ "151700": {
462
+ "content": "<|action_33|>",
463
+ "lstrip": false,
464
+ "normalized": true,
465
+ "rstrip": false,
466
+ "single_word": false,
467
+ "special": false
468
+ },
469
+ "151701": {
470
+ "content": "<|action_34|>",
471
+ "lstrip": false,
472
+ "normalized": true,
473
+ "rstrip": false,
474
+ "single_word": false,
475
+ "special": false
476
+ },
477
+ "151702": {
478
+ "content": "<|action_35|>",
479
+ "lstrip": false,
480
+ "normalized": true,
481
+ "rstrip": false,
482
+ "single_word": false,
483
+ "special": false
484
+ },
485
+ "151703": {
486
+ "content": "<|action_36|>",
487
+ "lstrip": false,
488
+ "normalized": true,
489
+ "rstrip": false,
490
+ "single_word": false,
491
+ "special": false
492
+ },
493
+ "151704": {
494
+ "content": "<|action_37|>",
495
+ "lstrip": false,
496
+ "normalized": true,
497
+ "rstrip": false,
498
+ "single_word": false,
499
+ "special": false
500
+ },
501
+ "151705": {
502
+ "content": "<|action_38|>",
503
+ "lstrip": false,
504
+ "normalized": true,
505
+ "rstrip": false,
506
+ "single_word": false,
507
+ "special": false
508
+ },
509
+ "151706": {
510
+ "content": "<|action_39|>",
511
+ "lstrip": false,
512
+ "normalized": true,
513
+ "rstrip": false,
514
+ "single_word": false,
515
+ "special": false
516
+ },
517
+ "151707": {
518
+ "content": "<|action_40|>",
519
+ "lstrip": false,
520
+ "normalized": true,
521
+ "rstrip": false,
522
+ "single_word": false,
523
+ "special": false
524
+ },
525
+ "151708": {
526
+ "content": "<|action_41|>",
527
+ "lstrip": false,
528
+ "normalized": true,
529
+ "rstrip": false,
530
+ "single_word": false,
531
+ "special": false
532
+ },
533
+ "151709": {
534
+ "content": "<|action_42|>",
535
+ "lstrip": false,
536
+ "normalized": true,
537
+ "rstrip": false,
538
+ "single_word": false,
539
+ "special": false
540
+ },
541
+ "151710": {
542
+ "content": "<|action_43|>",
543
+ "lstrip": false,
544
+ "normalized": true,
545
+ "rstrip": false,
546
+ "single_word": false,
547
+ "special": false
548
+ },
549
+ "151711": {
550
+ "content": "<|action_44|>",
551
+ "lstrip": false,
552
+ "normalized": true,
553
+ "rstrip": false,
554
+ "single_word": false,
555
+ "special": false
556
+ },
557
+ "151712": {
558
+ "content": "<|action_45|>",
559
+ "lstrip": false,
560
+ "normalized": true,
561
+ "rstrip": false,
562
+ "single_word": false,
563
+ "special": false
564
+ },
565
+ "151713": {
566
+ "content": "<|action_46|>",
567
+ "lstrip": false,
568
+ "normalized": true,
569
+ "rstrip": false,
570
+ "single_word": false,
571
+ "special": false
572
+ },
573
+ "151714": {
574
+ "content": "<|action_47|>",
575
+ "lstrip": false,
576
+ "normalized": true,
577
+ "rstrip": false,
578
+ "single_word": false,
579
+ "special": false
580
+ },
581
+ "151715": {
582
+ "content": "<|action_48|>",
583
+ "lstrip": false,
584
+ "normalized": true,
585
+ "rstrip": false,
586
+ "single_word": false,
587
+ "special": false
588
+ },
589
+ "151716": {
590
+ "content": "<|action_49|>",
591
+ "lstrip": false,
592
+ "normalized": true,
593
+ "rstrip": false,
594
+ "single_word": false,
595
+ "special": false
596
+ },
597
+ "151717": {
598
+ "content": "<|action_50|>",
599
+ "lstrip": false,
600
+ "normalized": true,
601
+ "rstrip": false,
602
+ "single_word": false,
603
+ "special": false
604
+ },
605
+ "151718": {
606
+ "content": "<|action_51|>",
607
+ "lstrip": false,
608
+ "normalized": true,
609
+ "rstrip": false,
610
+ "single_word": false,
611
+ "special": false
612
+ },
613
+ "151719": {
614
+ "content": "<|action_52|>",
615
+ "lstrip": false,
616
+ "normalized": true,
617
+ "rstrip": false,
618
+ "single_word": false,
619
+ "special": false
620
+ },
621
+ "151720": {
622
+ "content": "<|action_53|>",
623
+ "lstrip": false,
624
+ "normalized": true,
625
+ "rstrip": false,
626
+ "single_word": false,
627
+ "special": false
628
+ },
629
+ "151721": {
630
+ "content": "<|action_54|>",
631
+ "lstrip": false,
632
+ "normalized": true,
633
+ "rstrip": false,
634
+ "single_word": false,
635
+ "special": false
636
+ },
637
+ "151722": {
638
+ "content": "<|action_55|>",
639
+ "lstrip": false,
640
+ "normalized": true,
641
+ "rstrip": false,
642
+ "single_word": false,
643
+ "special": false
644
+ },
645
+ "151723": {
646
+ "content": "<|action_56|>",
647
+ "lstrip": false,
648
+ "normalized": true,
649
+ "rstrip": false,
650
+ "single_word": false,
651
+ "special": false
652
+ },
653
+ "151724": {
654
+ "content": "<|action_57|>",
655
+ "lstrip": false,
656
+ "normalized": true,
657
+ "rstrip": false,
658
+ "single_word": false,
659
+ "special": false
660
+ },
661
+ "151725": {
662
+ "content": "<|action_58|>",
663
+ "lstrip": false,
664
+ "normalized": true,
665
+ "rstrip": false,
666
+ "single_word": false,
667
+ "special": false
668
+ },
669
+ "151726": {
670
+ "content": "<|action_59|>",
671
+ "lstrip": false,
672
+ "normalized": true,
673
+ "rstrip": false,
674
+ "single_word": false,
675
+ "special": false
676
+ },
677
+ "151727": {
678
+ "content": "<|action_60|>",
679
+ "lstrip": false,
680
+ "normalized": true,
681
+ "rstrip": false,
682
+ "single_word": false,
683
+ "special": false
684
+ },
685
+ "151728": {
686
+ "content": "<|action_61|>",
687
+ "lstrip": false,
688
+ "normalized": true,
689
+ "rstrip": false,
690
+ "single_word": false,
691
+ "special": false
692
+ },
693
+ "151729": {
694
+ "content": "<|action_62|>",
695
+ "lstrip": false,
696
+ "normalized": true,
697
+ "rstrip": false,
698
+ "single_word": false,
699
+ "special": false
700
+ },
701
+ "151730": {
702
+ "content": "<|action_63|>",
703
+ "lstrip": false,
704
+ "normalized": true,
705
+ "rstrip": false,
706
+ "single_word": false,
707
+ "special": false
708
+ },
709
+ "151731": {
710
+ "content": "<|action_64|>",
711
+ "lstrip": false,
712
+ "normalized": true,
713
+ "rstrip": false,
714
+ "single_word": false,
715
+ "special": false
716
+ },
717
+ "151732": {
718
+ "content": "<|action_65|>",
719
+ "lstrip": false,
720
+ "normalized": true,
721
+ "rstrip": false,
722
+ "single_word": false,
723
+ "special": false
724
+ },
725
+ "151733": {
726
+ "content": "<|action_66|>",
727
+ "lstrip": false,
728
+ "normalized": true,
729
+ "rstrip": false,
730
+ "single_word": false,
731
+ "special": false
732
+ },
733
+ "151734": {
734
+ "content": "<|action_67|>",
735
+ "lstrip": false,
736
+ "normalized": true,
737
+ "rstrip": false,
738
+ "single_word": false,
739
+ "special": false
740
+ },
741
+ "151735": {
742
+ "content": "<|action_68|>",
743
+ "lstrip": false,
744
+ "normalized": true,
745
+ "rstrip": false,
746
+ "single_word": false,
747
+ "special": false
748
+ },
749
+ "151736": {
750
+ "content": "<|action_69|>",
751
+ "lstrip": false,
752
+ "normalized": true,
753
+ "rstrip": false,
754
+ "single_word": false,
755
+ "special": false
756
+ },
757
+ "151737": {
758
+ "content": "<|action_70|>",
759
+ "lstrip": false,
760
+ "normalized": true,
761
+ "rstrip": false,
762
+ "single_word": false,
763
+ "special": false
764
+ },
765
+ "151738": {
766
+ "content": "<|action_71|>",
767
+ "lstrip": false,
768
+ "normalized": true,
769
+ "rstrip": false,
770
+ "single_word": false,
771
+ "special": false
772
+ },
773
+ "151739": {
774
+ "content": "<|action_72|>",
775
+ "lstrip": false,
776
+ "normalized": true,
777
+ "rstrip": false,
778
+ "single_word": false,
779
+ "special": false
780
+ },
781
+ "151740": {
782
+ "content": "<|action_73|>",
783
+ "lstrip": false,
784
+ "normalized": true,
785
+ "rstrip": false,
786
+ "single_word": false,
787
+ "special": false
788
+ },
789
+ "151741": {
790
+ "content": "<|action_74|>",
791
+ "lstrip": false,
792
+ "normalized": true,
793
+ "rstrip": false,
794
+ "single_word": false,
795
+ "special": false
796
+ },
797
+ "151742": {
798
+ "content": "<|action_75|>",
799
+ "lstrip": false,
800
+ "normalized": true,
801
+ "rstrip": false,
802
+ "single_word": false,
803
+ "special": false
804
+ },
805
+ "151743": {
806
+ "content": "<|action_76|>",
807
+ "lstrip": false,
808
+ "normalized": true,
809
+ "rstrip": false,
810
+ "single_word": false,
811
+ "special": false
812
+ },
813
+ "151744": {
814
+ "content": "<|action_77|>",
815
+ "lstrip": false,
816
+ "normalized": true,
817
+ "rstrip": false,
818
+ "single_word": false,
819
+ "special": false
820
+ },
821
+ "151745": {
822
+ "content": "<|action_78|>",
823
+ "lstrip": false,
824
+ "normalized": true,
825
+ "rstrip": false,
826
+ "single_word": false,
827
+ "special": false
828
+ },
829
+ "151746": {
830
+ "content": "<|action_79|>",
831
+ "lstrip": false,
832
+ "normalized": true,
833
+ "rstrip": false,
834
+ "single_word": false,
835
+ "special": false
836
+ },
837
+ "151747": {
838
+ "content": "<|action_80|>",
839
+ "lstrip": false,
840
+ "normalized": true,
841
+ "rstrip": false,
842
+ "single_word": false,
843
+ "special": false
844
+ },
845
+ "151748": {
846
+ "content": "<|action_81|>",
847
+ "lstrip": false,
848
+ "normalized": true,
849
+ "rstrip": false,
850
+ "single_word": false,
851
+ "special": false
852
+ },
853
+ "151749": {
854
+ "content": "<|action_82|>",
855
+ "lstrip": false,
856
+ "normalized": true,
857
+ "rstrip": false,
858
+ "single_word": false,
859
+ "special": false
860
+ },
861
+ "151750": {
862
+ "content": "<|action_83|>",
863
+ "lstrip": false,
864
+ "normalized": true,
865
+ "rstrip": false,
866
+ "single_word": false,
867
+ "special": false
868
+ },
869
+ "151751": {
870
+ "content": "<|action_84|>",
871
+ "lstrip": false,
872
+ "normalized": true,
873
+ "rstrip": false,
874
+ "single_word": false,
875
+ "special": false
876
+ },
877
+ "151752": {
878
+ "content": "<|action_85|>",
879
+ "lstrip": false,
880
+ "normalized": true,
881
+ "rstrip": false,
882
+ "single_word": false,
883
+ "special": false
884
+ },
885
+ "151753": {
886
+ "content": "<|action_86|>",
887
+ "lstrip": false,
888
+ "normalized": true,
889
+ "rstrip": false,
890
+ "single_word": false,
891
+ "special": false
892
+ },
893
+ "151754": {
894
+ "content": "<|action_87|>",
895
+ "lstrip": false,
896
+ "normalized": true,
897
+ "rstrip": false,
898
+ "single_word": false,
899
+ "special": false
900
+ },
901
+ "151755": {
902
+ "content": "<|action_88|>",
903
+ "lstrip": false,
904
+ "normalized": true,
905
+ "rstrip": false,
906
+ "single_word": false,
907
+ "special": false
908
+ },
909
+ "151756": {
910
+ "content": "<|action_89|>",
911
+ "lstrip": false,
912
+ "normalized": true,
913
+ "rstrip": false,
914
+ "single_word": false,
915
+ "special": false
916
+ },
917
+ "151757": {
918
+ "content": "<|action_90|>",
919
+ "lstrip": false,
920
+ "normalized": true,
921
+ "rstrip": false,
922
+ "single_word": false,
923
+ "special": false
924
+ },
925
+ "151758": {
926
+ "content": "<|action_91|>",
927
+ "lstrip": false,
928
+ "normalized": true,
929
+ "rstrip": false,
930
+ "single_word": false,
931
+ "special": false
932
+ },
933
+ "151759": {
934
+ "content": "<|action_92|>",
935
+ "lstrip": false,
936
+ "normalized": true,
937
+ "rstrip": false,
938
+ "single_word": false,
939
+ "special": false
940
+ },
941
+ "151760": {
942
+ "content": "<|action_93|>",
943
+ "lstrip": false,
944
+ "normalized": true,
945
+ "rstrip": false,
946
+ "single_word": false,
947
+ "special": false
948
+ },
949
+ "151761": {
950
+ "content": "<|action_94|>",
951
+ "lstrip": false,
952
+ "normalized": true,
953
+ "rstrip": false,
954
+ "single_word": false,
955
+ "special": false
956
+ },
957
+ "151762": {
958
+ "content": "<|action_95|>",
959
+ "lstrip": false,
960
+ "normalized": true,
961
+ "rstrip": false,
962
+ "single_word": false,
963
+ "special": false
964
+ },
965
+ "151763": {
966
+ "content": "<|action_96|>",
967
+ "lstrip": false,
968
+ "normalized": true,
969
+ "rstrip": false,
970
+ "single_word": false,
971
+ "special": false
972
+ },
973
+ "151764": {
974
+ "content": "<|action_97|>",
975
+ "lstrip": false,
976
+ "normalized": true,
977
+ "rstrip": false,
978
+ "single_word": false,
979
+ "special": false
980
+ },
981
+ "151765": {
982
+ "content": "<|action_98|>",
983
+ "lstrip": false,
984
+ "normalized": true,
985
+ "rstrip": false,
986
+ "single_word": false,
987
+ "special": false
988
+ },
989
+ "151766": {
990
+ "content": "<|action_99|>",
991
+ "lstrip": false,
992
+ "normalized": true,
993
+ "rstrip": false,
994
+ "single_word": false,
995
+ "special": false
996
+ },
997
+ "151767": {
998
+ "content": "<|action_100|>",
999
+ "lstrip": false,
1000
+ "normalized": true,
1001
+ "rstrip": false,
1002
+ "single_word": false,
1003
+ "special": false
1004
+ },
1005
+ "151768": {
1006
+ "content": "<|action_101|>",
1007
+ "lstrip": false,
1008
+ "normalized": true,
1009
+ "rstrip": false,
1010
+ "single_word": false,
1011
+ "special": false
1012
+ },
1013
+ "151769": {
1014
+ "content": "<|action_102|>",
1015
+ "lstrip": false,
1016
+ "normalized": true,
1017
+ "rstrip": false,
1018
+ "single_word": false,
1019
+ "special": false
1020
+ },
1021
+ "151770": {
1022
+ "content": "<|action_103|>",
1023
+ "lstrip": false,
1024
+ "normalized": true,
1025
+ "rstrip": false,
1026
+ "single_word": false,
1027
+ "special": false
1028
+ },
1029
+ "151771": {
1030
+ "content": "<|action_104|>",
1031
+ "lstrip": false,
1032
+ "normalized": true,
1033
+ "rstrip": false,
1034
+ "single_word": false,
1035
+ "special": false
1036
+ },
1037
+ "151772": {
1038
+ "content": "<|action_105|>",
1039
+ "lstrip": false,
1040
+ "normalized": true,
1041
+ "rstrip": false,
1042
+ "single_word": false,
1043
+ "special": false
1044
+ },
1045
+ "151773": {
1046
+ "content": "<|action_106|>",
1047
+ "lstrip": false,
1048
+ "normalized": true,
1049
+ "rstrip": false,
1050
+ "single_word": false,
1051
+ "special": false
1052
+ },
1053
+ "151774": {
1054
+ "content": "<|action_107|>",
1055
+ "lstrip": false,
1056
+ "normalized": true,
1057
+ "rstrip": false,
1058
+ "single_word": false,
1059
+ "special": false
1060
+ },
1061
+ "151775": {
1062
+ "content": "<|action_108|>",
1063
+ "lstrip": false,
1064
+ "normalized": true,
1065
+ "rstrip": false,
1066
+ "single_word": false,
1067
+ "special": false
1068
+ },
1069
+ "151776": {
1070
+ "content": "<|action_109|>",
1071
+ "lstrip": false,
1072
+ "normalized": true,
1073
+ "rstrip": false,
1074
+ "single_word": false,
1075
+ "special": false
1076
+ },
1077
+ "151777": {
1078
+ "content": "<|action_110|>",
1079
+ "lstrip": false,
1080
+ "normalized": true,
1081
+ "rstrip": false,
1082
+ "single_word": false,
1083
+ "special": false
1084
+ },
1085
+ "151778": {
1086
+ "content": "<|action_111|>",
1087
+ "lstrip": false,
1088
+ "normalized": true,
1089
+ "rstrip": false,
1090
+ "single_word": false,
1091
+ "special": false
1092
+ },
1093
+ "151779": {
1094
+ "content": "<|action_112|>",
1095
+ "lstrip": false,
1096
+ "normalized": true,
1097
+ "rstrip": false,
1098
+ "single_word": false,
1099
+ "special": false
1100
+ },
1101
+ "151780": {
1102
+ "content": "<|action_113|>",
1103
+ "lstrip": false,
1104
+ "normalized": true,
1105
+ "rstrip": false,
1106
+ "single_word": false,
1107
+ "special": false
1108
+ },
1109
+ "151781": {
1110
+ "content": "<|action_114|>",
1111
+ "lstrip": false,
1112
+ "normalized": true,
1113
+ "rstrip": false,
1114
+ "single_word": false,
1115
+ "special": false
1116
+ },
1117
+ "151782": {
1118
+ "content": "<|action_115|>",
1119
+ "lstrip": false,
1120
+ "normalized": true,
1121
+ "rstrip": false,
1122
+ "single_word": false,
1123
+ "special": false
1124
+ },
1125
+ "151783": {
1126
+ "content": "<|action_116|>",
1127
+ "lstrip": false,
1128
+ "normalized": true,
1129
+ "rstrip": false,
1130
+ "single_word": false,
1131
+ "special": false
1132
+ },
1133
+ "151784": {
1134
+ "content": "<|action_117|>",
1135
+ "lstrip": false,
1136
+ "normalized": true,
1137
+ "rstrip": false,
1138
+ "single_word": false,
1139
+ "special": false
1140
+ },
1141
+ "151785": {
1142
+ "content": "<|action_118|>",
1143
+ "lstrip": false,
1144
+ "normalized": true,
1145
+ "rstrip": false,
1146
+ "single_word": false,
1147
+ "special": false
1148
+ },
1149
+ "151786": {
1150
+ "content": "<|action_119|>",
1151
+ "lstrip": false,
1152
+ "normalized": true,
1153
+ "rstrip": false,
1154
+ "single_word": false,
1155
+ "special": false
1156
+ },
1157
+ "151787": {
1158
+ "content": "<|action_120|>",
1159
+ "lstrip": false,
1160
+ "normalized": true,
1161
+ "rstrip": false,
1162
+ "single_word": false,
1163
+ "special": false
1164
+ },
1165
+ "151788": {
1166
+ "content": "<|action_121|>",
1167
+ "lstrip": false,
1168
+ "normalized": true,
1169
+ "rstrip": false,
1170
+ "single_word": false,
1171
+ "special": false
1172
+ },
1173
+ "151789": {
1174
+ "content": "<|action_122|>",
1175
+ "lstrip": false,
1176
+ "normalized": true,
1177
+ "rstrip": false,
1178
+ "single_word": false,
1179
+ "special": false
1180
+ },
1181
+ "151790": {
1182
+ "content": "<|action_123|>",
1183
+ "lstrip": false,
1184
+ "normalized": true,
1185
+ "rstrip": false,
1186
+ "single_word": false,
1187
+ "special": false
1188
+ },
1189
+ "151791": {
1190
+ "content": "<|action_124|>",
1191
+ "lstrip": false,
1192
+ "normalized": true,
1193
+ "rstrip": false,
1194
+ "single_word": false,
1195
+ "special": false
1196
+ },
1197
+ "151792": {
1198
+ "content": "<|action_125|>",
1199
+ "lstrip": false,
1200
+ "normalized": true,
1201
+ "rstrip": false,
1202
+ "single_word": false,
1203
+ "special": false
1204
+ },
1205
+ "151793": {
1206
+ "content": "<|action_126|>",
1207
+ "lstrip": false,
1208
+ "normalized": true,
1209
+ "rstrip": false,
1210
+ "single_word": false,
1211
+ "special": false
1212
+ },
1213
+ "151794": {
1214
+ "content": "<|action_127|>",
1215
+ "lstrip": false,
1216
+ "normalized": true,
1217
+ "rstrip": false,
1218
+ "single_word": false,
1219
+ "special": false
1220
+ },
1221
+ "151795": {
1222
+ "content": "<|action_128|>",
1223
+ "lstrip": false,
1224
+ "normalized": true,
1225
+ "rstrip": false,
1226
+ "single_word": false,
1227
+ "special": false
1228
+ },
1229
+ "151796": {
1230
+ "content": "<|action_129|>",
1231
+ "lstrip": false,
1232
+ "normalized": true,
1233
+ "rstrip": false,
1234
+ "single_word": false,
1235
+ "special": false
1236
+ },
1237
+ "151797": {
1238
+ "content": "<|action_130|>",
1239
+ "lstrip": false,
1240
+ "normalized": true,
1241
+ "rstrip": false,
1242
+ "single_word": false,
1243
+ "special": false
1244
+ },
1245
+ "151798": {
1246
+ "content": "<|action_131|>",
1247
+ "lstrip": false,
1248
+ "normalized": true,
1249
+ "rstrip": false,
1250
+ "single_word": false,
1251
+ "special": false
1252
+ },
1253
+ "151799": {
1254
+ "content": "<|action_132|>",
1255
+ "lstrip": false,
1256
+ "normalized": true,
1257
+ "rstrip": false,
1258
+ "single_word": false,
1259
+ "special": false
1260
+ },
1261
+ "151800": {
1262
+ "content": "<|action_133|>",
1263
+ "lstrip": false,
1264
+ "normalized": true,
1265
+ "rstrip": false,
1266
+ "single_word": false,
1267
+ "special": false
1268
+ },
1269
+ "151801": {
1270
+ "content": "<|action_134|>",
1271
+ "lstrip": false,
1272
+ "normalized": true,
1273
+ "rstrip": false,
1274
+ "single_word": false,
1275
+ "special": false
1276
+ },
1277
+ "151802": {
1278
+ "content": "<|action_135|>",
1279
+ "lstrip": false,
1280
+ "normalized": true,
1281
+ "rstrip": false,
1282
+ "single_word": false,
1283
+ "special": false
1284
+ },
1285
+ "151803": {
1286
+ "content": "<|action_136|>",
1287
+ "lstrip": false,
1288
+ "normalized": true,
1289
+ "rstrip": false,
1290
+ "single_word": false,
1291
+ "special": false
1292
+ },
1293
+ "151804": {
1294
+ "content": "<|action_137|>",
1295
+ "lstrip": false,
1296
+ "normalized": true,
1297
+ "rstrip": false,
1298
+ "single_word": false,
1299
+ "special": false
1300
+ },
1301
+ "151805": {
1302
+ "content": "<|action_138|>",
1303
+ "lstrip": false,
1304
+ "normalized": true,
1305
+ "rstrip": false,
1306
+ "single_word": false,
1307
+ "special": false
1308
+ },
1309
+ "151806": {
1310
+ "content": "<|action_139|>",
1311
+ "lstrip": false,
1312
+ "normalized": true,
1313
+ "rstrip": false,
1314
+ "single_word": false,
1315
+ "special": false
1316
+ },
1317
+ "151807": {
1318
+ "content": "<|action_140|>",
1319
+ "lstrip": false,
1320
+ "normalized": true,
1321
+ "rstrip": false,
1322
+ "single_word": false,
1323
+ "special": false
1324
+ },
1325
+ "151808": {
1326
+ "content": "<|action_141|>",
1327
+ "lstrip": false,
1328
+ "normalized": true,
1329
+ "rstrip": false,
1330
+ "single_word": false,
1331
+ "special": false
1332
+ },
1333
+ "151809": {
1334
+ "content": "<|action_142|>",
1335
+ "lstrip": false,
1336
+ "normalized": true,
1337
+ "rstrip": false,
1338
+ "single_word": false,
1339
+ "special": false
1340
+ },
1341
+ "151810": {
1342
+ "content": "<|action_143|>",
1343
+ "lstrip": false,
1344
+ "normalized": true,
1345
+ "rstrip": false,
1346
+ "single_word": false,
1347
+ "special": false
1348
+ },
1349
+ "151811": {
1350
+ "content": "<|action_144|>",
1351
+ "lstrip": false,
1352
+ "normalized": true,
1353
+ "rstrip": false,
1354
+ "single_word": false,
1355
+ "special": false
1356
+ },
1357
+ "151812": {
1358
+ "content": "<|action_145|>",
1359
+ "lstrip": false,
1360
+ "normalized": true,
1361
+ "rstrip": false,
1362
+ "single_word": false,
1363
+ "special": false
1364
+ },
1365
+ "151813": {
1366
+ "content": "<|action_146|>",
1367
+ "lstrip": false,
1368
+ "normalized": true,
1369
+ "rstrip": false,
1370
+ "single_word": false,
1371
+ "special": false
1372
+ },
1373
+ "151814": {
1374
+ "content": "<|action_147|>",
1375
+ "lstrip": false,
1376
+ "normalized": true,
1377
+ "rstrip": false,
1378
+ "single_word": false,
1379
+ "special": false
1380
+ },
1381
+ "151815": {
1382
+ "content": "<|action_148|>",
1383
+ "lstrip": false,
1384
+ "normalized": true,
1385
+ "rstrip": false,
1386
+ "single_word": false,
1387
+ "special": false
1388
+ },
1389
+ "151816": {
1390
+ "content": "<|action_149|>",
1391
+ "lstrip": false,
1392
+ "normalized": true,
1393
+ "rstrip": false,
1394
+ "single_word": false,
1395
+ "special": false
1396
+ },
1397
+ "151817": {
1398
+ "content": "<|action_150|>",
1399
+ "lstrip": false,
1400
+ "normalized": true,
1401
+ "rstrip": false,
1402
+ "single_word": false,
1403
+ "special": false
1404
+ },
1405
+ "151818": {
1406
+ "content": "<|action_151|>",
1407
+ "lstrip": false,
1408
+ "normalized": true,
1409
+ "rstrip": false,
1410
+ "single_word": false,
1411
+ "special": false
1412
+ },
1413
+ "151819": {
1414
+ "content": "<|action_152|>",
1415
+ "lstrip": false,
1416
+ "normalized": true,
1417
+ "rstrip": false,
1418
+ "single_word": false,
1419
+ "special": false
1420
+ },
1421
+ "151820": {
1422
+ "content": "<|action_153|>",
1423
+ "lstrip": false,
1424
+ "normalized": true,
1425
+ "rstrip": false,
1426
+ "single_word": false,
1427
+ "special": false
1428
+ },
1429
+ "151821": {
1430
+ "content": "<|action_154|>",
1431
+ "lstrip": false,
1432
+ "normalized": true,
1433
+ "rstrip": false,
1434
+ "single_word": false,
1435
+ "special": false
1436
+ },
1437
+ "151822": {
1438
+ "content": "<|action_155|>",
1439
+ "lstrip": false,
1440
+ "normalized": true,
1441
+ "rstrip": false,
1442
+ "single_word": false,
1443
+ "special": false
1444
+ },
1445
+ "151823": {
1446
+ "content": "<|action_156|>",
1447
+ "lstrip": false,
1448
+ "normalized": true,
1449
+ "rstrip": false,
1450
+ "single_word": false,
1451
+ "special": false
1452
+ },
1453
+ "151824": {
1454
+ "content": "<|action_157|>",
1455
+ "lstrip": false,
1456
+ "normalized": true,
1457
+ "rstrip": false,
1458
+ "single_word": false,
1459
+ "special": false
1460
+ },
1461
+ "151825": {
1462
+ "content": "<|action_158|>",
1463
+ "lstrip": false,
1464
+ "normalized": true,
1465
+ "rstrip": false,
1466
+ "single_word": false,
1467
+ "special": false
1468
+ },
1469
+ "151826": {
1470
+ "content": "<|action_159|>",
1471
+ "lstrip": false,
1472
+ "normalized": true,
1473
+ "rstrip": false,
1474
+ "single_word": false,
1475
+ "special": false
1476
+ },
1477
+ "151827": {
1478
+ "content": "<|action_160|>",
1479
+ "lstrip": false,
1480
+ "normalized": true,
1481
+ "rstrip": false,
1482
+ "single_word": false,
1483
+ "special": false
1484
+ },
1485
+ "151828": {
1486
+ "content": "<|action_161|>",
1487
+ "lstrip": false,
1488
+ "normalized": true,
1489
+ "rstrip": false,
1490
+ "single_word": false,
1491
+ "special": false
1492
+ },
1493
+ "151829": {
1494
+ "content": "<|action_162|>",
1495
+ "lstrip": false,
1496
+ "normalized": true,
1497
+ "rstrip": false,
1498
+ "single_word": false,
1499
+ "special": false
1500
+ },
1501
+ "151830": {
1502
+ "content": "<|action_163|>",
1503
+ "lstrip": false,
1504
+ "normalized": true,
1505
+ "rstrip": false,
1506
+ "single_word": false,
1507
+ "special": false
1508
+ },
1509
+ "151831": {
1510
+ "content": "<|action_164|>",
1511
+ "lstrip": false,
1512
+ "normalized": true,
1513
+ "rstrip": false,
1514
+ "single_word": false,
1515
+ "special": false
1516
+ },
1517
+ "151832": {
1518
+ "content": "<|action_165|>",
1519
+ "lstrip": false,
1520
+ "normalized": true,
1521
+ "rstrip": false,
1522
+ "single_word": false,
1523
+ "special": false
1524
+ },
1525
+ "151833": {
1526
+ "content": "<|action_166|>",
1527
+ "lstrip": false,
1528
+ "normalized": true,
1529
+ "rstrip": false,
1530
+ "single_word": false,
1531
+ "special": false
1532
+ },
1533
+ "151834": {
1534
+ "content": "<|action_167|>",
1535
+ "lstrip": false,
1536
+ "normalized": true,
1537
+ "rstrip": false,
1538
+ "single_word": false,
1539
+ "special": false
1540
+ },
1541
+ "151835": {
1542
+ "content": "<|action_168|>",
1543
+ "lstrip": false,
1544
+ "normalized": true,
1545
+ "rstrip": false,
1546
+ "single_word": false,
1547
+ "special": false
1548
+ },
1549
+ "151836": {
1550
+ "content": "<|action_169|>",
1551
+ "lstrip": false,
1552
+ "normalized": true,
1553
+ "rstrip": false,
1554
+ "single_word": false,
1555
+ "special": false
1556
+ },
1557
+ "151837": {
1558
+ "content": "<|action_170|>",
1559
+ "lstrip": false,
1560
+ "normalized": true,
1561
+ "rstrip": false,
1562
+ "single_word": false,
1563
+ "special": false
1564
+ },
1565
+ "151838": {
1566
+ "content": "<|action_171|>",
1567
+ "lstrip": false,
1568
+ "normalized": true,
1569
+ "rstrip": false,
1570
+ "single_word": false,
1571
+ "special": false
1572
+ },
1573
+ "151839": {
1574
+ "content": "<|action_172|>",
1575
+ "lstrip": false,
1576
+ "normalized": true,
1577
+ "rstrip": false,
1578
+ "single_word": false,
1579
+ "special": false
1580
+ },
1581
+ "151840": {
1582
+ "content": "<|action_173|>",
1583
+ "lstrip": false,
1584
+ "normalized": true,
1585
+ "rstrip": false,
1586
+ "single_word": false,
1587
+ "special": false
1588
+ },
1589
+ "151841": {
1590
+ "content": "<|action_174|>",
1591
+ "lstrip": false,
1592
+ "normalized": true,
1593
+ "rstrip": false,
1594
+ "single_word": false,
1595
+ "special": false
1596
+ },
1597
+ "151842": {
1598
+ "content": "<|action_175|>",
1599
+ "lstrip": false,
1600
+ "normalized": true,
1601
+ "rstrip": false,
1602
+ "single_word": false,
1603
+ "special": false
1604
+ },
1605
+ "151843": {
1606
+ "content": "<|action_176|>",
1607
+ "lstrip": false,
1608
+ "normalized": true,
1609
+ "rstrip": false,
1610
+ "single_word": false,
1611
+ "special": false
1612
+ },
1613
+ "151844": {
1614
+ "content": "<|action_177|>",
1615
+ "lstrip": false,
1616
+ "normalized": true,
1617
+ "rstrip": false,
1618
+ "single_word": false,
1619
+ "special": false
1620
+ },
1621
+ "151845": {
1622
+ "content": "<|action_178|>",
1623
+ "lstrip": false,
1624
+ "normalized": true,
1625
+ "rstrip": false,
1626
+ "single_word": false,
1627
+ "special": false
1628
+ },
1629
+ "151846": {
1630
+ "content": "<|action_179|>",
1631
+ "lstrip": false,
1632
+ "normalized": true,
1633
+ "rstrip": false,
1634
+ "single_word": false,
1635
+ "special": false
1636
+ },
1637
+ "151847": {
1638
+ "content": "<|action_180|>",
1639
+ "lstrip": false,
1640
+ "normalized": true,
1641
+ "rstrip": false,
1642
+ "single_word": false,
1643
+ "special": false
1644
+ },
1645
+ "151848": {
1646
+ "content": "<|action_181|>",
1647
+ "lstrip": false,
1648
+ "normalized": true,
1649
+ "rstrip": false,
1650
+ "single_word": false,
1651
+ "special": false
1652
+ },
1653
+ "151849": {
1654
+ "content": "<|action_182|>",
1655
+ "lstrip": false,
1656
+ "normalized": true,
1657
+ "rstrip": false,
1658
+ "single_word": false,
1659
+ "special": false
1660
+ },
1661
+ "151850": {
1662
+ "content": "<|action_183|>",
1663
+ "lstrip": false,
1664
+ "normalized": true,
1665
+ "rstrip": false,
1666
+ "single_word": false,
1667
+ "special": false
1668
+ },
1669
+ "151851": {
1670
+ "content": "<|action_184|>",
1671
+ "lstrip": false,
1672
+ "normalized": true,
1673
+ "rstrip": false,
1674
+ "single_word": false,
1675
+ "special": false
1676
+ },
1677
+ "151852": {
1678
+ "content": "<|action_185|>",
1679
+ "lstrip": false,
1680
+ "normalized": true,
1681
+ "rstrip": false,
1682
+ "single_word": false,
1683
+ "special": false
1684
+ },
1685
+ "151853": {
1686
+ "content": "<|action_186|>",
1687
+ "lstrip": false,
1688
+ "normalized": true,
1689
+ "rstrip": false,
1690
+ "single_word": false,
1691
+ "special": false
1692
+ },
1693
+ "151854": {
1694
+ "content": "<|action_187|>",
1695
+ "lstrip": false,
1696
+ "normalized": true,
1697
+ "rstrip": false,
1698
+ "single_word": false,
1699
+ "special": false
1700
+ },
1701
+ "151855": {
1702
+ "content": "<|action_188|>",
1703
+ "lstrip": false,
1704
+ "normalized": true,
1705
+ "rstrip": false,
1706
+ "single_word": false,
1707
+ "special": false
1708
+ },
1709
+ "151856": {
1710
+ "content": "<|action_189|>",
1711
+ "lstrip": false,
1712
+ "normalized": true,
1713
+ "rstrip": false,
1714
+ "single_word": false,
1715
+ "special": false
1716
+ },
1717
+ "151857": {
1718
+ "content": "<|action_190|>",
1719
+ "lstrip": false,
1720
+ "normalized": true,
1721
+ "rstrip": false,
1722
+ "single_word": false,
1723
+ "special": false
1724
+ },
1725
+ "151858": {
1726
+ "content": "<|action_191|>",
1727
+ "lstrip": false,
1728
+ "normalized": true,
1729
+ "rstrip": false,
1730
+ "single_word": false,
1731
+ "special": false
1732
+ },
1733
+ "151859": {
1734
+ "content": "<|action_192|>",
1735
+ "lstrip": false,
1736
+ "normalized": true,
1737
+ "rstrip": false,
1738
+ "single_word": false,
1739
+ "special": false
1740
+ },
1741
+ "151860": {
1742
+ "content": "<|action_193|>",
1743
+ "lstrip": false,
1744
+ "normalized": true,
1745
+ "rstrip": false,
1746
+ "single_word": false,
1747
+ "special": false
1748
+ },
1749
+ "151861": {
1750
+ "content": "<|action_194|>",
1751
+ "lstrip": false,
1752
+ "normalized": true,
1753
+ "rstrip": false,
1754
+ "single_word": false,
1755
+ "special": false
1756
+ },
1757
+ "151862": {
1758
+ "content": "<|action_195|>",
1759
+ "lstrip": false,
1760
+ "normalized": true,
1761
+ "rstrip": false,
1762
+ "single_word": false,
1763
+ "special": false
1764
+ },
1765
+ "151863": {
1766
+ "content": "<|action_196|>",
1767
+ "lstrip": false,
1768
+ "normalized": true,
1769
+ "rstrip": false,
1770
+ "single_word": false,
1771
+ "special": false
1772
+ },
1773
+ "151864": {
1774
+ "content": "<|action_197|>",
1775
+ "lstrip": false,
1776
+ "normalized": true,
1777
+ "rstrip": false,
1778
+ "single_word": false,
1779
+ "special": false
1780
+ },
1781
+ "151865": {
1782
+ "content": "<|action_198|>",
1783
+ "lstrip": false,
1784
+ "normalized": true,
1785
+ "rstrip": false,
1786
+ "single_word": false,
1787
+ "special": false
1788
+ },
1789
+ "151866": {
1790
+ "content": "<|action_199|>",
1791
+ "lstrip": false,
1792
+ "normalized": true,
1793
+ "rstrip": false,
1794
+ "single_word": false,
1795
+ "special": false
1796
+ },
1797
+ "151867": {
1798
+ "content": "<|action_200|>",
1799
+ "lstrip": false,
1800
+ "normalized": true,
1801
+ "rstrip": false,
1802
+ "single_word": false,
1803
+ "special": false
1804
+ },
1805
+ "151868": {
1806
+ "content": "<|action_201|>",
1807
+ "lstrip": false,
1808
+ "normalized": true,
1809
+ "rstrip": false,
1810
+ "single_word": false,
1811
+ "special": false
1812
+ },
1813
+ "151869": {
1814
+ "content": "<|action_202|>",
1815
+ "lstrip": false,
1816
+ "normalized": true,
1817
+ "rstrip": false,
1818
+ "single_word": false,
1819
+ "special": false
1820
+ },
1821
+ "151870": {
1822
+ "content": "<|action_203|>",
1823
+ "lstrip": false,
1824
+ "normalized": true,
1825
+ "rstrip": false,
1826
+ "single_word": false,
1827
+ "special": false
1828
+ },
1829
+ "151871": {
1830
+ "content": "<|action_204|>",
1831
+ "lstrip": false,
1832
+ "normalized": true,
1833
+ "rstrip": false,
1834
+ "single_word": false,
1835
+ "special": false
1836
+ },
1837
+ "151872": {
1838
+ "content": "<|action_205|>",
1839
+ "lstrip": false,
1840
+ "normalized": true,
1841
+ "rstrip": false,
1842
+ "single_word": false,
1843
+ "special": false
1844
+ },
1845
+ "151873": {
1846
+ "content": "<|action_206|>",
1847
+ "lstrip": false,
1848
+ "normalized": true,
1849
+ "rstrip": false,
1850
+ "single_word": false,
1851
+ "special": false
1852
+ },
1853
+ "151874": {
1854
+ "content": "<|action_207|>",
1855
+ "lstrip": false,
1856
+ "normalized": true,
1857
+ "rstrip": false,
1858
+ "single_word": false,
1859
+ "special": false
1860
+ },
1861
+ "151875": {
1862
+ "content": "<|action_208|>",
1863
+ "lstrip": false,
1864
+ "normalized": true,
1865
+ "rstrip": false,
1866
+ "single_word": false,
1867
+ "special": false
1868
+ },
1869
+ "151876": {
1870
+ "content": "<|action_209|>",
1871
+ "lstrip": false,
1872
+ "normalized": true,
1873
+ "rstrip": false,
1874
+ "single_word": false,
1875
+ "special": false
1876
+ },
1877
+ "151877": {
1878
+ "content": "<|action_210|>",
1879
+ "lstrip": false,
1880
+ "normalized": true,
1881
+ "rstrip": false,
1882
+ "single_word": false,
1883
+ "special": false
1884
+ },
1885
+ "151878": {
1886
+ "content": "<|action_211|>",
1887
+ "lstrip": false,
1888
+ "normalized": true,
1889
+ "rstrip": false,
1890
+ "single_word": false,
1891
+ "special": false
1892
+ },
1893
+ "151879": {
1894
+ "content": "<|action_212|>",
1895
+ "lstrip": false,
1896
+ "normalized": true,
1897
+ "rstrip": false,
1898
+ "single_word": false,
1899
+ "special": false
1900
+ },
1901
+ "151880": {
1902
+ "content": "<|action_213|>",
1903
+ "lstrip": false,
1904
+ "normalized": true,
1905
+ "rstrip": false,
1906
+ "single_word": false,
1907
+ "special": false
1908
+ },
1909
+ "151881": {
1910
+ "content": "<|action_214|>",
1911
+ "lstrip": false,
1912
+ "normalized": true,
1913
+ "rstrip": false,
1914
+ "single_word": false,
1915
+ "special": false
1916
+ },
1917
+ "151882": {
1918
+ "content": "<|action_215|>",
1919
+ "lstrip": false,
1920
+ "normalized": true,
1921
+ "rstrip": false,
1922
+ "single_word": false,
1923
+ "special": false
1924
+ },
1925
+ "151883": {
1926
+ "content": "<|action_216|>",
1927
+ "lstrip": false,
1928
+ "normalized": true,
1929
+ "rstrip": false,
1930
+ "single_word": false,
1931
+ "special": false
1932
+ },
1933
+ "151884": {
1934
+ "content": "<|action_217|>",
1935
+ "lstrip": false,
1936
+ "normalized": true,
1937
+ "rstrip": false,
1938
+ "single_word": false,
1939
+ "special": false
1940
+ },
1941
+ "151885": {
1942
+ "content": "<|action_218|>",
1943
+ "lstrip": false,
1944
+ "normalized": true,
1945
+ "rstrip": false,
1946
+ "single_word": false,
1947
+ "special": false
1948
+ },
1949
+ "151886": {
1950
+ "content": "<|action_219|>",
1951
+ "lstrip": false,
1952
+ "normalized": true,
1953
+ "rstrip": false,
1954
+ "single_word": false,
1955
+ "special": false
1956
+ },
1957
+ "151887": {
1958
+ "content": "<|action_220|>",
1959
+ "lstrip": false,
1960
+ "normalized": true,
1961
+ "rstrip": false,
1962
+ "single_word": false,
1963
+ "special": false
1964
+ },
1965
+ "151888": {
1966
+ "content": "<|action_221|>",
1967
+ "lstrip": false,
1968
+ "normalized": true,
1969
+ "rstrip": false,
1970
+ "single_word": false,
1971
+ "special": false
1972
+ },
1973
+ "151889": {
1974
+ "content": "<|action_222|>",
1975
+ "lstrip": false,
1976
+ "normalized": true,
1977
+ "rstrip": false,
1978
+ "single_word": false,
1979
+ "special": false
1980
+ },
1981
+ "151890": {
1982
+ "content": "<|action_223|>",
1983
+ "lstrip": false,
1984
+ "normalized": true,
1985
+ "rstrip": false,
1986
+ "single_word": false,
1987
+ "special": false
1988
+ },
1989
+ "151891": {
1990
+ "content": "<|action_224|>",
1991
+ "lstrip": false,
1992
+ "normalized": true,
1993
+ "rstrip": false,
1994
+ "single_word": false,
1995
+ "special": false
1996
+ },
1997
+ "151892": {
1998
+ "content": "<|action_225|>",
1999
+ "lstrip": false,
2000
+ "normalized": true,
2001
+ "rstrip": false,
2002
+ "single_word": false,
2003
+ "special": false
2004
+ },
2005
+ "151893": {
2006
+ "content": "<|action_226|>",
2007
+ "lstrip": false,
2008
+ "normalized": true,
2009
+ "rstrip": false,
2010
+ "single_word": false,
2011
+ "special": false
2012
+ },
2013
+ "151894": {
2014
+ "content": "<|action_227|>",
2015
+ "lstrip": false,
2016
+ "normalized": true,
2017
+ "rstrip": false,
2018
+ "single_word": false,
2019
+ "special": false
2020
+ },
2021
+ "151895": {
2022
+ "content": "<|action_228|>",
2023
+ "lstrip": false,
2024
+ "normalized": true,
2025
+ "rstrip": false,
2026
+ "single_word": false,
2027
+ "special": false
2028
+ },
2029
+ "151896": {
2030
+ "content": "<|action_229|>",
2031
+ "lstrip": false,
2032
+ "normalized": true,
2033
+ "rstrip": false,
2034
+ "single_word": false,
2035
+ "special": false
2036
+ },
2037
+ "151897": {
2038
+ "content": "<|action_230|>",
2039
+ "lstrip": false,
2040
+ "normalized": true,
2041
+ "rstrip": false,
2042
+ "single_word": false,
2043
+ "special": false
2044
+ },
2045
+ "151898": {
2046
+ "content": "<|action_231|>",
2047
+ "lstrip": false,
2048
+ "normalized": true,
2049
+ "rstrip": false,
2050
+ "single_word": false,
2051
+ "special": false
2052
+ },
2053
+ "151899": {
2054
+ "content": "<|action_232|>",
2055
+ "lstrip": false,
2056
+ "normalized": true,
2057
+ "rstrip": false,
2058
+ "single_word": false,
2059
+ "special": false
2060
+ },
2061
+ "151900": {
2062
+ "content": "<|action_233|>",
2063
+ "lstrip": false,
2064
+ "normalized": true,
2065
+ "rstrip": false,
2066
+ "single_word": false,
2067
+ "special": false
2068
+ },
2069
+ "151901": {
2070
+ "content": "<|action_234|>",
2071
+ "lstrip": false,
2072
+ "normalized": true,
2073
+ "rstrip": false,
2074
+ "single_word": false,
2075
+ "special": false
2076
+ },
2077
+ "151902": {
2078
+ "content": "<|action_235|>",
2079
+ "lstrip": false,
2080
+ "normalized": true,
2081
+ "rstrip": false,
2082
+ "single_word": false,
2083
+ "special": false
2084
+ },
2085
+ "151903": {
2086
+ "content": "<|action_236|>",
2087
+ "lstrip": false,
2088
+ "normalized": true,
2089
+ "rstrip": false,
2090
+ "single_word": false,
2091
+ "special": false
2092
+ },
2093
+ "151904": {
2094
+ "content": "<|action_237|>",
2095
+ "lstrip": false,
2096
+ "normalized": true,
2097
+ "rstrip": false,
2098
+ "single_word": false,
2099
+ "special": false
2100
+ },
2101
+ "151905": {
2102
+ "content": "<|action_238|>",
2103
+ "lstrip": false,
2104
+ "normalized": true,
2105
+ "rstrip": false,
2106
+ "single_word": false,
2107
+ "special": false
2108
+ },
2109
+ "151906": {
2110
+ "content": "<|action_239|>",
2111
+ "lstrip": false,
2112
+ "normalized": true,
2113
+ "rstrip": false,
2114
+ "single_word": false,
2115
+ "special": false
2116
+ },
2117
+ "151907": {
2118
+ "content": "<|action_240|>",
2119
+ "lstrip": false,
2120
+ "normalized": true,
2121
+ "rstrip": false,
2122
+ "single_word": false,
2123
+ "special": false
2124
+ },
2125
+ "151908": {
2126
+ "content": "<|action_241|>",
2127
+ "lstrip": false,
2128
+ "normalized": true,
2129
+ "rstrip": false,
2130
+ "single_word": false,
2131
+ "special": false
2132
+ },
2133
+ "151909": {
2134
+ "content": "<|action_242|>",
2135
+ "lstrip": false,
2136
+ "normalized": true,
2137
+ "rstrip": false,
2138
+ "single_word": false,
2139
+ "special": false
2140
+ },
2141
+ "151910": {
2142
+ "content": "<|action_243|>",
2143
+ "lstrip": false,
2144
+ "normalized": true,
2145
+ "rstrip": false,
2146
+ "single_word": false,
2147
+ "special": false
2148
+ },
2149
+ "151911": {
2150
+ "content": "<|action_244|>",
2151
+ "lstrip": false,
2152
+ "normalized": true,
2153
+ "rstrip": false,
2154
+ "single_word": false,
2155
+ "special": false
2156
+ },
2157
+ "151912": {
2158
+ "content": "<|action_245|>",
2159
+ "lstrip": false,
2160
+ "normalized": true,
2161
+ "rstrip": false,
2162
+ "single_word": false,
2163
+ "special": false
2164
+ },
2165
+ "151913": {
2166
+ "content": "<|action_246|>",
2167
+ "lstrip": false,
2168
+ "normalized": true,
2169
+ "rstrip": false,
2170
+ "single_word": false,
2171
+ "special": false
2172
+ },
2173
+ "151914": {
2174
+ "content": "<|action_247|>",
2175
+ "lstrip": false,
2176
+ "normalized": true,
2177
+ "rstrip": false,
2178
+ "single_word": false,
2179
+ "special": false
2180
+ },
2181
+ "151915": {
2182
+ "content": "<|action_248|>",
2183
+ "lstrip": false,
2184
+ "normalized": true,
2185
+ "rstrip": false,
2186
+ "single_word": false,
2187
+ "special": false
2188
+ },
2189
+ "151916": {
2190
+ "content": "<|action_249|>",
2191
+ "lstrip": false,
2192
+ "normalized": true,
2193
+ "rstrip": false,
2194
+ "single_word": false,
2195
+ "special": false
2196
+ },
2197
+ "151917": {
2198
+ "content": "<|action_250|>",
2199
+ "lstrip": false,
2200
+ "normalized": true,
2201
+ "rstrip": false,
2202
+ "single_word": false,
2203
+ "special": false
2204
+ },
2205
+ "151918": {
2206
+ "content": "<|action_251|>",
2207
+ "lstrip": false,
2208
+ "normalized": true,
2209
+ "rstrip": false,
2210
+ "single_word": false,
2211
+ "special": false
2212
+ },
2213
+ "151919": {
2214
+ "content": "<|action_252|>",
2215
+ "lstrip": false,
2216
+ "normalized": true,
2217
+ "rstrip": false,
2218
+ "single_word": false,
2219
+ "special": false
2220
+ },
2221
+ "151920": {
2222
+ "content": "<|action_253|>",
2223
+ "lstrip": false,
2224
+ "normalized": true,
2225
+ "rstrip": false,
2226
+ "single_word": false,
2227
+ "special": false
2228
+ },
2229
+ "151921": {
2230
+ "content": "<|action_254|>",
2231
+ "lstrip": false,
2232
+ "normalized": true,
2233
+ "rstrip": false,
2234
+ "single_word": false,
2235
+ "special": false
2236
+ },
2237
+ "151922": {
2238
+ "content": "<|action_255|>",
2239
+ "lstrip": false,
2240
+ "normalized": true,
2241
+ "rstrip": false,
2242
+ "single_word": false,
2243
+ "special": false
2244
+ }
2245
+ },
2246
+ "additional_special_tokens": [
2247
+ "<|beginoftext|>",
2248
+ "<|mask|>"
2249
+ ],
2250
+ "auto_map": {
2251
+ "AutoProcessor": "processing_dreamvl.DreamVLProcessor",
2252
+ "AutoTokenizer": [
2253
+ "tokenization_dream.DreamTokenizer",
2254
+ null
2255
+ ]
2256
+ },
2257
+ "bos_token": "<|beginoftext|>",
2258
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|image_pad|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|video_pad|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}",
2259
+ "clean_up_tokenization_spaces": false,
2260
+ "eos_token": "<|endoftext|>",
2261
+ "errors": "replace",
2262
+ "extra_special_tokens": {},
2263
+ "mask_token": "<|mask|>",
2264
+ "model_max_length": 8192,
2265
+ "pad_token": "<|endoftext|>",
2266
+ "padding_side": "right",
2267
+ "processor_class": "DreamVLProcessor",
2268
+ "split_special_tokens": false,
2269
+ "tokenizer_class": "DreamTokenizer",
2270
+ "unk_token": null
2271
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff