Nuoya committed on
Commit deb06b6 · verified · 1 Parent(s): 4f6d9ec

upload trained model

added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "<PAD>": 32000
+ }
config.json ADDED
@@ -0,0 +1,3178 @@
1
+ {
2
+ "_name_or_path": "/mnt/public/peihong/models/RLinf-OpenVLAOFT-RoboTwin-SFT-move_can_pot",
3
+ "action_dim": 14,
4
+ "add_value_head": false,
5
+ "arch_specifier": "no-align+fused-gelu-mlp",
6
+ "architectures": [
7
+ "OpenVLAOFTForRLActionPrediction"
8
+ ],
9
+ "auto_map": {
10
+ "AutoConfig": "configuration_prismatic.OpenVLAConfig",
11
+ "AutoModelForVision2Seq": "modeling_prismatic.OpenVLAForActionPrediction"
12
+ },
13
+ "hf_llm_id": "meta-llama/Llama-2-7b-hf",
14
+ "image_resize_strategy": "resize-naive",
15
+ "image_sizes": [
16
+ 224,
17
+ 224
18
+ ],
19
+ "llm_backbone_id": "llama2-7b-pure",
20
+ "llm_max_length": 2048,
21
+ "max_prompt_length": 512,
22
+ "model_type": "openvla",
23
+ "n_action_bins": 256,
24
+ "norm_stats": {
25
+ "austin_buds_dataset_converted_externally_to_rlds": {
26
+ "action": {
27
+ "mask": [
28
+ true,
29
+ true,
30
+ true,
31
+ true,
32
+ true,
33
+ true,
34
+ false
35
+ ],
36
+ "max": [
37
+ 1.0,
38
+ 1.0,
39
+ 1.0,
40
+ 0.0,
41
+ 0.0,
42
+ 0.0,
43
+ 1.0
44
+ ],
45
+ "mean": [
46
+ -0.07678354531526566,
47
+ 0.0036849044263362885,
48
+ 0.05644911900162697,
49
+ 0.0,
50
+ 0.0,
51
+ 0.0,
52
+ 0.3510494828224182
53
+ ],
54
+ "min": [
55
+ -1.0,
56
+ -1.0,
57
+ -1.0,
58
+ 0.0,
59
+ 0.0,
60
+ 0.0,
61
+ 0.0
62
+ ],
63
+ "q01": [
64
+ -1.0,
65
+ -0.9599999785423279,
66
+ -0.8714285492897034,
67
+ 0.0,
68
+ 0.0,
69
+ 0.0,
70
+ 0.0
71
+ ],
72
+ "q99": [
73
+ 1.0,
74
+ 0.8600000143051147,
75
+ 1.0,
76
+ 0.0,
77
+ 0.0,
78
+ 0.0,
79
+ 1.0
80
+ ],
81
+ "std": [
82
+ 0.6367740631103516,
83
+ 0.37889179587364197,
84
+ 0.47796326875686646,
85
+ 0.0,
86
+ 0.0,
87
+ 0.0,
88
+ 0.47721168398857117
89
+ ]
90
+ },
91
+ "num_trajectories": 50,
92
+ "num_transitions": 34112,
93
+ "proprio": {
94
+ "max": [
95
+ 0.0,
96
+ 0.0,
97
+ 0.0,
98
+ 0.0,
99
+ 0.0,
100
+ 0.0,
101
+ 0.0
102
+ ],
103
+ "mean": [
104
+ 0.0,
105
+ 0.0,
106
+ 0.0,
107
+ 0.0,
108
+ 0.0,
109
+ 0.0,
110
+ 0.0
111
+ ],
112
+ "min": [
113
+ 0.0,
114
+ 0.0,
115
+ 0.0,
116
+ 0.0,
117
+ 0.0,
118
+ 0.0,
119
+ 0.0
120
+ ],
121
+ "q01": [
122
+ 0.0,
123
+ 0.0,
124
+ 0.0,
125
+ 0.0,
126
+ 0.0,
127
+ 0.0,
128
+ 0.0
129
+ ],
130
+ "q99": [
131
+ 0.0,
132
+ 0.0,
133
+ 0.0,
134
+ 0.0,
135
+ 0.0,
136
+ 0.0,
137
+ 0.0
138
+ ],
139
+ "std": [
140
+ 0.0,
141
+ 0.0,
142
+ 0.0,
143
+ 0.0,
144
+ 0.0,
145
+ 0.0,
146
+ 0.0
147
+ ]
148
+ }
149
+ },
150
+ "austin_sailor_dataset_converted_externally_to_rlds": {
151
+ "action": {
152
+ "mask": [
153
+ true,
154
+ true,
155
+ true,
156
+ true,
157
+ true,
158
+ true,
159
+ false
160
+ ],
161
+ "max": [
162
+ 1.0,
163
+ 1.0,
164
+ 1.0,
165
+ 0.0,
166
+ 0.0,
167
+ 0.375,
168
+ 1.0
169
+ ],
170
+ "mean": [
171
+ 0.011825348250567913,
172
+ 0.006461074110120535,
173
+ 0.06023626774549484,
174
+ 0.0,
175
+ 0.0,
176
+ 0.0016465914668515325,
177
+ 0.5260950326919556
178
+ ],
179
+ "min": [
180
+ -1.0,
181
+ -1.0,
182
+ -1.0,
183
+ 0.0,
184
+ 0.0,
185
+ -0.375,
186
+ 0.0
187
+ ],
188
+ "q01": [
189
+ -1.0,
190
+ -0.9828571677207947,
191
+ -0.6000000238418579,
192
+ 0.0,
193
+ 0.0,
194
+ -0.17249999940395355,
195
+ 0.0
196
+ ],
197
+ "q99": [
198
+ 1.0,
199
+ 0.9457142949104309,
200
+ 1.0,
201
+ 0.0,
202
+ 0.0,
203
+ 0.17892856895923615,
204
+ 1.0
205
+ ],
206
+ "std": [
207
+ 0.46348899602890015,
208
+ 0.41240179538726807,
209
+ 0.411862850189209,
210
+ 0.0,
211
+ 0.0,
212
+ 0.0578610822558403,
213
+ 0.49894046783447266
214
+ ]
215
+ },
216
+ "num_trajectories": 240,
217
+ "num_transitions": 353094,
218
+ "proprio": {
219
+ "max": [
220
+ 0.0,
221
+ 0.0,
222
+ 0.0,
223
+ 0.0,
224
+ 0.0,
225
+ 0.0,
226
+ 0.0
227
+ ],
228
+ "mean": [
229
+ 0.0,
230
+ 0.0,
231
+ 0.0,
232
+ 0.0,
233
+ 0.0,
234
+ 0.0,
235
+ 0.0
236
+ ],
237
+ "min": [
238
+ 0.0,
239
+ 0.0,
240
+ 0.0,
241
+ 0.0,
242
+ 0.0,
243
+ 0.0,
244
+ 0.0
245
+ ],
246
+ "q01": [
247
+ 0.0,
248
+ 0.0,
249
+ 0.0,
250
+ 0.0,
251
+ 0.0,
252
+ 0.0,
253
+ 0.0
254
+ ],
255
+ "q99": [
256
+ 0.0,
257
+ 0.0,
258
+ 0.0,
259
+ 0.0,
260
+ 0.0,
261
+ 0.0,
262
+ 0.0
263
+ ],
264
+ "std": [
265
+ 0.0,
266
+ 0.0,
267
+ 0.0,
268
+ 0.0,
269
+ 0.0,
270
+ 0.0,
271
+ 0.0
272
+ ]
273
+ }
274
+ },
275
+ "austin_sirius_dataset_converted_externally_to_rlds": {
276
+ "action": {
277
+ "mask": [
278
+ true,
279
+ true,
280
+ true,
281
+ true,
282
+ true,
283
+ true,
284
+ false
285
+ ],
286
+ "max": [
287
+ 1.0002285242080688,
288
+ 0.960608720779419,
289
+ 1.105179786682129,
290
+ 0.0,
291
+ 0.0,
292
+ 0.341785728931427,
293
+ 1.0
294
+ ],
295
+ "mean": [
296
+ 0.07747682929039001,
297
+ 0.03195561468601227,
298
+ 0.04244732856750488,
299
+ 0.0,
300
+ 0.0,
301
+ -0.01603456400334835,
302
+ 0.43260177969932556
303
+ ],
304
+ "min": [
305
+ -1.0183025598526,
306
+ -0.9800000190734863,
307
+ -0.9774575233459473,
308
+ 0.0,
309
+ 0.0,
310
+ -0.34607142210006714,
311
+ 0.0
312
+ ],
313
+ "q01": [
314
+ -0.780905865430832,
315
+ -0.5667179036140442,
316
+ -0.5254343223571777,
317
+ 0.0,
318
+ 0.0,
319
+ -0.28495091378688814,
320
+ 0.0
321
+ ],
322
+ "q99": [
323
+ 0.9569637751579284,
324
+ 0.6971374487876891,
325
+ 0.8124888157844541,
326
+ 0.0,
327
+ 0.0,
328
+ 0.1971428543329239,
329
+ 1.0
330
+ ],
331
+ "std": [
332
+ 0.3906329572200775,
333
+ 0.2998155355453491,
334
+ 0.2782271206378937,
335
+ 0.0,
336
+ 0.0,
337
+ 0.08120622485876083,
338
+ 0.49528297781944275
339
+ ]
340
+ },
341
+ "num_trajectories": 559,
342
+ "num_transitions": 279939,
343
+ "proprio": {
344
+ "max": [
345
+ 0.0,
346
+ 0.0,
347
+ 0.0,
348
+ 0.0,
349
+ 0.0,
350
+ 0.0,
351
+ 0.0
352
+ ],
353
+ "mean": [
354
+ 0.0,
355
+ 0.0,
356
+ 0.0,
357
+ 0.0,
358
+ 0.0,
359
+ 0.0,
360
+ 0.0
361
+ ],
362
+ "min": [
363
+ 0.0,
364
+ 0.0,
365
+ 0.0,
366
+ 0.0,
367
+ 0.0,
368
+ 0.0,
369
+ 0.0
370
+ ],
371
+ "q01": [
372
+ 0.0,
373
+ 0.0,
374
+ 0.0,
375
+ 0.0,
376
+ 0.0,
377
+ 0.0,
378
+ 0.0
379
+ ],
380
+ "q99": [
381
+ 0.0,
382
+ 0.0,
383
+ 0.0,
384
+ 0.0,
385
+ 0.0,
386
+ 0.0,
387
+ 0.0
388
+ ],
389
+ "std": [
390
+ 0.0,
391
+ 0.0,
392
+ 0.0,
393
+ 0.0,
394
+ 0.0,
395
+ 0.0,
396
+ 0.0
397
+ ]
398
+ }
399
+ },
400
+ "bc_z": {
401
+ "action": {
402
+ "mask": [
403
+ true,
404
+ true,
405
+ true,
406
+ true,
407
+ true,
408
+ true,
409
+ false
410
+ ],
411
+ "max": [
412
+ 0.2165454924106598,
413
+ 0.1251407265663147,
414
+ 0.10772687941789627,
415
+ 0.33544227480888367,
416
+ 0.28117990493774414,
417
+ 0.40614867210388184,
418
+ 1.0
419
+ ],
420
+ "mean": [
421
+ -0.009958467446267605,
422
+ 0.0008958321413956583,
423
+ 0.004995597992092371,
424
+ 0.00029755113064311445,
425
+ -0.008735382929444313,
426
+ -0.030693737789988518,
427
+ 0.8344562649726868
428
+ ],
429
+ "min": [
430
+ -0.1677047461271286,
431
+ -0.14630407094955444,
432
+ -0.10066790133714676,
433
+ -0.29421567916870117,
434
+ -0.32101404666900635,
435
+ -0.4635624885559082,
436
+ 0.0
437
+ ],
438
+ "q01": [
439
+ -0.09220654994249344,
440
+ -0.06456145539879798,
441
+ -0.049121275544166565,
442
+ -0.11594625547528267,
443
+ -0.14152548640966414,
444
+ -0.2251061636209488,
445
+ 0.0
446
+ ],
447
+ "q99": [
448
+ 0.07628866866230968,
449
+ 0.058019736707210584,
450
+ 0.052540797740221024,
451
+ 0.11740604028105736,
452
+ 0.11703975558280955,
453
+ 0.16729306846857078,
454
+ 1.0
455
+ ],
456
+ "std": [
457
+ 0.03053455986082554,
458
+ 0.0231423731893301,
459
+ 0.020641816779971123,
460
+ 0.04155943542718887,
461
+ 0.046427831053733826,
462
+ 0.0769818127155304,
463
+ 0.3610210120677948
464
+ ]
465
+ },
466
+ "num_trajectories": 43264,
467
+ "num_transitions": 6015535,
468
+ "proprio": {
469
+ "max": [
470
+ 0.0,
471
+ 0.0,
472
+ 0.0,
473
+ 0.0,
474
+ 0.0,
475
+ 0.0,
476
+ 0.0
477
+ ],
478
+ "mean": [
479
+ 0.0,
480
+ 0.0,
481
+ 0.0,
482
+ 0.0,
483
+ 0.0,
484
+ 0.0,
485
+ 0.0
486
+ ],
487
+ "min": [
488
+ 0.0,
489
+ 0.0,
490
+ 0.0,
491
+ 0.0,
492
+ 0.0,
493
+ 0.0,
494
+ 0.0
495
+ ],
496
+ "q01": [
497
+ 0.0,
498
+ 0.0,
499
+ 0.0,
500
+ 0.0,
501
+ 0.0,
502
+ 0.0,
503
+ 0.0
504
+ ],
505
+ "q99": [
506
+ 0.0,
507
+ 0.0,
508
+ 0.0,
509
+ 0.0,
510
+ 0.0,
511
+ 0.0,
512
+ 0.0
513
+ ],
514
+ "std": [
515
+ 0.0,
516
+ 0.0,
517
+ 0.0,
518
+ 0.0,
519
+ 0.0,
520
+ 0.0,
521
+ 0.0
522
+ ]
523
+ }
524
+ },
525
+ "berkeley_autolab_ur5": {
526
+ "action": {
527
+ "mask": [
528
+ true,
529
+ true,
530
+ true,
531
+ true,
532
+ true,
533
+ true,
534
+ false
535
+ ],
536
+ "max": [
537
+ 0.019999999552965164,
538
+ 0.019999999552965164,
539
+ 0.019999999552965164,
540
+ 0.06666667014360428,
541
+ 0.06666667014360428,
542
+ 0.06666667014360428,
543
+ 1.0
544
+ ],
545
+ "mean": [
546
+ 0.0005683620693162084,
547
+ 0.001217700308188796,
548
+ -0.0005296372692100704,
549
+ 0.00021029810886830091,
550
+ 6.0695128922816366e-05,
551
+ 0.001204986940138042,
552
+ 0.6298308372497559
553
+ ],
554
+ "min": [
555
+ -0.019999999552965164,
556
+ -0.019999999552965164,
557
+ -0.019999999552965164,
558
+ -0.06666667014360428,
559
+ -0.06666667014360428,
560
+ -0.06666667014360428,
561
+ 0.0
562
+ ],
563
+ "q01": [
564
+ -0.019999999552965164,
565
+ -0.019999999552965164,
566
+ -0.019999999552965164,
567
+ -0.02628571353852749,
568
+ -0.06666667014360428,
569
+ -0.03847619146108627,
570
+ 0.0
571
+ ],
572
+ "q99": [
573
+ 0.019999999552965164,
574
+ 0.019999999552965164,
575
+ 0.019999999552965164,
576
+ 0.031809523701667786,
577
+ 0.06666667014360428,
578
+ 0.036571428179740906,
579
+ 1.0
580
+ ],
581
+ "std": [
582
+ 0.0115329809486866,
583
+ 0.007990492507815361,
584
+ 0.009577835910022259,
585
+ 0.009432995691895485,
586
+ 0.016427582129836082,
587
+ 0.011053967289626598,
588
+ 0.48267969489097595
589
+ ]
590
+ },
591
+ "num_trajectories": 1000,
592
+ "num_transitions": 97939,
593
+ "proprio": {
594
+ "max": [
595
+ 0.0,
596
+ 0.0,
597
+ 0.0,
598
+ 0.0,
599
+ 0.0,
600
+ 0.0,
601
+ 0.0
602
+ ],
603
+ "mean": [
604
+ 0.0,
605
+ 0.0,
606
+ 0.0,
607
+ 0.0,
608
+ 0.0,
609
+ 0.0,
610
+ 0.0
611
+ ],
612
+ "min": [
613
+ 0.0,
614
+ 0.0,
615
+ 0.0,
616
+ 0.0,
617
+ 0.0,
618
+ 0.0,
619
+ 0.0
620
+ ],
621
+ "q01": [
622
+ 0.0,
623
+ 0.0,
624
+ 0.0,
625
+ 0.0,
626
+ 0.0,
627
+ 0.0,
628
+ 0.0
629
+ ],
630
+ "q99": [
631
+ 0.0,
632
+ 0.0,
633
+ 0.0,
634
+ 0.0,
635
+ 0.0,
636
+ 0.0,
637
+ 0.0
638
+ ],
639
+ "std": [
640
+ 0.0,
641
+ 0.0,
642
+ 0.0,
643
+ 0.0,
644
+ 0.0,
645
+ 0.0,
646
+ 0.0
647
+ ]
648
+ }
649
+ },
650
+ "berkeley_cable_routing": {
651
+ "action": {
652
+ "mask": [
653
+ true,
654
+ true,
655
+ true,
656
+ true,
657
+ true,
658
+ true,
659
+ false
660
+ ],
661
+ "max": [
662
+ 0.9633283019065857,
663
+ 1.0,
664
+ 1.0,
665
+ 0.0,
666
+ 0.0,
667
+ 1.0,
668
+ 0.0
669
+ ],
670
+ "mean": [
671
+ -0.07139874249696732,
672
+ 0.023609008640050888,
673
+ 0.10241943597793579,
674
+ 0.0,
675
+ 0.0,
676
+ 0.049671024084091187,
677
+ 0.0
678
+ ],
679
+ "min": [
680
+ -0.9809081554412842,
681
+ -0.9554349184036255,
682
+ -0.9994775056838989,
683
+ 0.0,
684
+ 0.0,
685
+ -1.0,
686
+ 0.0
687
+ ],
688
+ "q01": [
689
+ -0.5534318816661835,
690
+ -0.4797285574674606,
691
+ -0.5314934802055359,
692
+ 0.0,
693
+ 0.0,
694
+ -0.8855219376087189,
695
+ 0.0
696
+ ],
697
+ "q99": [
698
+ 0.42652835428714786,
699
+ 0.5000944086909298,
700
+ 0.639823433756829,
701
+ 0.0,
702
+ 0.0,
703
+ 0.984243879914284,
704
+ 0.0
705
+ ],
706
+ "std": [
707
+ 0.1815500408411026,
708
+ 0.1810990273952484,
709
+ 0.21220779418945312,
710
+ 0.0,
711
+ 0.0,
712
+ 0.3475511968135834,
713
+ 0.0
714
+ ]
715
+ },
716
+ "num_trajectories": 1647,
717
+ "num_transitions": 42328,
718
+ "proprio": {
719
+ "max": [
720
+ 0.0,
721
+ 0.0,
722
+ 0.0,
723
+ 0.0,
724
+ 0.0,
725
+ 0.0,
726
+ 0.0
727
+ ],
728
+ "mean": [
729
+ 0.0,
730
+ 0.0,
731
+ 0.0,
732
+ 0.0,
733
+ 0.0,
734
+ 0.0,
735
+ 0.0
736
+ ],
737
+ "min": [
738
+ 0.0,
739
+ 0.0,
740
+ 0.0,
741
+ 0.0,
742
+ 0.0,
743
+ 0.0,
744
+ 0.0
745
+ ],
746
+ "q01": [
747
+ 0.0,
748
+ 0.0,
749
+ 0.0,
750
+ 0.0,
751
+ 0.0,
752
+ 0.0,
753
+ 0.0
754
+ ],
755
+ "q99": [
756
+ 0.0,
757
+ 0.0,
758
+ 0.0,
759
+ 0.0,
760
+ 0.0,
761
+ 0.0,
762
+ 0.0
763
+ ],
764
+ "std": [
765
+ 0.0,
766
+ 0.0,
767
+ 0.0,
768
+ 0.0,
769
+ 0.0,
770
+ 0.0,
771
+ 0.0
772
+ ]
773
+ }
774
+ },
775
+ "berkeley_fanuc_manipulation": {
776
+ "action": {
777
+ "mask": [
778
+ true,
779
+ true,
780
+ true,
781
+ true,
782
+ true,
783
+ true,
784
+ false
785
+ ],
786
+ "max": [
787
+ 0.009999999776482582,
788
+ 0.009999999776482582,
789
+ 0.009999999776482582,
790
+ 0.03490658476948738,
791
+ 0.03490658476948738,
792
+ 0.03490658476948738,
793
+ 1.0
794
+ ],
795
+ "mean": [
796
+ 0.0007744057802483439,
797
+ -0.00031240080716088414,
798
+ -0.0015001941937953234,
799
+ -0.0007515158504247665,
800
+ -0.00015832878125365824,
801
+ 0.00014327642566058785,
802
+ 0.699295699596405
803
+ ],
804
+ "min": [
805
+ -0.009999999776482582,
806
+ -0.009999999776482582,
807
+ -0.009999999776482582,
808
+ -0.03490658476948738,
809
+ -0.03490658476948738,
810
+ -0.03490658476948738,
811
+ 0.0
812
+ ],
813
+ "q01": [
814
+ -0.009999999776482582,
815
+ -0.009999999776482582,
816
+ -0.009999999776482582,
817
+ -0.03490658476948738,
818
+ 0.0,
819
+ -0.03490658476948738,
820
+ 0.0
821
+ ],
822
+ "q99": [
823
+ 0.009999999776482582,
824
+ 0.009999999776482582,
825
+ 0.009999999776482582,
826
+ 0.03490658476948738,
827
+ 0.0,
828
+ 0.03490658476948738,
829
+ 1.0
830
+ ],
831
+ "std": [
832
+ 0.0034070091787725687,
833
+ 0.0049921851605176926,
834
+ 0.005344334989786148,
835
+ 0.00759894959628582,
836
+ 0.004081866703927517,
837
+ 0.008568956516683102,
838
+ 0.4586937427520752
839
+ ]
840
+ },
841
+ "num_trajectories": 415,
842
+ "num_transitions": 62613,
843
+ "proprio": {
844
+ "max": [
845
+ 0.0,
846
+ 0.0,
847
+ 0.0,
848
+ 0.0,
849
+ 0.0,
850
+ 0.0,
851
+ 0.0
852
+ ],
853
+ "mean": [
854
+ 0.0,
855
+ 0.0,
856
+ 0.0,
857
+ 0.0,
858
+ 0.0,
859
+ 0.0,
860
+ 0.0
861
+ ],
862
+ "min": [
863
+ 0.0,
864
+ 0.0,
865
+ 0.0,
866
+ 0.0,
867
+ 0.0,
868
+ 0.0,
869
+ 0.0
870
+ ],
871
+ "q01": [
872
+ 0.0,
873
+ 0.0,
874
+ 0.0,
875
+ 0.0,
876
+ 0.0,
877
+ 0.0,
878
+ 0.0
879
+ ],
880
+ "q99": [
881
+ 0.0,
882
+ 0.0,
883
+ 0.0,
884
+ 0.0,
885
+ 0.0,
886
+ 0.0,
887
+ 0.0
888
+ ],
889
+ "std": [
890
+ 0.0,
891
+ 0.0,
892
+ 0.0,
893
+ 0.0,
894
+ 0.0,
895
+ 0.0,
896
+ 0.0
897
+ ]
898
+ }
899
+ },
900
+ "bridge_orig": {
901
+ "action": {
902
+ "mask": [
903
+ true,
904
+ true,
905
+ true,
906
+ true,
907
+ true,
908
+ true,
909
+ false
910
+ ],
911
+ "max": [
912
+ 0.41691166162490845,
913
+ 0.25864794850349426,
914
+ 0.21218234300613403,
915
+ 3.122201919555664,
916
+ 1.8618112802505493,
917
+ 6.280478477478027,
918
+ 1.0
919
+ ],
920
+ "mean": [
921
+ 0.0002334194869035855,
922
+ 0.00013004911306779832,
923
+ -0.00012762474943883717,
924
+ -0.0001556558854645118,
925
+ -0.0004039328487124294,
926
+ 0.00023557482927571982,
927
+ 0.5764579176902771
928
+ ],
929
+ "min": [
930
+ -0.4007510244846344,
931
+ -0.13874775171279907,
932
+ -0.22553899884223938,
933
+ -3.2010786533355713,
934
+ -1.8618112802505493,
935
+ -6.279075622558594,
936
+ 0.0
937
+ ],
938
+ "q01": [
939
+ -0.02872725307941437,
940
+ -0.04170349963009357,
941
+ -0.026093858778476715,
942
+ -0.08092105075716972,
943
+ -0.09288699507713317,
944
+ -0.20718276381492615,
945
+ 0.0
946
+ ],
947
+ "q99": [
948
+ 0.028309678435325586,
949
+ 0.040855254605412394,
950
+ 0.040161586627364146,
951
+ 0.08192047759890528,
952
+ 0.07792850524187081,
953
+ 0.20382574498653397,
954
+ 1.0
955
+ ],
956
+ "std": [
957
+ 0.009765930473804474,
958
+ 0.013689135201275349,
959
+ 0.012667362578213215,
960
+ 0.028534092009067535,
961
+ 0.030637972056865692,
962
+ 0.07691419124603271,
963
+ 0.4973701536655426
964
+ ]
965
+ },
966
+ "num_trajectories": 60064,
967
+ "num_transitions": 2135463,
968
+ "proprio": {
969
+ "max": [
970
+ 0.0,
971
+ 0.0,
972
+ 0.0,
973
+ 0.0,
974
+ 0.0,
975
+ 0.0,
976
+ 0.0
977
+ ],
978
+ "mean": [
979
+ 0.0,
980
+ 0.0,
981
+ 0.0,
982
+ 0.0,
983
+ 0.0,
984
+ 0.0,
985
+ 0.0
986
+ ],
987
+ "min": [
988
+ 0.0,
989
+ 0.0,
990
+ 0.0,
991
+ 0.0,
992
+ 0.0,
993
+ 0.0,
994
+ 0.0
995
+ ],
996
+ "q01": [
997
+ 0.0,
998
+ 0.0,
999
+ 0.0,
1000
+ 0.0,
1001
+ 0.0,
1002
+ 0.0,
1003
+ 0.0
1004
+ ],
1005
+ "q99": [
1006
+ 0.0,
1007
+ 0.0,
1008
+ 0.0,
1009
+ 0.0,
1010
+ 0.0,
1011
+ 0.0,
1012
+ 0.0
1013
+ ],
1014
+ "std": [
1015
+ 0.0,
1016
+ 0.0,
1017
+ 0.0,
1018
+ 0.0,
1019
+ 0.0,
1020
+ 0.0,
1021
+ 0.0
1022
+ ]
1023
+ }
1024
+ },
1025
+ "cmu_stretch": {
1026
+ "action": {
1027
+ "mask": [
1028
+ true,
1029
+ true,
1030
+ true,
1031
+ true,
1032
+ true,
1033
+ true,
1034
+ false
1035
+ ],
1036
+ "max": [
1037
+ 0.02338407188653946,
1038
+ 0.0,
1039
+ 0.023404927924275398,
1040
+ 0.0,
1041
+ 0.0,
1042
+ 0.0,
1043
+ 1.0
1044
+ ],
1045
+ "mean": [
1046
+ 0.00036304505192674696,
1047
+ 0.0,
1048
+ 0.0016466958913952112,
1049
+ 0.0,
1050
+ 0.0,
1051
+ 0.0,
1052
+ 0.3987048268318176
1053
+ ],
1054
+ "min": [
1055
+ -0.019353797659277916,
1056
+ 0.0,
1057
+ -0.02019215188920498,
1058
+ 0.0,
1059
+ 0.0,
1060
+ 0.0,
1061
+ 0.0
1062
+ ],
1063
+ "q01": [
1064
+ -0.011175686959177256,
1065
+ 0.0,
1066
+ -0.0032206363626755773,
1067
+ 0.0,
1068
+ 0.0,
1069
+ 0.0,
1070
+ 0.0
1071
+ ],
1072
+ "q99": [
1073
+ 0.014501785952597848,
1074
+ 0.0,
1075
+ 0.015056106168776728,
1076
+ 0.0,
1077
+ 0.0,
1078
+ 0.0,
1079
+ 1.0
1080
+ ],
1081
+ "std": [
1082
+ 0.004081828519701958,
1083
+ 0.0,
1084
+ 0.0037743328139185905,
1085
+ 0.0,
1086
+ 0.0,
1087
+ 0.0,
1088
+ 0.48963725566864014
1089
+ ]
1090
+ },
1091
+ "num_trajectories": 135,
1092
+ "num_transitions": 25016,
1093
+ "proprio": {
1094
+ "max": [
1095
+ 0.0,
1096
+ 0.0,
1097
+ 0.0,
1098
+ 0.0,
1099
+ 0.0,
1100
+ 0.0,
1101
+ 0.0
1102
+ ],
1103
+ "mean": [
1104
+ 0.0,
1105
+ 0.0,
1106
+ 0.0,
1107
+ 0.0,
1108
+ 0.0,
1109
+ 0.0,
1110
+ 0.0
1111
+ ],
1112
+ "min": [
1113
+ 0.0,
1114
+ 0.0,
1115
+ 0.0,
1116
+ 0.0,
1117
+ 0.0,
1118
+ 0.0,
1119
+ 0.0
1120
+ ],
1121
+ "q01": [
1122
+ 0.0,
1123
+ 0.0,
1124
+ 0.0,
1125
+ 0.0,
1126
+ 0.0,
1127
+ 0.0,
1128
+ 0.0
1129
+ ],
1130
+ "q99": [
1131
+ 0.0,
1132
+ 0.0,
1133
+ 0.0,
1134
+ 0.0,
1135
+ 0.0,
1136
+ 0.0,
1137
+ 0.0
1138
+ ],
1139
+ "std": [
1140
+ 0.0,
1141
+ 0.0,
1142
+ 0.0,
1143
+ 0.0,
1144
+ 0.0,
1145
+ 0.0,
1146
+ 0.0
1147
+ ]
1148
+ }
1149
+ },
1150
+ "dlr_edan_shared_control_converted_externally_to_rlds": {
1151
+ "action": {
1152
+ "mask": [
1153
+ true,
1154
+ true,
1155
+ true,
1156
+ true,
1157
+ true,
1158
+ true,
1159
+ false
1160
+ ],
1161
+ "max": [
1162
+ 0.18991442024707794,
1163
+ 0.0739002525806427,
1164
+ 0.18064819276332855,
1165
+ 0.0866486132144928,
1166
+ 0.13464981317520142,
1167
+ 0.16910280287265778,
1168
+ 1.0
1169
+ ],
1170
+ "mean": [
1171
+ 0.006647810339927673,
1172
+ -0.0007657372043468058,
1173
+ 0.006522852927446365,
1174
+ 0.0011679717572405934,
1175
+ -0.006395625416189432,
1176
+ -0.011902998201549053,
1177
+ 0.6985887289047241
1178
+ ],
1179
+ "min": [
1180
+ -0.10054297000169754,
1181
+ -0.08427435159683228,
1182
+ -0.13533438742160797,
1183
+ -0.17556548118591309,
1184
+ -0.18485672771930695,
1185
+ -0.2680685818195343,
1186
+ 0.0
1187
+ ],
1188
+ "q01": [
1189
+ -0.02987122368067503,
1190
+ -0.06013262912631035,
1191
+ -0.08286409199237824,
1192
+ -0.05924444157630205,
1193
+ -0.15986866518855095,
1194
+ -0.15636983573436739,
1195
+ 0.0
1196
+ ],
1197
+ "q99": [
1198
+ 0.08832092039287087,
1199
+ 0.042126184627413736,
1200
+ 0.11311905644834042,
1201
+ 0.0643695573508739,
1202
+ 0.03941855944693088,
1203
+ 0.156646853685379,
1204
+ 1.0
1205
+ ],
1206
+ "std": [
1207
+ 0.021393608301877975,
1208
+ 0.01814231649041176,
1209
+ 0.03374375030398369,
1210
+ 0.01743541844189167,
1211
+ 0.03394376486539841,
1212
+ 0.04641875624656677,
1213
+ 0.4588589072227478
1214
+ ]
1215
+ },
1216
+ "num_trajectories": 104,
1217
+ "num_transitions": 8928,
1218
+ "proprio": {
1219
+ "max": [
1220
+ 0.0,
1221
+ 0.0,
1222
+ 0.0,
1223
+ 0.0,
1224
+ 0.0,
1225
+ 0.0,
1226
+ 0.0
1227
+ ],
1228
+ "mean": [
1229
+ 0.0,
1230
+ 0.0,
1231
+ 0.0,
1232
+ 0.0,
1233
+ 0.0,
1234
+ 0.0,
1235
+ 0.0
1236
+ ],
1237
+ "min": [
1238
+ 0.0,
1239
+ 0.0,
1240
+ 0.0,
1241
+ 0.0,
1242
+ 0.0,
1243
+ 0.0,
1244
+ 0.0
1245
+ ],
1246
+ "q01": [
1247
+ 0.0,
1248
+ 0.0,
1249
+ 0.0,
1250
+ 0.0,
1251
+ 0.0,
1252
+ 0.0,
1253
+ 0.0
1254
+ ],
1255
+ "q99": [
1256
+ 0.0,
1257
+ 0.0,
1258
+ 0.0,
1259
+ 0.0,
1260
+ 0.0,
1261
+ 0.0,
1262
+ 0.0
1263
+ ],
1264
+ "std": [
1265
+ 0.0,
1266
+ 0.0,
1267
+ 0.0,
1268
+ 0.0,
1269
+ 0.0,
1270
+ 0.0,
1271
+ 0.0
1272
+ ]
1273
+ }
1274
+ },
1275
+ "dobbe": {
1276
+ "action": {
1277
+ "mask": [
1278
+ true,
1279
+ true,
1280
+ true,
1281
+ true,
1282
+ true,
1283
+ true,
1284
+ false
1285
+ ],
1286
+ "max": [
1287
+ 38.590423583984375,
1288
+ 17.932697296142578,
1289
+ 4.843764305114746,
1290
+ 1.4372116327285767,
1291
+ 0.4340403974056244,
1292
+ 1.2057193517684937,
1293
+ 0.9998947381973267
1294
+ ],
1295
+ "mean": [
1296
+ -0.0001120665911003016,
1297
+ 0.0011229600058868527,
1298
+ -0.00010194431524723768,
1299
+ -7.371398532995954e-05,
1300
+ -0.00067531579406932,
1301
+ -5.6643435527803376e-05,
1302
+ 0.6318281888961792
1303
+ ],
1304
+ "min": [
1305
+ -5.700923442840576,
1306
+ -21.605947494506836,
1307
+ -123.72489929199219,
1308
+ -1.7229845523834229,
1309
+ -0.4998578727245331,
1310
+ -0.8867913484573364,
1311
+ 1.4196479014572105e-06
1312
+ ],
1313
+ "q01": [
1314
+ -0.01119564864784479,
1315
+ -0.014266146533191203,
1316
+ -0.0071747214533388615,
1317
+ -0.009444301575422287,
1318
+ -0.03990109823644161,
1319
+ -0.017422311007976532,
1320
+ 4.003279136668425e-05
1321
+ ],
1322
+ "q99": [
1323
+ 0.01015154086053368,
1324
+ 0.017181577533483497,
1325
+ 0.007216989761218411,
1326
+ 0.010380979906767595,
1327
+ 0.03556173853576176,
1328
+ 0.018032474815845446,
1329
+ 0.9982578039169312
1330
+ ],
1331
+ "std": [
1332
+ 0.04264938458800316,
1333
+ 0.04428559169173241,
1334
+ 0.12224084138870239,
1335
+ 0.005388413090258837,
1336
+ 0.011246449314057827,
1337
+ 0.006287882570177317,
1338
+ 0.39732322096824646
1339
+ ]
1340
+ },
1341
+ "num_trajectories": 5208,
1342
+ "num_transitions": 1139911,
1343
+ "proprio": {
1344
+ "max": [
1345
+ 0.0,
1346
+ 0.0,
1347
+ 0.0,
1348
+ 0.0,
1349
+ 0.0,
1350
+ 0.0,
1351
+ 0.0
1352
+ ],
1353
+ "mean": [
1354
+ 0.0,
1355
+ 0.0,
1356
+ 0.0,
1357
+ 0.0,
1358
+ 0.0,
1359
+ 0.0,
1360
+ 0.0
1361
+ ],
1362
+ "min": [
1363
+ 0.0,
1364
+ 0.0,
1365
+ 0.0,
1366
+ 0.0,
1367
+ 0.0,
1368
+ 0.0,
1369
+ 0.0
1370
+ ],
1371
+ "q01": [
1372
+ 0.0,
1373
+ 0.0,
1374
+ 0.0,
1375
+ 0.0,
1376
+ 0.0,
1377
+ 0.0,
1378
+ 0.0
1379
+ ],
1380
+ "q99": [
1381
+ 0.0,
1382
+ 0.0,
1383
+ 0.0,
1384
+ 0.0,
1385
+ 0.0,
1386
+ 0.0,
1387
+ 0.0
1388
+ ],
1389
+ "std": [
1390
+ 0.0,
1391
+ 0.0,
1392
+ 0.0,
1393
+ 0.0,
1394
+ 0.0,
1395
+ 0.0,
1396
+ 0.0
1397
+ ]
1398
+ }
1399
+ },
1400
+ "fmb_dataset": {
1401
+ "action": {
1402
+ "mask": [
1403
+ true,
1404
+ true,
1405
+ true,
1406
+ true,
1407
+ true,
1408
+ true,
1409
+ false
1410
+ ],
1411
+ "max": [
1412
+ 1.399999976158142,
1413
+ 1.0,
1414
+ 1.399999976158142,
1415
+ 1.0,
1416
+ 1.0,
1417
+ 1.0,
1418
+ 1.0
1419
+ ],
1420
+ "mean": [
1421
+ 0.059029702097177505,
1422
+ -0.06476633995771408,
1423
+ -0.09787475317716599,
1424
+ 0.004325388930737972,
1425
+ 0.00028963794466108084,
1426
+ -0.04457257315516472,
1427
+ 0.7336440086364746
1428
+ ],
1429
+ "min": [
1430
+ -1.399999976158142,
1431
+ -1.399999976158142,
1432
+ -1.0,
1433
+ -1.0,
1434
+ -1.0,
1435
+ -1.0,
1436
+ 0.0
1437
+ ],
1438
+ "q01": [
1439
+ -0.8257142901420593,
1440
+ -1.399999976158142,
1441
+ -1.0,
1442
+ -1.0,
1443
+ -0.3028571307659149,
1444
+ -1.0,
1445
+ 0.0
1446
+ ],
1447
+ "q99": [
1448
+ 1.0,
1449
+ 0.5257142782211304,
1450
+ 1.0,
1451
+ 1.0,
1452
+ 0.3400000035762787,
1453
+ 1.0,
1454
+ 1.0
1455
+ ],
1456
+ "std": [
1457
+ 0.28809213638305664,
1458
+ 0.2820415794849396,
1459
+ 0.4626740515232086,
1460
+ 0.3266514539718628,
1461
+ 0.10842999070882797,
1462
+ 0.3440099358558655,
1463
+ 0.4435282051563263
1464
+ ]
1465
+ },
1466
+ "num_trajectories": 8612,
1467
+ "num_transitions": 1137459,
1468
+ "proprio": {
1469
+ "max": [
1470
+ 0.0,
1471
+ 0.0,
1472
+ 0.0,
1473
+ 0.0,
1474
+ 0.0,
1475
+ 0.0,
1476
+ 0.0
1477
+ ],
1478
+ "mean": [
1479
+ 0.0,
1480
+ 0.0,
1481
+ 0.0,
1482
+ 0.0,
1483
+ 0.0,
1484
+ 0.0,
1485
+ 0.0
1486
+ ],
1487
+ "min": [
1488
+ 0.0,
1489
+ 0.0,
1490
+ 0.0,
1491
+ 0.0,
1492
+ 0.0,
1493
+ 0.0,
1494
+ 0.0
1495
+ ],
1496
+ "q01": [
1497
+ 0.0,
1498
+ 0.0,
1499
+ 0.0,
1500
+ 0.0,
1501
+ 0.0,
1502
+ 0.0,
1503
+ 0.0
1504
+ ],
1505
+ "q99": [
1506
+ 0.0,
1507
+ 0.0,
1508
+ 0.0,
1509
+ 0.0,
1510
+ 0.0,
1511
+ 0.0,
1512
+ 0.0
1513
+ ],
1514
+ "std": [
1515
+ 0.0,
1516
+ 0.0,
1517
+ 0.0,
1518
+ 0.0,
1519
+ 0.0,
1520
+ 0.0,
1521
+ 0.0
1522
+ ]
1523
+ }
1524
+ },
1525
+ "fractal20220817_data": {
1526
+ "action": {
1527
+ "mask": [
1528
+ true,
1529
+ true,
1530
+ true,
1531
+ true,
1532
+ true,
1533
+ true,
1534
+ false
1535
+ ],
1536
+ "max": [
1537
+ 2.9984593391418457,
1538
+ 22.09052848815918,
1539
+ 2.7507524490356445,
1540
+ 1.570636510848999,
1541
+ 1.5321086645126343,
1542
+ 1.5691522359848022,
1543
+ 1.0
1544
+ ],
1545
+ "mean": [
1546
+ 0.006987582892179489,
1547
+ 0.006265917327255011,
1548
+ -0.01262515690177679,
1549
+ 0.04333311319351196,
1550
+ -0.005756212864071131,
1551
+ 0.0009130256366916001,
1552
+ 0.5354204773902893
1553
+ ],
1554
+ "min": [
1555
+ -2.0204520225524902,
1556
+ -5.497899532318115,
1557
+ -2.031663417816162,
1558
+ -1.569917917251587,
1559
+ -1.569892168045044,
1560
+ -1.570419430732727,
1561
+ 0.0
1562
+ ],
1563
+ "q01": [
1564
+ -0.22453527510166169,
1565
+ -0.14820013284683228,
1566
+ -0.231589707583189,
1567
+ -0.3517994859814644,
1568
+ -0.4193011274933815,
1569
+ -0.43643461108207704,
1570
+ 0.0
1571
+ ],
1572
+ "q99": [
1573
+ 0.17824687153100965,
1574
+ 0.14938379630446405,
1575
+ 0.21842354819178575,
1576
+ 0.5892666035890578,
1577
+ 0.35272657424211445,
1578
+ 0.44796681255102094,
1579
+ 1.0
1580
+ ],
1581
+ "std": [
1582
+ 0.0692116990685463,
1583
+ 0.05970962345600128,
1584
+ 0.07353084534406662,
1585
+ 0.15610496699810028,
1586
+ 0.13164450228214264,
1587
+ 0.14593800902366638,
1588
+ 0.497110515832901
1589
+ ]
1590
+ },
1591
+ "num_trajectories": 87212,
1592
+ "num_transitions": 3786400,
1593
+ "proprio": {
1594
+ "max": [
1595
+ 0.0,
1596
+ 0.0,
1597
+ 0.0,
1598
+ 0.0,
1599
+ 0.0,
1600
+ 0.0,
1601
+ 0.0
1602
+ ],
1603
+ "mean": [
1604
+ 0.0,
1605
+ 0.0,
1606
+ 0.0,
1607
+ 0.0,
1608
+ 0.0,
1609
+ 0.0,
1610
+ 0.0
1611
+ ],
1612
+ "min": [
1613
+ 0.0,
1614
+ 0.0,
1615
+ 0.0,
1616
+ 0.0,
1617
+ 0.0,
1618
+ 0.0,
1619
+ 0.0
1620
+ ],
1621
+ "q01": [
1622
+ 0.0,
1623
+ 0.0,
1624
+ 0.0,
1625
+ 0.0,
1626
+ 0.0,
1627
+ 0.0,
1628
+ 0.0
1629
+ ],
1630
+ "q99": [
1631
+ 0.0,
1632
+ 0.0,
1633
+ 0.0,
1634
+ 0.0,
1635
+ 0.0,
1636
+ 0.0,
1637
+ 0.0
1638
+ ],
1639
+ "std": [
1640
+ 0.0,
1641
+ 0.0,
1642
+ 0.0,
1643
+ 0.0,
1644
+ 0.0,
1645
+ 0.0,
1646
+ 0.0
1647
+ ]
1648
+ }
1649
+ },
1650
+ "furniture_bench_dataset_converted_externally_to_rlds": {
1651
+ "action": {
1652
+ "mask": [
1653
+ true,
1654
+ true,
1655
+ true,
1656
+ true,
1657
+ true,
1658
+ true,
1659
+ false
1660
+ ],
1661
+ "max": [
1662
+ 0.10000000149011612,
1663
+ 0.10000000149011612,
1664
+ 0.10000000149011612,
1665
+ 0.8651833534240723,
1666
+ 1.0909736156463623,
1667
+ 2.863185405731201,
1668
+ 1.0
1669
+ ],
1670
+ "mean": [
1671
+ 0.00014610752987209707,
1672
+ 0.0010830952087417245,
1673
+ 0.0006224989192560315,
1674
+ -0.003303206292912364,
1675
+ -0.0026880695950239897,
1676
+ 0.018242603167891502,
1677
+ 0.48854944109916687
1678
+ ],
1679
+ "min": [
1680
+ -0.10495579987764359,
1681
+ -0.10939455777406693,
1682
+ -0.10000000149011612,
1683
+ -0.971906840801239,
1684
+ -1.0475432872772217,
1685
+ -3.06000018119812,
1686
+ 0.0
1687
+ ],
1688
+ "q01": [
1689
+ -0.053988199681043625,
1690
+ -0.05049169331789017,
1691
+ -0.032499241530895236,
1692
+ -0.1953887003660202,
1693
+ -0.41674559473991396,
1694
+ -0.8886768388748169,
1695
+ 0.0
1696
+ ],
1697
+ "q99": [
1698
+ 0.05414841488003723,
1699
+ 0.04965164884924884,
1700
+ 0.060055799782276154,
1701
+ 0.18231668293476103,
1702
+ 0.39867786407470646,
1703
+ 0.8772023963928218,
1704
+ 1.0
1705
+ ],
1706
+ "std": [
1707
+ 0.01610708422958851,
1708
+ 0.014891477301716805,
1709
+ 0.014014219865202904,
1710
+ 0.058274295181035995,
1711
+ 0.11417088657617569,
1712
+ 0.33479776978492737,
1713
+ 0.49991825222969055
1714
+ ]
1715
+ },
1716
+ "num_trajectories": 5100,
1717
+ "num_transitions": 3948057,
1718
+ "proprio": {
1719
+ "max": [
1720
+ 0.0,
1721
+ 0.0,
1722
+ 0.0,
1723
+ 0.0,
1724
+ 0.0,
1725
+ 0.0,
1726
+ 0.0
1727
+ ],
1728
+ "mean": [
1729
+ 0.0,
1730
+ 0.0,
1731
+ 0.0,
1732
+ 0.0,
1733
+ 0.0,
1734
+ 0.0,
1735
+ 0.0
1736
+ ],
1737
+ "min": [
1738
+ 0.0,
1739
+ 0.0,
1740
+ 0.0,
1741
+ 0.0,
1742
+ 0.0,
1743
+ 0.0,
1744
+ 0.0
1745
+ ],
1746
+ "q01": [
1747
+ 0.0,
1748
+ 0.0,
1749
+ 0.0,
1750
+ 0.0,
1751
+ 0.0,
1752
+ 0.0,
1753
+ 0.0
1754
+ ],
1755
+ "q99": [
1756
+ 0.0,
1757
+ 0.0,
1758
+ 0.0,
1759
+ 0.0,
1760
+ 0.0,
1761
+ 0.0,
1762
+ 0.0
1763
+ ],
1764
+ "std": [
1765
+ 0.0,
1766
+ 0.0,
1767
+ 0.0,
1768
+ 0.0,
1769
+ 0.0,
1770
+ 0.0,
1771
+ 0.0
1772
+ ]
1773
+ }
1774
+ },
1775
+ "iamlab_cmu_pickup_insert_converted_externally_to_rlds": {
1776
+ "action": {
1777
+ "mask": [
1778
+ true,
1779
+ true,
1780
+ true,
1781
+ true,
1782
+ true,
1783
+ true,
1784
+ false
1785
+ ],
1786
+ "max": [
1787
+ 0.6634981632232666,
1788
+ 0.23428471386432648,
1789
+ 0.4308285415172577,
1790
+ 3.1415927410125732,
1791
+ 0.13647015392780304,
1792
+ 3.141592502593994,
1793
+ 1.0
1794
+ ],
1795
+ "mean": [
1796
+ 0.5274372696876526,
1797
+ 0.02858201041817665,
1798
+ 0.18712575733661652,
1799
+ 1.2339589595794678,
1800
+ 0.03226623684167862,
1801
+ -1.4199490547180176,
1802
+ 0.5550631880760193
1803
+ ],
1804
+ "min": [
1805
+ 0.3071657121181488,
1806
+ -0.29754969477653503,
1807
+ 0.06578229367733002,
1808
+ -3.1415927410125732,
1809
+ -0.04584203287959099,
1810
+ -3.141592502593994,
1811
+ 0.0
1812
+ ],
1813
+ "q01": [
1814
+ 0.3148897051811218,
1815
+ -0.20317550599575043,
1816
+ 0.06785467118024827,
1817
+ -3.140952730178833,
1818
+ -0.029743434861302376,
1819
+ -3.141091251373291,
1820
+ 0.0
1821
+ ],
1822
+ "q99": [
1823
+ 0.6472805738449097,
1824
+ 0.20846802592277527,
1825
+ 0.36855655312538155,
1826
+ 3.1409926891326903,
1827
+ 0.11424950212240226,
1828
+ 3.1410969257354737,
1829
+ 1.0
1830
+ ],
1831
+ "std": [
1832
+ 0.08108345419168472,
1833
+ 0.1116757020354271,
1834
+ 0.07747554779052734,
1835
+ 2.8737246990203857,
1836
+ 0.02774704433977604,
1837
+ 2.7678682804107666,
1838
+ 0.49695101380348206
1839
+ ]
1840
+ },
1841
+ "num_trajectories": 631,
1842
+ "num_transitions": 146241,
1843
+ "proprio": {
1844
+ "max": [
1845
+ 0.0,
1846
+ 0.0,
1847
+ 0.0,
1848
+ 0.0,
1849
+ 0.0,
1850
+ 0.0,
1851
+ 0.0
1852
+ ],
1853
+ "mean": [
1854
+ 0.0,
1855
+ 0.0,
1856
+ 0.0,
1857
+ 0.0,
1858
+ 0.0,
1859
+ 0.0,
1860
+ 0.0
1861
+ ],
1862
+ "min": [
1863
+ 0.0,
1864
+ 0.0,
1865
+ 0.0,
1866
+ 0.0,
1867
+ 0.0,
1868
+ 0.0,
1869
+ 0.0
1870
+ ],
1871
+ "q01": [
1872
+ 0.0,
1873
+ 0.0,
1874
+ 0.0,
1875
+ 0.0,
1876
+ 0.0,
1877
+ 0.0,
1878
+ 0.0
1879
+ ],
1880
+ "q99": [
1881
+ 0.0,
1882
+ 0.0,
1883
+ 0.0,
1884
+ 0.0,
1885
+ 0.0,
1886
+ 0.0,
1887
+ 0.0
1888
+ ],
1889
+ "std": [
1890
+ 0.0,
1891
+ 0.0,
1892
+ 0.0,
1893
+ 0.0,
1894
+ 0.0,
1895
+ 0.0,
1896
+ 0.0
1897
+ ]
1898
+ }
1899
+ },
1900
+ "jaco_play": {
1901
+ "action": {
1902
+ "mask": [
1903
+ true,
1904
+ true,
1905
+ true,
1906
+ true,
1907
+ true,
1908
+ true,
1909
+ false
1910
+ ],
1911
+ "max": [
1912
+ 0.20000000298023224,
1913
+ 0.20000000298023224,
1914
+ 0.20000000298023224,
1915
+ 0.0,
1916
+ 0.0,
1917
+ 0.0,
1918
+ 1.0
1919
+ ],
1920
+ "mean": [
1921
+ 0.0009658430935814977,
1922
+ -0.00580078037455678,
1923
+ -0.00395062193274498,
1924
+ 0.0,
1925
+ 0.0,
1926
+ 0.0,
1927
+ 0.34934908151626587
1928
+ ],
1929
+ "min": [
1930
+ -0.20000000298023224,
1931
+ -0.20000000298023224,
1932
+ -0.20000000298023224,
1933
+ 0.0,
1934
+ 0.0,
1935
+ 0.0,
1936
+ 0.0
1937
+ ],
1938
+ "q01": [
1939
+ -0.20000000298023224,
1940
+ -0.20000000298023224,
1941
+ -0.20000000298023224,
1942
+ 0.0,
1943
+ 0.0,
1944
+ 0.0,
1945
+ 0.0
1946
+ ],
1947
+ "q99": [
1948
+ 0.20000000298023224,
1949
+ 0.20000000298023224,
1950
+ 0.20000000298023224,
1951
+ 0.0,
1952
+ 0.0,
1953
+ 0.0,
1954
+ 1.0
1955
+ ],
1956
+ "std": [
1957
+ 0.12235074490308762,
1958
+ 0.09678777307271957,
1959
+ 0.11155334860086441,
1960
+ 0.0,
1961
+ 0.0,
1962
+ 0.0,
1963
+ 0.4768252968788147
1964
+ ]
1965
+ },
1966
+ "num_trajectories": 1085,
1967
+ "num_transitions": 77965,
1968
+ "proprio": {
1969
+ "max": [
1970
+ 0.0,
1971
+ 0.0,
1972
+ 0.0,
1973
+ 0.0,
1974
+ 0.0,
1975
+ 0.0,
1976
+ 0.0
1977
+ ],
1978
+ "mean": [
1979
+ 0.0,
1980
+ 0.0,
1981
+ 0.0,
1982
+ 0.0,
1983
+ 0.0,
1984
+ 0.0,
1985
+ 0.0
1986
+ ],
1987
+ "min": [
1988
+ 0.0,
1989
+ 0.0,
1990
+ 0.0,
1991
+ 0.0,
1992
+ 0.0,
1993
+ 0.0,
1994
+ 0.0
1995
+ ],
1996
+ "q01": [
1997
+ 0.0,
1998
+ 0.0,
1999
+ 0.0,
2000
+ 0.0,
2001
+ 0.0,
2002
+ 0.0,
2003
+ 0.0
2004
+ ],
2005
+ "q99": [
2006
+ 0.0,
2007
+ 0.0,
2008
+ 0.0,
2009
+ 0.0,
2010
+ 0.0,
2011
+ 0.0,
2012
+ 0.0
2013
+ ],
2014
+ "std": [
2015
+ 0.0,
2016
+ 0.0,
2017
+ 0.0,
2018
+ 0.0,
2019
+ 0.0,
2020
+ 0.0,
2021
+ 0.0
2022
+ ]
2023
+ }
2024
+ },
2025
+ "kuka": {
2026
+ "action": {
2027
+ "mask": [
2028
+ true,
2029
+ true,
2030
+ true,
2031
+ true,
2032
+ true,
2033
+ true,
2034
+ false
2035
+ ],
2036
+ "max": [
2037
+ 0.1697135865688324,
2038
+ 0.2777623236179352,
2039
+ 0.43710532784461975,
2040
+ 0.0,
2041
+ 0.0,
2042
+ 1.9684287309646606,
2043
+ 1.0
2044
+ ],
2045
+ "mean": [
2046
+ -0.0004668905457947403,
2047
+ 0.00040138536132872105,
2048
+ -0.001280792523175478,
2049
+ 0.0,
2050
+ 0.0,
2051
+ -0.03722453489899635,
2052
+ 0.4131543040275574
2053
+ ],
2054
+ "min": [
2055
+ -0.159867063164711,
2056
+ -0.2892282009124756,
2057
+ -0.2795473635196686,
2058
+ 0.0,
2059
+ 0.0,
2060
+ -1.9875637292861938,
2061
+ 0.0
2062
+ ],
2063
+ "q01": [
2064
+ -0.06619441494345665,
2065
+ -0.08713878810405731,
2066
+ -0.15083016991615295,
2067
+ 0.0,
2068
+ 0.0,
2069
+ -0.5415697038173676,
2070
+ 0.0
2071
+ ],
2072
+ "q99": [
2073
+ 0.06601839080452929,
2074
+ 0.08732476785779003,
2075
+ 0.18168179214000715,
2076
+ 0.0,
2077
+ 0.0,
2078
+ 0.2923380345106127,
2079
+ 1.0
2080
+ ],
2081
+ "std": [
2082
+ 0.02083250693976879,
2083
+ 0.02915887162089348,
2084
+ 0.06422865390777588,
2085
+ 0.0,
2086
+ 0.0,
2087
+ 0.14224295318126678,
2088
+ 0.49086448550224304
2089
+ ]
2090
+ },
2091
+ "num_trajectories": 209880,
2092
+ "num_transitions": 2455879,
2093
+ "proprio": {
2094
+ "max": [
2095
+ 0.0,
2096
+ 0.0,
2097
+ 0.0,
2098
+ 0.0,
2099
+ 0.0,
2100
+ 0.0,
2101
+ 0.0
2102
+ ],
2103
+ "mean": [
2104
+ 0.0,
2105
+ 0.0,
2106
+ 0.0,
2107
+ 0.0,
2108
+ 0.0,
2109
+ 0.0,
2110
+ 0.0
2111
+ ],
2112
+ "min": [
2113
+ 0.0,
2114
+ 0.0,
2115
+ 0.0,
2116
+ 0.0,
2117
+ 0.0,
2118
+ 0.0,
2119
+ 0.0
2120
+ ],
2121
+ "q01": [
2122
+ 0.0,
2123
+ 0.0,
2124
+ 0.0,
2125
+ 0.0,
2126
+ 0.0,
2127
+ 0.0,
2128
+ 0.0
2129
+ ],
2130
+ "q99": [
2131
+ 0.0,
2132
+ 0.0,
2133
+ 0.0,
2134
+ 0.0,
2135
+ 0.0,
2136
+ 0.0,
2137
+ 0.0
2138
+ ],
2139
+ "std": [
2140
+ 0.0,
2141
+ 0.0,
2142
+ 0.0,
2143
+ 0.0,
2144
+ 0.0,
2145
+ 0.0,
2146
+ 0.0
2147
+ ]
2148
+ }
2149
+ },
2150
+ "nyu_franka_play_dataset_converted_externally_to_rlds": {
2151
+ "action": {
2152
+ "mask": [
2153
+ true,
2154
+ true,
2155
+ true,
2156
+ true,
2157
+ true,
2158
+ true,
2159
+ false
2160
+ ],
2161
+ "max": [
2162
+ 0.06424188613891602,
2163
+ 0.07027634978294373,
2164
+ 0.06129661202430725,
2165
+ 6.281067848205566,
2166
+ 0.1967729926109314,
2167
+ 0.26377415657043457,
2168
+ 1.0
2169
+ ],
2170
+ "mean": [
2171
+ 0.001021989737637341,
2172
+ -0.00012002651783404872,
2173
+ 0.00032894269679673016,
2174
+ 0.0015034361276775599,
2175
+ -0.002198522910475731,
2176
+ -0.001663230243138969,
2177
+ 0.7230083346366882
2178
+ ],
2179
+ "min": [
2180
+ -0.05952230095863342,
2181
+ -0.07232445478439331,
2182
+ -0.06730806827545166,
2183
+ -6.278434753417969,
2184
+ -0.21479034423828125,
2185
+ -0.3627619743347168,
2186
+ 0.0
2187
+ ],
2188
+ "q01": [
2189
+ -0.03199600875377655,
2190
+ -0.032861671447753905,
2191
+ -0.03368805110454559,
2192
+ -0.12080862045288086,
2193
+ -0.12175218224525451,
2194
+ -0.11370223641395569,
2195
+ 0.0
2196
+ ],
2197
+ "q99": [
2198
+ 0.03101520001888276,
2199
+ 0.0373908892273903,
2200
+ 0.03646374464035038,
2201
+ 0.11764093399047852,
2202
+ 0.1258920183777809,
2203
+ 0.09366151213645942,
2204
+ 1.0
2205
+ ],
2206
+ "std": [
2207
+ 0.01327415369451046,
2208
+ 0.013215910643339157,
2209
+ 0.012822109274566174,
2210
+ 0.2732451558113098,
2211
+ 0.057022541761398315,
2212
+ 0.039172880351543427,
2213
+ 0.44752755761146545
2214
+ ]
2215
+ },
2216
+ "num_trajectories": 456,
2217
+ "num_transitions": 44875,
2218
+ "proprio": {
2219
+ "max": [
2220
+ 0.0,
2221
+ 0.0,
2222
+ 0.0,
2223
+ 0.0,
2224
+ 0.0,
2225
+ 0.0,
2226
+ 0.0
2227
+ ],
2228
+ "mean": [
2229
+ 0.0,
2230
+ 0.0,
2231
+ 0.0,
2232
+ 0.0,
2233
+ 0.0,
2234
+ 0.0,
2235
+ 0.0
2236
+ ],
2237
+ "min": [
2238
+ 0.0,
2239
+ 0.0,
2240
+ 0.0,
2241
+ 0.0,
2242
+ 0.0,
2243
+ 0.0,
2244
+ 0.0
2245
+ ],
2246
+ "q01": [
2247
+ 0.0,
2248
+ 0.0,
2249
+ 0.0,
2250
+ 0.0,
2251
+ 0.0,
2252
+ 0.0,
2253
+ 0.0
2254
+ ],
2255
+ "q99": [
2256
+ 0.0,
2257
+ 0.0,
2258
+ 0.0,
2259
+ 0.0,
2260
+ 0.0,
2261
+ 0.0,
2262
+ 0.0
2263
+ ],
2264
+ "std": [
2265
+ 0.0,
2266
+ 0.0,
2267
+ 0.0,
2268
+ 0.0,
2269
+ 0.0,
2270
+ 0.0,
2271
+ 0.0
2272
+ ]
2273
+ }
2274
+ },
2275
+ "roboturk": {
2276
+ "action": {
2277
+ "mask": [
2278
+ true,
2279
+ true,
2280
+ true,
2281
+ true,
2282
+ true,
2283
+ true,
2284
+ false
2285
+ ],
2286
+ "max": [
2287
+ 0.39124172925949097,
2288
+ 0.4601028263568878,
2289
+ 0.4870833456516266,
2290
+ 1.816888689994812,
2291
+ 1.8240282535552979,
2292
+ 1.4824820756912231,
2293
+ 1.0
2294
+ ],
2295
+ "mean": [
2296
+ 0.0014448732836171985,
2297
+ -0.0015945249469950795,
2298
+ -0.0011753785656765103,
2299
+ 0.0023012510500848293,
2300
+ -0.0009382463176734746,
2301
+ -0.00011485807772260159,
2302
+ 0.5746025443077087
2303
+ ],
2304
+ "min": [
2305
+ -0.6546999216079712,
2306
+ -0.6365841031074524,
2307
+ -0.4217723608016968,
2308
+ -1.6695482730865479,
2309
+ -1.8023357391357422,
2310
+ -1.4630827903747559,
2311
+ 0.0
2312
+ ],
2313
+ "q01": [
2314
+ -0.1342635464668274,
2315
+ -0.19996687173843383,
2316
+ -0.1482972100377083,
2317
+ -0.20720748245716095,
2318
+ -0.09676413893699647,
2319
+ -0.18075634717941286,
2320
+ 0.0
2321
+ ],
2322
+ "q99": [
2323
+ 0.14956976801157001,
2324
+ 0.1805950567126275,
2325
+ 0.18841815620660796,
2326
+ 0.21615413755178453,
2327
+ 0.09457383215427405,
2328
+ 0.18543301910162005,
2329
+ 1.0
2330
+ ],
2331
+ "std": [
2332
+ 0.04935386776924133,
2333
+ 0.0635455846786499,
2334
+ 0.061164740473032,
2335
+ 0.09553450345993042,
2336
+ 0.08420111238956451,
2337
+ 0.06517903506755829,
2338
+ 0.49452081322669983
2339
+ ]
2340
+ },
2341
+ "num_trajectories": 1995,
2342
+ "num_transitions": 187507,
2343
+ "proprio": {
2344
+ "max": [
2345
+ 0.0,
2346
+ 0.0,
2347
+ 0.0,
2348
+ 0.0,
2349
+ 0.0,
2350
+ 0.0,
2351
+ 0.0
2352
+ ],
2353
+ "mean": [
2354
+ 0.0,
2355
+ 0.0,
2356
+ 0.0,
2357
+ 0.0,
2358
+ 0.0,
2359
+ 0.0,
2360
+ 0.0
2361
+ ],
2362
+ "min": [
2363
+ 0.0,
2364
+ 0.0,
2365
+ 0.0,
2366
+ 0.0,
2367
+ 0.0,
2368
+ 0.0,
2369
+ 0.0
2370
+ ],
2371
+ "q01": [
2372
+ 0.0,
2373
+ 0.0,
2374
+ 0.0,
2375
+ 0.0,
2376
+ 0.0,
2377
+ 0.0,
2378
+ 0.0
2379
+ ],
2380
+ "q99": [
2381
+ 0.0,
2382
+ 0.0,
2383
+ 0.0,
2384
+ 0.0,
2385
+ 0.0,
2386
+ 0.0,
2387
+ 0.0
2388
+ ],
2389
+ "std": [
2390
+ 0.0,
2391
+ 0.0,
2392
+ 0.0,
2393
+ 0.0,
2394
+ 0.0,
2395
+ 0.0,
2396
+ 0.0
2397
+ ]
2398
+ }
2399
+ },
2400
+ "stanford_hydra_dataset_converted_externally_to_rlds": {
2401
+ "action": {
2402
+ "mask": [
2403
+ true,
2404
+ true,
2405
+ true,
2406
+ true,
2407
+ true,
2408
+ true,
2409
+ false
2410
+ ],
2411
+ "max": [
2412
+ 0.02499854564666748,
2413
+ 0.02499903365969658,
2414
+ 0.024999922141432762,
2415
+ 0.24974457919597626,
2416
+ 0.24997030198574066,
2417
+ 0.24999946355819702,
2418
+ 1.0
2419
+ ],
2420
+ "mean": [
2421
+ 0.0007790001109242439,
2422
+ 0.00013707754260394722,
2423
+ -0.0002548607881180942,
2424
+ 0.0012903271708637476,
2425
+ -0.004751681815832853,
2426
+ 0.002692886395379901,
2427
+ 0.48855218291282654
2428
+ ],
2429
+ "min": [
2430
+ -0.024999044835567474,
2431
+ -0.024999700486660004,
2432
+ -0.02499929815530777,
2433
+ -0.24993225932121277,
2434
+ -0.2499666064977646,
2435
+ -0.2499932497739792,
2436
+ 0.0
2437
+ ],
2438
+ "q01": [
2439
+ -0.019992006458342076,
2440
+ -0.02415412735193968,
2441
+ -0.022941758055239916,
2442
+ -0.11085530579090118,
2443
+ -0.12024572037160397,
2444
+ -0.13314770206809043,
2445
+ 0.0
2446
+ ],
2447
+ "q99": [
2448
+ 0.022886231057345868,
2449
+ 0.022358838934451335,
2450
+ 0.02410089675337076,
2451
+ 0.12370114490389822,
2452
+ 0.11323311634361738,
2453
+ 0.18474749639630164,
2454
+ 1.0
2455
+ ],
2456
+ "std": [
2457
+ 0.008022161200642586,
2458
+ 0.009131459519267082,
2459
+ 0.009574338793754578,
2460
+ 0.04122216999530792,
2461
+ 0.0384303517639637,
2462
+ 0.04606688767671585,
2463
+ 0.49976691603660583
2464
+ ]
2465
+ },
2466
+ "num_trajectories": 570,
2467
+ "num_transitions": 358234,
2468
+ "proprio": {
2469
+ "max": [
2470
+ 0.0,
2471
+ 0.0,
2472
+ 0.0,
2473
+ 0.0,
2474
+ 0.0,
2475
+ 0.0,
2476
+ 0.0
2477
+ ],
2478
+ "mean": [
2479
+ 0.0,
2480
+ 0.0,
2481
+ 0.0,
2482
+ 0.0,
2483
+ 0.0,
2484
+ 0.0,
2485
+ 0.0
2486
+ ],
2487
+ "min": [
2488
+ 0.0,
2489
+ 0.0,
2490
+ 0.0,
2491
+ 0.0,
2492
+ 0.0,
2493
+ 0.0,
2494
+ 0.0
2495
+ ],
2496
+ "q01": [
2497
+ 0.0,
2498
+ 0.0,
2499
+ 0.0,
2500
+ 0.0,
2501
+ 0.0,
2502
+ 0.0,
2503
+ 0.0
2504
+ ],
2505
+ "q99": [
2506
+ 0.0,
2507
+ 0.0,
2508
+ 0.0,
2509
+ 0.0,
2510
+ 0.0,
2511
+ 0.0,
2512
+ 0.0
2513
+ ],
2514
+ "std": [
2515
+ 0.0,
2516
+ 0.0,
2517
+ 0.0,
2518
+ 0.0,
2519
+ 0.0,
2520
+ 0.0,
2521
+ 0.0
2522
+ ]
2523
+ }
2524
+ },
2525
+ "taco_play": {
2526
+ "action": {
2527
+ "mask": [
2528
+ true,
2529
+ true,
2530
+ true,
2531
+ true,
2532
+ true,
2533
+ true,
2534
+ false
2535
+ ],
2536
+ "max": [
2537
+ 1.4915844202041626,
2538
+ 2.1842432022094727,
2539
+ 2.6836395263671875,
2540
+ 5.035226821899414,
2541
+ 2.665864944458008,
2542
+ 4.250768661499023,
2543
+ 1.0
2544
+ ],
2545
+ "mean": [
2546
+ -0.003845922416076064,
2547
+ 0.009671456180512905,
2548
+ 0.012780580669641495,
2549
+ -0.005403771996498108,
2550
+ -0.009606587700545788,
2551
+ -0.002480733208358288,
2552
+ 0.4263913035392761
2553
+ ],
2554
+ "min": [
2555
+ -4.242457866668701,
2556
+ -3.192805051803589,
2557
+ -1.3371467590332031,
2558
+ -4.202683448791504,
2559
+ -2.6722638607025146,
2560
+ -3.3467135429382324,
2561
+ 0.0
2562
+ ],
2563
+ "q01": [
2564
+ -0.7106140398979186,
2565
+ -1.056944659948349,
2566
+ -0.5878450274467468,
2567
+ -0.7682853937149048,
2568
+ -0.7180147767066956,
2569
+ -1.5527938604354858,
2570
+ 0.0
2571
+ ],
2572
+ "q99": [
2573
+ 0.6482916426658629,
2574
+ 1.0051310062408447,
2575
+ 0.9480248689651489,
2576
+ 0.6926478147506714,
2577
+ 0.6351067513227462,
2578
+ 1.628010264635086,
2579
+ 1.0
2580
+ ],
2581
+ "std": [
2582
+ 0.23254038393497467,
2583
+ 0.36298269033432007,
2584
+ 0.28692901134490967,
2585
+ 0.2617705166339874,
2586
+ 0.2438892275094986,
2587
+ 0.5216503143310547,
2588
+ 0.4946896731853485
2589
+ ]
2590
+ },
2591
+ "num_trajectories": 3603,
2592
+ "num_transitions": 237798,
2593
+ "proprio": {
2594
+ "max": [
2595
+ 0.0,
2596
+ 0.0,
2597
+ 0.0,
2598
+ 0.0,
2599
+ 0.0,
2600
+ 0.0,
2601
+ 0.0
2602
+ ],
2603
+ "mean": [
2604
+ 0.0,
2605
+ 0.0,
2606
+ 0.0,
2607
+ 0.0,
2608
+ 0.0,
2609
+ 0.0,
2610
+ 0.0
2611
+ ],
2612
+ "min": [
2613
+ 0.0,
2614
+ 0.0,
2615
+ 0.0,
2616
+ 0.0,
2617
+ 0.0,
2618
+ 0.0,
2619
+ 0.0
2620
+ ],
2621
+ "q01": [
2622
+ 0.0,
2623
+ 0.0,
2624
+ 0.0,
2625
+ 0.0,
2626
+ 0.0,
2627
+ 0.0,
2628
+ 0.0
2629
+ ],
2630
+ "q99": [
2631
+ 0.0,
2632
+ 0.0,
2633
+ 0.0,
2634
+ 0.0,
2635
+ 0.0,
2636
+ 0.0,
2637
+ 0.0
2638
+ ],
2639
+ "std": [
2640
+ 0.0,
2641
+ 0.0,
2642
+ 0.0,
2643
+ 0.0,
2644
+ 0.0,
2645
+ 0.0,
2646
+ 0.0
2647
+ ]
2648
+ }
2649
+ },
2650
+ "toto": {
2651
+ "action": {
2652
+ "mask": [
2653
+ true,
2654
+ true,
2655
+ true,
2656
+ true,
2657
+ true,
2658
+ true,
2659
+ false
2660
+ ],
2661
+ "max": [
2662
+ 0.6839867234230042,
2663
+ 0.4454185664653778,
2664
+ 0.7984078526496887,
2665
+ 2.120781660079956,
2666
+ 1.371164321899414,
2667
+ 1.4118704795837402,
2668
+ 0.0
2669
+ ],
2670
+ "mean": [
2671
+ 0.38542115688323975,
2672
+ 0.007769413758069277,
2673
+ 0.3632740378379822,
2674
+ -0.6652036905288696,
2675
+ 0.1890396922826767,
2676
+ 0.03298724442720413,
2677
+ 0.0
2678
+ ],
2679
+ "min": [
2680
+ 0.09922284632921219,
2681
+ -0.5180193781852722,
2682
+ 0.13791072368621826,
2683
+ -2.635117530822754,
2684
+ -1.0734480619430542,
2685
+ -1.9282547235488892,
2686
+ 0.0
2687
+ ],
2688
+ "q01": [
2689
+ 0.1756722891330719,
2690
+ -0.3077590811252594,
2691
+ 0.235383919775486,
2692
+ -2.0908505964279174,
2693
+ -0.6191593289375306,
2694
+ -0.7488683319091797,
2695
+ 0.0
2696
+ ],
2697
+ "q99": [
2698
+ 0.6136963081359863,
2699
+ 0.33704194784164443,
2700
+ 0.6681221985816956,
2701
+ 0.7422861719131538,
2702
+ 0.7955395007133507,
2703
+ 0.740464625358582,
2704
+ 0.0
2705
+ ],
2706
+ "std": [
2707
+ 0.12211652100086212,
2708
+ 0.19378550350666046,
2709
+ 0.10178236663341522,
2710
+ 0.5725259184837341,
2711
+ 0.29884573817253113,
2712
+ 0.3259911835193634,
2713
+ 0.0
2714
+ ]
2715
+ },
2716
+ "num_trajectories": 1003,
2717
+ "num_transitions": 325699,
2718
+ "proprio": {
2719
+ "max": [
2720
+ 0.0,
2721
+ 0.0,
2722
+ 0.0,
2723
+ 0.0,
2724
+ 0.0,
2725
+ 0.0,
2726
+ 0.0
2727
+ ],
2728
+ "mean": [
2729
+ 0.0,
2730
+ 0.0,
2731
+ 0.0,
2732
+ 0.0,
2733
+ 0.0,
2734
+ 0.0,
2735
+ 0.0
2736
+ ],
2737
+ "min": [
2738
+ 0.0,
2739
+ 0.0,
2740
+ 0.0,
2741
+ 0.0,
2742
+ 0.0,
2743
+ 0.0,
2744
+ 0.0
2745
+ ],
2746
+ "q01": [
2747
+ 0.0,
2748
+ 0.0,
2749
+ 0.0,
2750
+ 0.0,
2751
+ 0.0,
2752
+ 0.0,
2753
+ 0.0
2754
+ ],
2755
+ "q99": [
2756
+ 0.0,
2757
+ 0.0,
2758
+ 0.0,
2759
+ 0.0,
2760
+ 0.0,
2761
+ 0.0,
2762
+ 0.0
2763
+ ],
2764
+ "std": [
2765
+ 0.0,
2766
+ 0.0,
2767
+ 0.0,
2768
+ 0.0,
2769
+ 0.0,
2770
+ 0.0,
2771
+ 0.0
2772
+ ]
2773
+ }
2774
+ },
2775
+ "ucsd_kitchen_dataset_converted_externally_to_rlds": {
2776
+ "action": {
2777
+ "mask": [
2778
+ true,
2779
+ true,
2780
+ true,
2781
+ true,
2782
+ true,
2783
+ true,
2784
+ false
2785
+ ],
2786
+ "max": [
2787
+ 678.0,
2788
+ 400.0,
2789
+ 507.0,
2790
+ 180.00001525878906,
2791
+ 6.000013828277588,
2792
+ 116.99998474121094,
2793
+ 1.0
2794
+ ],
2795
+ "mean": [
2796
+ 410.37567138671875,
2797
+ 116.9518814086914,
2798
+ 192.35032653808594,
2799
+ -121.22441864013672,
2800
+ -33.84893035888672,
2801
+ 50.016136169433594,
2802
+ 0.741813600063324
2803
+ ],
2804
+ "min": [
2805
+ 172.0,
2806
+ -166.0,
2807
+ -99.99999237060547,
2808
+ -180.00001525878906,
2809
+ -89.0,
2810
+ -96.00010681152344,
2811
+ 0.0
2812
+ ],
2813
+ "q01": [
2814
+ 200.00001052856445,
2815
+ -102.31004211425781,
2816
+ -94.99993370056153,
2817
+ -180.00001525878906,
2818
+ -88.00001525878906,
2819
+ -38.999977111816406,
2820
+ 0.0
2821
+ ],
2822
+ "q99": [
2823
+ 637.0,
2824
+ 368.30999999999995,
2825
+ 493.0,
2826
+ 180.00001525878906,
2827
+ 0.999983012676239,
2828
+ 105.00001525878906,
2829
+ 1.0
2830
+ ],
2831
+ "std": [
2832
+ 122.81494903564453,
2833
+ 108.8009033203125,
2834
+ 130.303466796875,
2835
+ 116.28205108642578,
2836
+ 27.621843338012695,
2837
+ 41.02094650268555,
2838
+ 0.43763357400894165
2839
+ ]
2840
+ },
2841
+ "num_trajectories": 150,
2842
+ "num_transitions": 3970,
2843
+ "proprio": {
2844
+ "max": [
2845
+ 0.0,
2846
+ 0.0,
2847
+ 0.0,
2848
+ 0.0,
2849
+ 0.0,
2850
+ 0.0,
2851
+ 0.0
2852
+ ],
2853
+ "mean": [
2854
+ 0.0,
2855
+ 0.0,
2856
+ 0.0,
2857
+ 0.0,
2858
+ 0.0,
2859
+ 0.0,
2860
+ 0.0
2861
+ ],
2862
+ "min": [
2863
+ 0.0,
2864
+ 0.0,
2865
+ 0.0,
2866
+ 0.0,
2867
+ 0.0,
2868
+ 0.0,
2869
+ 0.0
2870
+ ],
2871
+ "q01": [
2872
+ 0.0,
2873
+ 0.0,
2874
+ 0.0,
2875
+ 0.0,
2876
+ 0.0,
2877
+ 0.0,
2878
+ 0.0
2879
+ ],
2880
+ "q99": [
2881
+ 0.0,
2882
+ 0.0,
2883
+ 0.0,
2884
+ 0.0,
2885
+ 0.0,
2886
+ 0.0,
2887
+ 0.0
2888
+ ],
2889
+ "std": [
2890
+ 0.0,
2891
+ 0.0,
2892
+ 0.0,
2893
+ 0.0,
2894
+ 0.0,
2895
+ 0.0,
2896
+ 0.0
2897
+ ]
2898
+ }
2899
+ },
2900
+ "utaustin_mutex": {
2901
+ "action": {
2902
+ "mask": [
2903
+ true,
2904
+ true,
2905
+ true,
2906
+ true,
2907
+ true,
2908
+ true,
2909
+ false
2910
+ ],
2911
+ "max": [
2912
+ 1.0,
2913
+ 1.0,
2914
+ 1.0,
2915
+ 0.375,
2916
+ 0.375,
2917
+ 0.375,
2918
+ 1.0
2919
+ ],
2920
+ "mean": [
2921
+ 0.06176406890153885,
2922
+ -0.005005486309528351,
2923
+ 0.10216785222291946,
2924
+ -0.03314131125807762,
2925
+ 0.013895004987716675,
2926
+ -0.011317633092403412,
2927
+ 0.5038976669311523
2928
+ ],
2929
+ "min": [
2930
+ -1.0,
2931
+ -1.0,
2932
+ -1.0,
2933
+ -0.375,
2934
+ -0.375,
2935
+ -0.375,
2936
+ 0.0
2937
+ ],
2938
+ "q01": [
2939
+ -0.4285714328289032,
2940
+ -0.9800000190734863,
2941
+ -0.5571428537368774,
2942
+ -0.375,
2943
+ -0.15642857551574707,
2944
+ -0.335357129573822,
2945
+ 0.0
2946
+ ],
2947
+ "q99": [
2948
+ 0.5914285778999329,
2949
+ 0.9714285731315613,
2950
+ 1.0,
2951
+ 0.3278571367263794,
2952
+ 0.207857146859169,
2953
+ 0.25607141852378845,
2954
+ 1.0
2955
+ ],
2956
+ "std": [
2957
+ 0.1875014752149582,
2958
+ 0.4468473494052887,
2959
+ 0.3792876601219177,
2960
+ 0.14097853004932404,
2961
+ 0.06453701853752136,
2962
+ 0.11765272170305252,
2963
+ 0.501045286655426
2964
+ ]
2965
+ },
2966
+ "num_trajectories": 1500,
2967
+ "num_transitions": 361883,
2968
+ "proprio": {
2969
+ "max": [
2970
+ 0.0,
2971
+ 0.0,
2972
+ 0.0,
2973
+ 0.0,
2974
+ 0.0,
2975
+ 0.0,
2976
+ 0.0
2977
+ ],
2978
+ "mean": [
2979
+ 0.0,
2980
+ 0.0,
2981
+ 0.0,
2982
+ 0.0,
2983
+ 0.0,
2984
+ 0.0,
2985
+ 0.0
2986
+ ],
2987
+ "min": [
2988
+ 0.0,
2989
+ 0.0,
2990
+ 0.0,
2991
+ 0.0,
2992
+ 0.0,
2993
+ 0.0,
2994
+ 0.0
2995
+ ],
2996
+ "q01": [
2997
+ 0.0,
2998
+ 0.0,
2999
+ 0.0,
3000
+ 0.0,
3001
+ 0.0,
3002
+ 0.0,
3003
+ 0.0
3004
+ ],
3005
+ "q99": [
3006
+ 0.0,
3007
+ 0.0,
3008
+ 0.0,
3009
+ 0.0,
3010
+ 0.0,
3011
+ 0.0,
3012
+ 0.0
3013
+ ],
3014
+ "std": [
3015
+ 0.0,
3016
+ 0.0,
3017
+ 0.0,
3018
+ 0.0,
3019
+ 0.0,
3020
+ 0.0,
3021
+ 0.0
3022
+ ]
3023
+ }
3024
+ },
3025
+ "viola": {
3026
+ "action": {
3027
+ "mask": [
3028
+ true,
3029
+ true,
3030
+ true,
3031
+ true,
3032
+ true,
3033
+ true,
3034
+ false
3035
+ ],
3036
+ "max": [
3037
+ 1.0,
3038
+ 1.0,
3039
+ 1.0,
3040
+ 0.375,
3041
+ 0.36321428418159485,
3042
+ 0.375,
3043
+ 1.0
3044
+ ],
3045
+ "mean": [
3046
+ 0.04761844128370285,
3047
+ -0.029204415157437325,
3048
+ 0.05586736649274826,
3049
+ -0.002618510741740465,
3050
+ 0.006867344491183758,
3051
+ -0.01682133786380291,
3052
+ 0.7323777675628662
3053
+ ],
3054
+ "min": [
3055
+ -1.0,
3056
+ -1.0,
3057
+ -1.0,
3058
+ -0.375,
3059
+ -0.375,
3060
+ -0.375,
3061
+ 0.0
3062
+ ],
3063
+ "q01": [
3064
+ -0.9628571271896362,
3065
+ -1.0,
3066
+ -1.0,
3067
+ -0.26249998807907104,
3068
+ -0.21321429312229156,
3069
+ -0.3385714292526245,
3070
+ 0.0
3071
+ ],
3072
+ "q99": [
3073
+ 0.9114285707473755,
3074
+ 0.868571400642395,
3075
+ 1.0,
3076
+ 0.2817857265472412,
3077
+ 0.2239285707473755,
3078
+ 0.3557142913341522,
3079
+ 1.0
3080
+ ],
3081
+ "std": [
3082
+ 0.39157867431640625,
3083
+ 0.4076525568962097,
3084
+ 0.40077948570251465,
3085
+ 0.10023996233940125,
3086
+ 0.0844319611787796,
3087
+ 0.10375042259693146,
3088
+ 0.44260647892951965
3089
+ ]
3090
+ },
3091
+ "num_trajectories": 150,
3092
+ "num_transitions": 76324,
3093
+ "proprio": {
3094
+ "max": [
3095
+ 0.0,
3096
+ 0.0,
3097
+ 0.0,
3098
+ 0.0,
3099
+ 0.0,
3100
+ 0.0,
3101
+ 0.0
3102
+ ],
3103
+ "mean": [
3104
+ 0.0,
3105
+ 0.0,
3106
+ 0.0,
3107
+ 0.0,
3108
+ 0.0,
3109
+ 0.0,
3110
+ 0.0
3111
+ ],
3112
+ "min": [
3113
+ 0.0,
3114
+ 0.0,
3115
+ 0.0,
3116
+ 0.0,
3117
+ 0.0,
3118
+ 0.0,
3119
+ 0.0
3120
+ ],
3121
+ "q01": [
3122
+ 0.0,
3123
+ 0.0,
3124
+ 0.0,
3125
+ 0.0,
3126
+ 0.0,
3127
+ 0.0,
3128
+ 0.0
3129
+ ],
3130
+ "q99": [
3131
+ 0.0,
3132
+ 0.0,
3133
+ 0.0,
3134
+ 0.0,
3135
+ 0.0,
3136
+ 0.0,
3137
+ 0.0
3138
+ ],
3139
+ "std": [
3140
+ 0.0,
3141
+ 0.0,
3142
+ 0.0,
3143
+ 0.0,
3144
+ 0.0,
3145
+ 0.0,
3146
+ 0.0
3147
+ ]
3148
+ }
3149
+ }
3150
+ },
3151
+ "num_action_chunks": 25,
3152
+ "output_projector_states": false,
3153
+ "pad_to_multiple_of": 64,
3154
+ "pad_token_id": 32000,
3155
+ "proprio_dim": 14,
3156
+ "text_config": {
3157
+ "model_type": "llama",
3158
+ "pad_token_id": 32000,
3159
+ "torch_dtype": "bfloat16",
3160
+ "vocab_size": 32064
3161
+ },
3162
+ "timm_model_ids": [
3163
+ "vit_large_patch14_reg4_dinov2.lvd142m",
3164
+ "vit_so400m_patch14_siglip_224"
3165
+ ],
3166
+ "timm_override_act_layers": [
3167
+ null,
3168
+ null
3169
+ ],
3170
+ "torch_dtype": "bfloat16",
3171
+ "transformers_version": "4.40.1",
3172
+ "unnorm_key": "move_can_pot_1k",
3173
+ "use_film": false,
3174
+ "use_fused_vision_backbone": true,
3175
+ "use_proprio": true,
3176
+ "value_type": "action_level",
3177
+ "vision_backbone_id": "dinosiglip-vit-so-224px"
3178
+ }
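
A brief note on how the trailing config fields fit together: `unnorm_key` names the dataset whose statistics are used to map predicted actions back to raw units, `use_proprio`/`proprio_dim` enable the 14-dimensional proprioceptive input, and `vision_backbone_id` selects the fused DINOv2 + SigLIP featurizer listed under `timm_model_ids`. A minimal sketch of inspecting those fields from the uploaded `config.json` (the local path is an assumption):

```python
import json

# Illustrative only: read the fields that control action de-normalization and
# the vision/proprio setup; "config.json" is assumed to be in the working directory.
with open("config.json") as f:
    cfg = json.load(f)

print(cfg["unnorm_key"])                       # "move_can_pot_1k"
print(cfg["use_proprio"], cfg["proprio_dim"])  # True, 14
print(cfg["num_action_chunks"])                # 25
print(cfg["vision_backbone_id"])               # "dinosiglip-vit-so-224px"
```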
configuration_prismatic.py ADDED
@@ -0,0 +1,140 @@
1
+ """
2
+ configuration_prismatic.py
3
+
4
+ HuggingFace-style configuration definition for Prismatic VLMs, inheriting from `transformers.PretrainedConfig`.
5
+ Default configuration specifies `siglip-224px+7b`.
6
+ """
7
+
8
+ from typing import Any, Dict, List, Optional
9
+
10
+ from transformers import PretrainedConfig
11
+ from transformers.models.auto import CONFIG_MAPPING
12
+
13
+ # === Utilities for Mapping Prismatic names to HF names ===
14
+ # fmt: off
15
+ VISION_BACKBONE_TO_RESOLUTION: Dict[str, List[int]] = {
16
+ "clip-vit-l": [224], "siglip-vit-so400m": [224], "dinov2-vit-l": [224], "in1k-vit-l": [224],
17
+
18
+ "clip-vit-l-336px": [336],
19
+ "siglip-vit-so400m-384px": [384],
20
+
21
+ "dinoclip-vit-l-336px": [336, 336],
22
+ "dinosiglip-vit-so-224px": [224, 224],
23
+ "dinosiglip-vit-so-384px": [384, 384],
24
+ }
25
+ VISION_BACKBONE_TO_TIMM_ID: Dict[str, List[str]] = {
26
+ "clip-vit-l": ["vit_large_patch14_clip_224.openai"],
27
+ "clip-vit-l-336px": ["vit_large_patch14_clip_336.openai"],
28
+
29
+ "dinov2-vit-l": ["vit_large_patch14_reg4_dinov2.lvd142m"],
30
+ "in1k-vit-l": ["vit_large_patch16_224.augreg_in21k_ft_in1k"],
31
+
32
+ "siglip-vit-so400m": ["vit_so400m_patch14_siglip_224"],
33
+ "siglip-vit-so400m-384px": ["vit_so400m_patch14_siglip_384"],
34
+
35
+ "dinoclip-vit-l-336px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_large_patch14_clip_336.openai"],
36
+ "dinosiglip-vit-so-224px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_224"],
37
+ "dinosiglip-vit-so-384px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_384"],
38
+ }
39
+ TIMM_OVERRIDE_ACT_LAYER: Dict[str, List[Optional[str]]] = {
40
+ "clip-vit-l": ["quick_gelu"], "clip-vit-l-336px": ["quick_gelu"],
41
+ "dinov2-vit-l": [None], "in1k-vit-l": [None],
42
+ "siglip-vit-so400m": [None], "siglip-vit-so400m-384px": [None],
43
+ "dinoclip-vit-l-336px": [None, "quick_gelu"],
44
+ "dinosiglip-vit-so-224px": [None, None], "dinosiglip-vit-so-384px": [None, None]
45
+ }
46
+
47
+ LLM_BACKBONE_TO_HF_PATH = {
48
+ "llama2-7b-pure": "meta-llama/Llama-2-7b-hf", "llama2-13b-pure": "meta-llama/Llama-2-13b-hf",
49
+ "llama2-7b-chat": "meta-llama/Llama-2-7b-chat-hf", "llama2-13b-chat": "meta-llama/Llama-2-13b-chat-hf",
50
+
51
+ "vicuna-v15-7b": "lmsys/vicuna-7b-v1.5", "vicuna-v15-13b": "lmsys/vicuna-13b-v1.5",
52
+
53
+ "mistral-v0.1-7b-pure": "mistralai/Mistral-7B-v0.1",
54
+ "mistral-v0.1-7b-instruct": "mistralai/Mistral-7B-Instruct-v0.1",
55
+
56
+ "phi-2-3b": "microsoft/phi-2",
57
+ }
58
+ LLM_BACKBONE_TO_HF_METACLASS = {
59
+ "llama2-7b-pure": "llama", "llama2-13b-pure": "llama", "llama2-7b-chat": "llama", "llama2-13b-chat": "llama",
60
+ "vicuna-v15-7b": "llama", "vicuna-v15-13b": "llama",
61
+
62
+ "mistral-v0.1-7b-pure": "mistral", "mistral-v0.1-7b-instruct": "mistral",
63
+
64
+ "phi-2-3b": "phi",
65
+ }
66
+
67
+ VALID_VISION_BACKBONES = set(VISION_BACKBONE_TO_RESOLUTION.keys())
68
+ VALID_LLM_BACKBONES = set(LLM_BACKBONE_TO_HF_PATH)
69
+ # fmt: on
70
+
71
+
72
+ class PrismaticConfig(PretrainedConfig):
73
+ model_type: str = "prismatic"
74
+ is_composition: bool = False
75
+
76
+ def __init__(
77
+ self,
78
+ vision_backbone_id: str = "siglip-vit-so400m",
79
+ llm_backbone_id: str = "vicuna-v15-7b",
80
+ arch_specifier: str = "no-align+gelu-mlp",
81
+ use_fused_vision_backbone: Optional[bool] = None,
82
+ image_resize_strategy: str = "letterbox",
83
+ text_config: Optional[Dict[str, Any]] = None,
84
+ llm_max_length: int = 2048,
85
+ pad_token_id: int = 32000,
86
+ pad_to_multiple_of: int = 64,
87
+ output_projector_states: bool = False,
88
+ **kwargs: str,
89
+ ) -> None:
90
+ if vision_backbone_id not in VALID_VISION_BACKBONES:
91
+ raise ValueError(f"Vision backbone `{vision_backbone_id}` not in {VALID_VISION_BACKBONES = }")
92
+
93
+ if llm_backbone_id not in VALID_LLM_BACKBONES:
94
+ raise ValueError(f"LLM backbone `{llm_backbone_id}` not in {VALID_LLM_BACKBONES = }")
95
+
96
+ # Set Prismatic Configuration Fields
97
+ self.vision_backbone_id = vision_backbone_id
98
+ self.llm_backbone_id = llm_backbone_id
99
+ self.arch_specifier = arch_specifier
100
+ self.output_projector_states = output_projector_states
101
+
102
+ # [Contract] All vision backbone parameters are lists =>> supports fused backbones with different preprocessing
103
+ self.use_fused_vision_backbone = (
104
+ use_fused_vision_backbone
105
+ if use_fused_vision_backbone is not None
106
+ else any(self.vision_backbone_id.startswith(v) for v in ["dinoclip", "dinosiglip"])
107
+ )
108
+
109
+ self.timm_model_ids = VISION_BACKBONE_TO_TIMM_ID[self.vision_backbone_id]
110
+ self.timm_override_act_layers = TIMM_OVERRIDE_ACT_LAYER[self.vision_backbone_id]
111
+ self.image_sizes = VISION_BACKBONE_TO_RESOLUTION[self.vision_backbone_id]
112
+ self.image_resize_strategy = image_resize_strategy
113
+
114
+ self.hf_llm_id = LLM_BACKBONE_TO_HF_PATH[self.llm_backbone_id]
115
+ self.llm_max_length = llm_max_length
116
+ self.pad_token_id, self.pad_to_multiple_of = pad_token_id, pad_to_multiple_of
117
+
118
+ # [IMPORTANT] HF Utilities actually look for a `text_config` field... we need to use that specific naming!
119
+ self.text_config = (
120
+ CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]](**text_config)
121
+ if text_config is not None
122
+ else CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]]()
123
+ )
124
+
125
+ # Dispatch **kwargs to super() =>> note that `pad_token_id` collides, so we pass it in here as well...
126
+ super().__init__(pad_token_id=pad_token_id, **kwargs)
127
+
128
+
129
+ class OpenVLAConfig(PrismaticConfig):
130
+ model_type: str = "openvla"
131
+
132
+ def __init__(
133
+ self,
134
+ norm_stats: Optional[Dict[str, Dict[str, Dict[str, Dict[str, List[float]]]]]] = None,
135
+ n_action_bins: int = 256,
136
+ **kwargs: str,
137
+ ) -> None:
138
+ self.norm_stats, self.n_action_bins = norm_stats, n_action_bins
139
+
140
+ super().__init__(**kwargs)
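
For orientation, a minimal sketch of instantiating the `OpenVLAConfig` defined above. The vision backbone identifier mirrors the value visible in `config.json`; `llm_backbone_id="llama2-7b-pure"` is an assumption (the config only records `model_type: "llama"`), and the snippet is illustrative rather than the checkpoint's own loading path:

```python
# Hypothetical usage of the class above; values are illustrative except where noted.
from configuration_prismatic import OpenVLAConfig

config = OpenVLAConfig(
    vision_backbone_id="dinosiglip-vit-so-224px",  # matches config.json
    llm_backbone_id="llama2-7b-pure",              # assumption: any key of LLM_BACKBONE_TO_HF_PATH
    pad_token_id=32000,
    pad_to_multiple_of=64,
    n_action_bins=256,
)

print(config.timm_model_ids)             # ['vit_large_patch14_reg4_dinov2.lvd142m', 'vit_so400m_patch14_siglip_224']
print(config.hf_llm_id)                  # 'meta-llama/Llama-2-7b-hf'
print(config.use_fused_vision_backbone)  # True, inferred from the "dinosiglip" prefix
```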
dataset_statistics.json ADDED
@@ -0,0 +1,218 @@
1
+ {
2
+ "move_can_pot_1k": {
3
+ "action": {
4
+ "mean": [
5
+ 0.012134070508182049,
6
+ 1.0961335897445679,
7
+ -0.8084463477134705,
8
+ 0.16331210732460022,
9
+ 0.12816421687602997,
10
+ -0.014115814119577408,
11
+ 0.731939971446991,
12
+ 0.010422518476843834,
13
+ 0.9426739811897278,
14
+ -0.7479446530342102,
15
+ -0.05347057059407234,
16
+ 0.2528546452522278,
17
+ -0.03410856053233147,
18
+ 0.7678626179695129
19
+ ],
20
+ "std": [
21
+ 0.15372420847415924,
22
+ 1.1200835704803467,
23
+ 0.8752743005752563,
24
+ 0.4640316367149353,
25
+ 0.7044339776039124,
26
+ 0.40339821577072144,
27
+ 0.41715264320373535,
28
+ 0.14047163724899292,
29
+ 1.1020301580429077,
30
+ 0.9079903960227966,
31
+ 0.3735364079475403,
32
+ 0.598988950252533,
33
+ 0.39881035685539246,
34
+ 0.3987906575202942
35
+ ],
36
+ "max": [
37
+ 0.4909421503543854,
38
+ 2.809541702270508,
39
+ 2.9999999242136255e-05,
40
+ 1.7197164297103882,
41
+ 1.2200000286102295,
42
+ 2.3720834255218506,
43
+ 1.0,
44
+ 0.6039340496063232,
45
+ 2.7965636253356934,
46
+ 2.9999999242136255e-05,
47
+ 1.6982948780059814,
48
+ 1.2200000286102295,
49
+ 1.1544662714004517,
50
+ 1.0
51
+ ],
52
+ "min": [
53
+ -0.5411472916603088,
54
+ 0.0,
55
+ -2.655247926712036,
56
+ -1.7064800262451172,
57
+ -1.2200000286102295,
58
+ -1.1924999952316284,
59
+ 0.0,
60
+ -0.48658713698387146,
61
+ 0.0,
62
+ -2.649240255355835,
63
+ -1.8036140203475952,
64
+ -1.2200000286102295,
65
+ -2.4220423698425293,
66
+ 0.0
67
+ ],
68
+ "q01": [
69
+ -0.33686161041259766,
70
+ 0.0,
71
+ -2.488318920135498,
72
+ -0.41574164628982546,
73
+ -1.2200000286102295,
74
+ -0.8976910710334778,
75
+ 0.0,
76
+ -0.3672727942466736,
77
+ 0.0,
78
+ -2.515756130218506,
79
+ -1.515120029449463,
80
+ -1.2101104259490967,
81
+ -1.0549092292785645,
82
+ 0.0
83
+ ],
84
+ "q99": [
85
+ 0.38805791854858396,
86
+ 2.6831905841827393,
87
+ 0.0,
88
+ 1.489553325176239,
89
+ 1.2151449918746948,
90
+ 0.9757098281383514,
91
+ 1.0,
92
+ 0.34161555767059326,
93
+ 2.6857664585113525,
94
+ 0.0,
95
+ 0.5955219864845276,
96
+ 1.219497755765915,
97
+ 0.9141867160797119,
98
+ 1.0
99
+ ],
100
+ "mask": [
101
+ true,
102
+ true,
103
+ true,
104
+ true,
105
+ true,
106
+ true,
107
+ true,
108
+ true,
109
+ true,
110
+ true,
111
+ true,
112
+ true,
113
+ true,
114
+ true
115
+ ]
116
+ },
117
+ "proprio": {
118
+ "mean": [
119
+ 0.012134070508182049,
120
+ 1.0961335897445679,
121
+ -0.8084463477134705,
122
+ 0.16331210732460022,
123
+ 0.12816421687602997,
124
+ -0.014115814119577408,
125
+ 0.731939971446991,
126
+ 0.010422518476843834,
127
+ 0.9426739811897278,
128
+ -0.7479446530342102,
129
+ -0.05347057059407234,
130
+ 0.2528546452522278,
131
+ -0.03410856053233147,
132
+ 0.7678626179695129
133
+ ],
134
+ "std": [
135
+ 0.15372420847415924,
136
+ 1.1200835704803467,
137
+ 0.8752743005752563,
138
+ 0.4640316367149353,
139
+ 0.7044339776039124,
140
+ 0.40339821577072144,
141
+ 0.41715264320373535,
142
+ 0.14047163724899292,
143
+ 1.1020301580429077,
144
+ 0.9079903960227966,
145
+ 0.3735364079475403,
146
+ 0.598988950252533,
147
+ 0.39881035685539246,
148
+ 0.3987906575202942
149
+ ],
150
+ "max": [
151
+ 0.4909421503543854,
152
+ 2.809541702270508,
153
+ 2.9999999242136255e-05,
154
+ 1.7197164297103882,
155
+ 1.2200000286102295,
156
+ 2.3720834255218506,
157
+ 1.0,
158
+ 0.6039340496063232,
159
+ 2.7965636253356934,
160
+ 2.9999999242136255e-05,
161
+ 1.6982948780059814,
162
+ 1.2200000286102295,
163
+ 1.1544662714004517,
164
+ 1.0
165
+ ],
166
+ "min": [
167
+ -0.5411472916603088,
168
+ 0.0,
169
+ -2.655247926712036,
170
+ -1.7064800262451172,
171
+ -1.2200000286102295,
172
+ -1.1924999952316284,
173
+ 0.0,
174
+ -0.48658713698387146,
175
+ 0.0,
176
+ -2.649240255355835,
177
+ -1.8036140203475952,
178
+ -1.2200000286102295,
179
+ -2.4220423698425293,
180
+ 0.0
181
+ ],
182
+ "q01": [
183
+ -0.33686161041259766,
184
+ 0.0,
185
+ -2.488318920135498,
186
+ -0.41574164628982546,
187
+ -1.2200000286102295,
188
+ -0.8976910710334778,
189
+ 0.0,
190
+ -0.3672727942466736,
191
+ 0.0,
192
+ -2.515756130218506,
193
+ -1.515120029449463,
194
+ -1.2101104259490967,
195
+ -1.0549092292785645,
196
+ 0.0
197
+ ],
198
+ "q99": [
199
+ 0.38805791854858396,
200
+ 2.6831905841827393,
201
+ 0.0,
202
+ 1.489553325176239,
203
+ 1.2151449918746948,
204
+ 0.9757098281383514,
205
+ 1.0,
206
+ 0.34161555767059326,
207
+ 2.6857664585113525,
208
+ 0.0,
209
+ 0.5955219864845276,
210
+ 1.219497755765915,
211
+ 0.9141867160797119,
212
+ 1.0
213
+ ]
214
+ },
215
+ "num_transitions": 106312,
216
+ "num_trajectories": 1000
217
+ }
218
+ }
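
The `q01`/`q99` bounds above are the kind of statistics typically used to rescale actions between a normalized output range and raw units. The sketch below shows the common bounds-based un-normalization convention; it is an assumption about how these particular statistics are consumed, not code shipped with this checkpoint:

```python
import json
import numpy as np

def unnormalize_actions(normalized, stats):
    """Map actions in [-1, 1] back to raw units using the q01/q99 bounds.

    Dimensions whose `mask` entry is false (when present) are passed through
    unchanged, as is common for binary gripper channels.
    """
    q01, q99 = np.asarray(stats["q01"]), np.asarray(stats["q99"])
    mask = np.asarray(stats.get("mask", np.ones_like(q01, dtype=bool)))
    raw = 0.5 * (np.asarray(normalized) + 1.0) * (q99 - q01) + q01
    return np.where(mask, raw, normalized)

# Example: pull the statistics named by config.json's `unnorm_key`.
with open("dataset_statistics.json") as f:
    stats = json.load(f)["move_can_pot_1k"]["action"]
print(unnormalize_actions(np.zeros(14), stats))  # midpoint of each [q01, q99] interval
```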
generation_config.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": 2,
5
+ "pad_token_id": 32000,
6
+ "transformers_version": "4.40.1"
7
+ }
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e2ba04d5d50cf145a530edfa2e0fc55034774ba845853a226898a10552291b22
3
+ size 4925122448
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6cdc5060374674aeb0c7c544b2e22a52be70e553ca0079cdc5d0879e004066f8
3
+ size 4947392496
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9af1472cc31b7f91be3dd0e8fe7398604766c63a481b51ce94c112787ab9bbb5
3
+ size 4947417456
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b743177f894e365f9b859bb8f7cdbdf4496e96f25c6a64216eae00b32267ad98
3
+ size 296354336
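
The four `.safetensors` entries above are Git LFS pointer files recording each shard's SHA-256 and byte size. A small sketch of verifying a downloaded shard against those values (it assumes the shard sits in the current directory):

```python
import hashlib
import os

# Expected values copied from the LFS pointer for the last shard above.
NAME = "model-00004-of-00004.safetensors"
OID = "b743177f894e365f9b859bb8f7cdbdf4496e96f25c6a64216eae00b32267ad98"
SIZE = 296354336

assert os.path.getsize(NAME) == SIZE, "unexpected file size"

sha256 = hashlib.sha256()
with open(NAME, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)
assert sha256.hexdigest() == OID, "checksum mismatch"
print("shard verified")
```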
model.safetensors.index.json ADDED
@@ -0,0 +1,993 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 15116159872
4
+ },
5
+ "weight_map": {
6
+ "language_model.lm_head.weight": "model-00004-of-00004.safetensors",
7
+ "language_model.model.embed_tokens.weight": "model-00001-of-00004.safetensors",
8
+ "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
9
+ "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
10
+ "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
11
+ "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
12
+ "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
13
+ "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
14
+ "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
15
+ "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
16
+ "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
17
+ "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
18
+ "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
19
+ "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
20
+ "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
21
+ "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
22
+ "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
23
+ "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
24
+ "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
25
+ "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
26
+ "language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
27
+ "language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
28
+ "language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
29
+ "language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
30
+ "language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
31
+ "language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
32
+ "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
33
+ "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
34
+ "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
35
+ "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
36
+ "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
37
+ "language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
38
+ "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
39
+ "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
40
+ "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
41
+ "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
42
+ "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
43
+ "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
44
+ "language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
45
+ "language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
46
+ "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
47
+ "language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
48
+ "language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
49
+ "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
50
+ "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
51
+ "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
52
+ "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
53
+ "language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
54
+ "language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
55
+ "language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
56
+ "language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
57
+ "language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
58
+ "language_model.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
59
+ "language_model.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
60
+ "language_model.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
61
+ "language_model.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
62
+ "language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
63
+ "language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
64
+ "language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
65
+ "language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
66
+ "language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
67
+ "language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
68
+ "language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
69
+ "language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
70
+ "language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
71
+ "language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
72
+ "language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
73
+ "language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
74
+ "language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
75
+ "language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
76
+ "language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
77
+ "language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
78
+ "language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
79
+ "language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
80
+ "language_model.model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
81
+ "language_model.model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
82
+ "language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
83
+ "language_model.model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
84
+ "language_model.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
85
+ "language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
86
+ "language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
87
+ "language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
88
+ "language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
89
+ "language_model.model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
90
+ "language_model.model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
91
+ "language_model.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
92
+ "language_model.model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
93
+ "language_model.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
94
+ "language_model.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
95
+ "language_model.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
96
+ "language_model.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
97
+ "language_model.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
98
+ "language_model.model.layers.18.input_layernorm.weight": "model-00002-of-00004.safetensors",
99
+ "language_model.model.layers.18.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
100
+ "language_model.model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
101
+ "language_model.model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
102
+ "language_model.model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
103
+ "language_model.model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
104
+ "language_model.model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
105
+ "language_model.model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
106
+ "language_model.model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
107
+ "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
108
+ "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
109
+ "language_model.model.layers.19.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
110
+ "language_model.model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
111
+ "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
112
+ "language_model.model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
113
+ "language_model.model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
114
+ "language_model.model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
115
+ "language_model.model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
116
+ "language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
117
+ "language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
118
+ "language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
119
+ "language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
120
+ "language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
121
+ "language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
122
+ "language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
123
+ "language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
124
+ "language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
125
+ "language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
126
+ "language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
127
+ "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
128
+ "language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
129
+ "language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
130
+ "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
131
+ "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
132
+ "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
133
+ "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
134
+ "language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
135
+ "language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
136
+ "language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
137
+ "language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
138
+ "language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
139
+ "language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
140
+ "language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
141
+ "language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
142
+ "language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
143
+ "language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
144
+ "language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
145
+ "language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
146
+ "language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
147
+ "language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
148
+ "language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
149
+ "language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
150
+ "language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
151
+ "language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
152
+ "language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
153
+ "language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
154
+ "language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
155
+ "language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
156
+ "language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
157
+ "language_model.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
158
+ "language_model.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
159
+ "language_model.model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
160
+ "language_model.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
161
+ "language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
162
+ "language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
163
+ "language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
164
+ "language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
165
+ "language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
166
+ "language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
167
+ "language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
168
+ "language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
169
+ "language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
170
+ "language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
171
+ "language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
172
+ "language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
173
+ "language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
174
+ "language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
175
+ "language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
176
+ "language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
177
+ "language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
178
+ "language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
179
+ "language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
180
+ "language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
181
+ "language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
182
+ "language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
183
+ "language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
184
+ "language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
185
+ "language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
186
+ "language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
187
+ "language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
188
+ "language_model.model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
189
+ "language_model.model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
190
+ "language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
191
+ "language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
192
+ "language_model.model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
193
+ "language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
194
+ "language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
195
+ "language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
196
+ "language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
197
+ "language_model.model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
198
+ "language_model.model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
199
+ "language_model.model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
200
+ "language_model.model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
201
+ "language_model.model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
202
+ "language_model.model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
203
+ "language_model.model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
204
+ "language_model.model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
205
+ "language_model.model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
206
+ "language_model.model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
207
+ "language_model.model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
208
+ "language_model.model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
209
+ "language_model.model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
210
+ "language_model.model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
211
+ "language_model.model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
212
+ "language_model.model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
213
+ "language_model.model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
214
+ "language_model.model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
215
+ "language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
216
+ "language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
217
+ "language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
218
+ "language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
219
+ "language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
220
+ "language_model.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
221
+ "language_model.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
222
+ "language_model.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
223
+ "language_model.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
224
+ "language_model.model.layers.30.input_layernorm.weight": "model-00003-of-00004.safetensors",
225
+ "language_model.model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
226
+ "language_model.model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
227
+ "language_model.model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
228
+ "language_model.model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
229
+ "language_model.model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
230
+ "language_model.model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
231
+ "language_model.model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
232
+ "language_model.model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
233
+ "language_model.model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors",
234
+ "language_model.model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
235
+ "language_model.model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
236
+ "language_model.model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
237
+ "language_model.model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
238
+ "language_model.model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
239
+ "language_model.model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
240
+ "language_model.model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
241
+ "language_model.model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
242
+ "language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
243
+ "language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
244
+ "language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
245
+ "language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
246
+ "language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
247
+ "language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
248
+ "language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
249
+ "language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
250
+ "language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
251
+ "language_model.model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
252
+ "language_model.model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
253
+ "language_model.model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
254
+ "language_model.model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
255
+ "language_model.model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
256
+ "language_model.model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
257
+ "language_model.model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
258
+ "language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
259
+ "language_model.model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
260
+ "language_model.model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
261
+ "language_model.model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
262
+ "language_model.model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
263
+ "language_model.model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
264
+ "language_model.model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
265
+ "language_model.model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
266
+ "language_model.model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
267
+ "language_model.model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
268
+ "language_model.model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
269
+ "language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
270
+ "language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
271
+ "language_model.model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
272
+ "language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
273
+ "language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
274
+ "language_model.model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
275
+ "language_model.model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
276
+ "language_model.model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
277
+ "language_model.model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
278
+ "language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
279
+ "language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
280
+ "language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
281
+ "language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
282
+ "language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
283
+ "language_model.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
284
+ "language_model.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
285
+ "language_model.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
286
+ "language_model.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
287
+ "language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
288
+ "language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
289
+ "language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
290
+ "language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
291
+ "language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
292
+ "language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
293
+ "language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
294
+ "language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
295
+ "language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
296
+ "language_model.model.norm.weight": "model-00003-of-00004.safetensors",
297
+ "projector.fc1.bias": "model-00001-of-00004.safetensors",
298
+ "projector.fc1.weight": "model-00001-of-00004.safetensors",
299
+ "projector.fc2.bias": "model-00001-of-00004.safetensors",
300
+ "projector.fc2.weight": "model-00001-of-00004.safetensors",
301
+ "projector.fc3.bias": "model-00001-of-00004.safetensors",
302
+ "projector.fc3.weight": "model-00001-of-00004.safetensors",
303
+ "proprio_projector.fc1.bias": "model-00004-of-00004.safetensors",
304
+ "proprio_projector.fc1.weight": "model-00004-of-00004.safetensors",
305
+ "proprio_projector.fc2.bias": "model-00004-of-00004.safetensors",
306
+ "proprio_projector.fc2.weight": "model-00004-of-00004.safetensors",
307
+ "vision_backbone.featurizer.blocks.0.attn.proj.bias": "model-00001-of-00004.safetensors",
308
+ "vision_backbone.featurizer.blocks.0.attn.proj.weight": "model-00001-of-00004.safetensors",
309
+ "vision_backbone.featurizer.blocks.0.attn.qkv.bias": "model-00001-of-00004.safetensors",
310
+ "vision_backbone.featurizer.blocks.0.attn.qkv.weight": "model-00001-of-00004.safetensors",
311
+ "vision_backbone.featurizer.blocks.0.ls1.scale_factor": "model-00001-of-00004.safetensors",
312
+ "vision_backbone.featurizer.blocks.0.ls2.scale_factor": "model-00001-of-00004.safetensors",
313
+ "vision_backbone.featurizer.blocks.0.mlp.fc1.bias": "model-00001-of-00004.safetensors",
314
+ "vision_backbone.featurizer.blocks.0.mlp.fc1.weight": "model-00001-of-00004.safetensors",
315
+ "vision_backbone.featurizer.blocks.0.mlp.fc2.bias": "model-00001-of-00004.safetensors",
316
+ "vision_backbone.featurizer.blocks.0.mlp.fc2.weight": "model-00001-of-00004.safetensors",
317
+ "vision_backbone.featurizer.blocks.0.norm1.bias": "model-00001-of-00004.safetensors",
318
+ "vision_backbone.featurizer.blocks.0.norm1.weight": "model-00001-of-00004.safetensors",
319
+ "vision_backbone.featurizer.blocks.0.norm2.bias": "model-00001-of-00004.safetensors",
320
+ "vision_backbone.featurizer.blocks.0.norm2.weight": "model-00001-of-00004.safetensors",
321
+ "vision_backbone.featurizer.blocks.1.attn.proj.bias": "model-00001-of-00004.safetensors",
322
+ "vision_backbone.featurizer.blocks.1.attn.proj.weight": "model-00001-of-00004.safetensors",
323
+ "vision_backbone.featurizer.blocks.1.attn.qkv.bias": "model-00001-of-00004.safetensors",
324
+ "vision_backbone.featurizer.blocks.1.attn.qkv.weight": "model-00001-of-00004.safetensors",
325
+ "vision_backbone.featurizer.blocks.1.ls1.scale_factor": "model-00001-of-00004.safetensors",
326
+ "vision_backbone.featurizer.blocks.1.ls2.scale_factor": "model-00001-of-00004.safetensors",
327
+ "vision_backbone.featurizer.blocks.1.mlp.fc1.bias": "model-00001-of-00004.safetensors",
328
+ "vision_backbone.featurizer.blocks.1.mlp.fc1.weight": "model-00001-of-00004.safetensors",
329
+ "vision_backbone.featurizer.blocks.1.mlp.fc2.bias": "model-00001-of-00004.safetensors",
330
+ "vision_backbone.featurizer.blocks.1.mlp.fc2.weight": "model-00001-of-00004.safetensors",
331
+ "vision_backbone.featurizer.blocks.1.norm1.bias": "model-00001-of-00004.safetensors",
332
+ "vision_backbone.featurizer.blocks.1.norm1.weight": "model-00001-of-00004.safetensors",
333
+ "vision_backbone.featurizer.blocks.1.norm2.bias": "model-00001-of-00004.safetensors",
334
+ "vision_backbone.featurizer.blocks.1.norm2.weight": "model-00001-of-00004.safetensors",
335
+ "vision_backbone.featurizer.blocks.10.attn.proj.bias": "model-00001-of-00004.safetensors",
336
+ "vision_backbone.featurizer.blocks.10.attn.proj.weight": "model-00001-of-00004.safetensors",
337
+ "vision_backbone.featurizer.blocks.10.attn.qkv.bias": "model-00001-of-00004.safetensors",
338
+ "vision_backbone.featurizer.blocks.10.attn.qkv.weight": "model-00001-of-00004.safetensors",
339
+ "vision_backbone.featurizer.blocks.10.ls1.scale_factor": "model-00001-of-00004.safetensors",
340
+ "vision_backbone.featurizer.blocks.10.ls2.scale_factor": "model-00001-of-00004.safetensors",
341
+ "vision_backbone.featurizer.blocks.10.mlp.fc1.bias": "model-00001-of-00004.safetensors",
342
+ "vision_backbone.featurizer.blocks.10.mlp.fc1.weight": "model-00001-of-00004.safetensors",
343
+ "vision_backbone.featurizer.blocks.10.mlp.fc2.bias": "model-00001-of-00004.safetensors",
344
+ "vision_backbone.featurizer.blocks.10.mlp.fc2.weight": "model-00001-of-00004.safetensors",
345
+ "vision_backbone.featurizer.blocks.10.norm1.bias": "model-00001-of-00004.safetensors",
346
+ "vision_backbone.featurizer.blocks.10.norm1.weight": "model-00001-of-00004.safetensors",
347
+ "vision_backbone.featurizer.blocks.10.norm2.bias": "model-00001-of-00004.safetensors",
348
+ "vision_backbone.featurizer.blocks.10.norm2.weight": "model-00001-of-00004.safetensors",
349
+ "vision_backbone.featurizer.blocks.11.attn.proj.bias": "model-00001-of-00004.safetensors",
350
+ "vision_backbone.featurizer.blocks.11.attn.proj.weight": "model-00001-of-00004.safetensors",
351
+ "vision_backbone.featurizer.blocks.11.attn.qkv.bias": "model-00001-of-00004.safetensors",
352
+ "vision_backbone.featurizer.blocks.11.attn.qkv.weight": "model-00001-of-00004.safetensors",
353
+ "vision_backbone.featurizer.blocks.11.ls1.scale_factor": "model-00001-of-00004.safetensors",
354
+ "vision_backbone.featurizer.blocks.11.ls2.scale_factor": "model-00001-of-00004.safetensors",
355
+ "vision_backbone.featurizer.blocks.11.mlp.fc1.bias": "model-00001-of-00004.safetensors",
356
+ "vision_backbone.featurizer.blocks.11.mlp.fc1.weight": "model-00001-of-00004.safetensors",
357
+ "vision_backbone.featurizer.blocks.11.mlp.fc2.bias": "model-00001-of-00004.safetensors",
358
+ "vision_backbone.featurizer.blocks.11.mlp.fc2.weight": "model-00001-of-00004.safetensors",
359
+ "vision_backbone.featurizer.blocks.11.norm1.bias": "model-00001-of-00004.safetensors",
360
+ "vision_backbone.featurizer.blocks.11.norm1.weight": "model-00001-of-00004.safetensors",
361
+ "vision_backbone.featurizer.blocks.11.norm2.bias": "model-00001-of-00004.safetensors",
362
+ "vision_backbone.featurizer.blocks.11.norm2.weight": "model-00001-of-00004.safetensors",
363
+ "vision_backbone.featurizer.blocks.12.attn.proj.bias": "model-00001-of-00004.safetensors",
364
+ "vision_backbone.featurizer.blocks.12.attn.proj.weight": "model-00001-of-00004.safetensors",
365
+ "vision_backbone.featurizer.blocks.12.attn.qkv.bias": "model-00001-of-00004.safetensors",
366
+ "vision_backbone.featurizer.blocks.12.attn.qkv.weight": "model-00001-of-00004.safetensors",
367
+ "vision_backbone.featurizer.blocks.12.ls1.scale_factor": "model-00001-of-00004.safetensors",
368
+ "vision_backbone.featurizer.blocks.12.ls2.scale_factor": "model-00001-of-00004.safetensors",
369
+ "vision_backbone.featurizer.blocks.12.mlp.fc1.bias": "model-00001-of-00004.safetensors",
370
+ "vision_backbone.featurizer.blocks.12.mlp.fc1.weight": "model-00001-of-00004.safetensors",
371
+ "vision_backbone.featurizer.blocks.12.mlp.fc2.bias": "model-00001-of-00004.safetensors",
372
+ "vision_backbone.featurizer.blocks.12.mlp.fc2.weight": "model-00001-of-00004.safetensors",
373
+ "vision_backbone.featurizer.blocks.12.norm1.bias": "model-00001-of-00004.safetensors",
374
+ "vision_backbone.featurizer.blocks.12.norm1.weight": "model-00001-of-00004.safetensors",
375
+ "vision_backbone.featurizer.blocks.12.norm2.bias": "model-00001-of-00004.safetensors",
376
+ "vision_backbone.featurizer.blocks.12.norm2.weight": "model-00001-of-00004.safetensors",
377
+ "vision_backbone.featurizer.blocks.13.attn.proj.bias": "model-00001-of-00004.safetensors",
378
+ "vision_backbone.featurizer.blocks.13.attn.proj.weight": "model-00001-of-00004.safetensors",
379
+ "vision_backbone.featurizer.blocks.13.attn.qkv.bias": "model-00001-of-00004.safetensors",
380
+ "vision_backbone.featurizer.blocks.13.attn.qkv.weight": "model-00001-of-00004.safetensors",
381
+ "vision_backbone.featurizer.blocks.13.ls1.scale_factor": "model-00001-of-00004.safetensors",
382
+ "vision_backbone.featurizer.blocks.13.ls2.scale_factor": "model-00001-of-00004.safetensors",
383
+ "vision_backbone.featurizer.blocks.13.mlp.fc1.bias": "model-00001-of-00004.safetensors",
384
+ "vision_backbone.featurizer.blocks.13.mlp.fc1.weight": "model-00001-of-00004.safetensors",
385
+ "vision_backbone.featurizer.blocks.13.mlp.fc2.bias": "model-00001-of-00004.safetensors",
386
+ "vision_backbone.featurizer.blocks.13.mlp.fc2.weight": "model-00001-of-00004.safetensors",
387
+ "vision_backbone.featurizer.blocks.13.norm1.bias": "model-00001-of-00004.safetensors",
388
+ "vision_backbone.featurizer.blocks.13.norm1.weight": "model-00001-of-00004.safetensors",
389
+ "vision_backbone.featurizer.blocks.13.norm2.bias": "model-00001-of-00004.safetensors",
390
+ "vision_backbone.featurizer.blocks.13.norm2.weight": "model-00001-of-00004.safetensors",
391
+ "vision_backbone.featurizer.blocks.14.attn.proj.bias": "model-00001-of-00004.safetensors",
392
+ "vision_backbone.featurizer.blocks.14.attn.proj.weight": "model-00001-of-00004.safetensors",
393
+ "vision_backbone.featurizer.blocks.14.attn.qkv.bias": "model-00001-of-00004.safetensors",
394
+ "vision_backbone.featurizer.blocks.14.attn.qkv.weight": "model-00001-of-00004.safetensors",
395
+ "vision_backbone.featurizer.blocks.14.ls1.scale_factor": "model-00001-of-00004.safetensors",
396
+ "vision_backbone.featurizer.blocks.14.ls2.scale_factor": "model-00001-of-00004.safetensors",
397
+ "vision_backbone.featurizer.blocks.14.mlp.fc1.bias": "model-00001-of-00004.safetensors",
398
+ "vision_backbone.featurizer.blocks.14.mlp.fc1.weight": "model-00001-of-00004.safetensors",
399
+ "vision_backbone.featurizer.blocks.14.mlp.fc2.bias": "model-00001-of-00004.safetensors",
400
+ "vision_backbone.featurizer.blocks.14.mlp.fc2.weight": "model-00001-of-00004.safetensors",
401
+ "vision_backbone.featurizer.blocks.14.norm1.bias": "model-00001-of-00004.safetensors",
402
+ "vision_backbone.featurizer.blocks.14.norm1.weight": "model-00001-of-00004.safetensors",
403
+ "vision_backbone.featurizer.blocks.14.norm2.bias": "model-00001-of-00004.safetensors",
404
+ "vision_backbone.featurizer.blocks.14.norm2.weight": "model-00001-of-00004.safetensors",
405
+ "vision_backbone.featurizer.blocks.15.attn.proj.bias": "model-00001-of-00004.safetensors",
406
+ "vision_backbone.featurizer.blocks.15.attn.proj.weight": "model-00001-of-00004.safetensors",
407
+ "vision_backbone.featurizer.blocks.15.attn.qkv.bias": "model-00001-of-00004.safetensors",
408
+ "vision_backbone.featurizer.blocks.15.attn.qkv.weight": "model-00001-of-00004.safetensors",
409
+ "vision_backbone.featurizer.blocks.15.ls1.scale_factor": "model-00001-of-00004.safetensors",
410
+ "vision_backbone.featurizer.blocks.15.ls2.scale_factor": "model-00001-of-00004.safetensors",
411
+ "vision_backbone.featurizer.blocks.15.mlp.fc1.bias": "model-00001-of-00004.safetensors",
412
+ "vision_backbone.featurizer.blocks.15.mlp.fc1.weight": "model-00001-of-00004.safetensors",
413
+ "vision_backbone.featurizer.blocks.15.mlp.fc2.bias": "model-00001-of-00004.safetensors",
414
+ "vision_backbone.featurizer.blocks.15.mlp.fc2.weight": "model-00001-of-00004.safetensors",
415
+ "vision_backbone.featurizer.blocks.15.norm1.bias": "model-00001-of-00004.safetensors",
416
+ "vision_backbone.featurizer.blocks.15.norm1.weight": "model-00001-of-00004.safetensors",
417
+ "vision_backbone.featurizer.blocks.15.norm2.bias": "model-00001-of-00004.safetensors",
418
+ "vision_backbone.featurizer.blocks.15.norm2.weight": "model-00001-of-00004.safetensors",
419
+ "vision_backbone.featurizer.blocks.16.attn.proj.bias": "model-00001-of-00004.safetensors",
420
+ "vision_backbone.featurizer.blocks.16.attn.proj.weight": "model-00001-of-00004.safetensors",
421
+ "vision_backbone.featurizer.blocks.16.attn.qkv.bias": "model-00001-of-00004.safetensors",
422
+ "vision_backbone.featurizer.blocks.16.attn.qkv.weight": "model-00001-of-00004.safetensors",
423
+ "vision_backbone.featurizer.blocks.16.ls1.scale_factor": "model-00001-of-00004.safetensors",
424
+ "vision_backbone.featurizer.blocks.16.ls2.scale_factor": "model-00001-of-00004.safetensors",
425
+ "vision_backbone.featurizer.blocks.16.mlp.fc1.bias": "model-00001-of-00004.safetensors",
426
+ "vision_backbone.featurizer.blocks.16.mlp.fc1.weight": "model-00001-of-00004.safetensors",
427
+ "vision_backbone.featurizer.blocks.16.mlp.fc2.bias": "model-00001-of-00004.safetensors",
428
+ "vision_backbone.featurizer.blocks.16.mlp.fc2.weight": "model-00001-of-00004.safetensors",
429
+ "vision_backbone.featurizer.blocks.16.norm1.bias": "model-00001-of-00004.safetensors",
430
+ "vision_backbone.featurizer.blocks.16.norm1.weight": "model-00001-of-00004.safetensors",
431
+ "vision_backbone.featurizer.blocks.16.norm2.bias": "model-00001-of-00004.safetensors",
432
+ "vision_backbone.featurizer.blocks.16.norm2.weight": "model-00001-of-00004.safetensors",
433
+ "vision_backbone.featurizer.blocks.17.attn.proj.bias": "model-00001-of-00004.safetensors",
434
+ "vision_backbone.featurizer.blocks.17.attn.proj.weight": "model-00001-of-00004.safetensors",
435
+ "vision_backbone.featurizer.blocks.17.attn.qkv.bias": "model-00001-of-00004.safetensors",
436
+ "vision_backbone.featurizer.blocks.17.attn.qkv.weight": "model-00001-of-00004.safetensors",
437
+ "vision_backbone.featurizer.blocks.17.ls1.scale_factor": "model-00001-of-00004.safetensors",
438
+ "vision_backbone.featurizer.blocks.17.ls2.scale_factor": "model-00001-of-00004.safetensors",
439
+ "vision_backbone.featurizer.blocks.17.mlp.fc1.bias": "model-00001-of-00004.safetensors",
440
+ "vision_backbone.featurizer.blocks.17.mlp.fc1.weight": "model-00001-of-00004.safetensors",
441
+ "vision_backbone.featurizer.blocks.17.mlp.fc2.bias": "model-00001-of-00004.safetensors",
442
+ "vision_backbone.featurizer.blocks.17.mlp.fc2.weight": "model-00001-of-00004.safetensors",
443
+ "vision_backbone.featurizer.blocks.17.norm1.bias": "model-00001-of-00004.safetensors",
444
+ "vision_backbone.featurizer.blocks.17.norm1.weight": "model-00001-of-00004.safetensors",
445
+ "vision_backbone.featurizer.blocks.17.norm2.bias": "model-00001-of-00004.safetensors",
446
+ "vision_backbone.featurizer.blocks.17.norm2.weight": "model-00001-of-00004.safetensors",
447
+ "vision_backbone.featurizer.blocks.18.attn.proj.bias": "model-00001-of-00004.safetensors",
448
+ "vision_backbone.featurizer.blocks.18.attn.proj.weight": "model-00001-of-00004.safetensors",
449
+ "vision_backbone.featurizer.blocks.18.attn.qkv.bias": "model-00001-of-00004.safetensors",
450
+ "vision_backbone.featurizer.blocks.18.attn.qkv.weight": "model-00001-of-00004.safetensors",
451
+ "vision_backbone.featurizer.blocks.18.ls1.scale_factor": "model-00001-of-00004.safetensors",
452
+ "vision_backbone.featurizer.blocks.18.ls2.scale_factor": "model-00001-of-00004.safetensors",
453
+ "vision_backbone.featurizer.blocks.18.mlp.fc1.bias": "model-00001-of-00004.safetensors",
454
+ "vision_backbone.featurizer.blocks.18.mlp.fc1.weight": "model-00001-of-00004.safetensors",
455
+ "vision_backbone.featurizer.blocks.18.mlp.fc2.bias": "model-00001-of-00004.safetensors",
456
+ "vision_backbone.featurizer.blocks.18.mlp.fc2.weight": "model-00001-of-00004.safetensors",
457
+ "vision_backbone.featurizer.blocks.18.norm1.bias": "model-00001-of-00004.safetensors",
458
+ "vision_backbone.featurizer.blocks.18.norm1.weight": "model-00001-of-00004.safetensors",
459
+ "vision_backbone.featurizer.blocks.18.norm2.bias": "model-00001-of-00004.safetensors",
460
+ "vision_backbone.featurizer.blocks.18.norm2.weight": "model-00001-of-00004.safetensors",
461
+ "vision_backbone.featurizer.blocks.19.attn.proj.bias": "model-00001-of-00004.safetensors",
462
+ "vision_backbone.featurizer.blocks.19.attn.proj.weight": "model-00001-of-00004.safetensors",
463
+ "vision_backbone.featurizer.blocks.19.attn.qkv.bias": "model-00001-of-00004.safetensors",
464
+ "vision_backbone.featurizer.blocks.19.attn.qkv.weight": "model-00001-of-00004.safetensors",
465
+ "vision_backbone.featurizer.blocks.19.ls1.scale_factor": "model-00001-of-00004.safetensors",
466
+ "vision_backbone.featurizer.blocks.19.ls2.scale_factor": "model-00001-of-00004.safetensors",
467
+ "vision_backbone.featurizer.blocks.19.mlp.fc1.bias": "model-00001-of-00004.safetensors",
468
+ "vision_backbone.featurizer.blocks.19.mlp.fc1.weight": "model-00001-of-00004.safetensors",
469
+ "vision_backbone.featurizer.blocks.19.mlp.fc2.bias": "model-00001-of-00004.safetensors",
470
+ "vision_backbone.featurizer.blocks.19.mlp.fc2.weight": "model-00001-of-00004.safetensors",
471
+ "vision_backbone.featurizer.blocks.19.norm1.bias": "model-00001-of-00004.safetensors",
472
+ "vision_backbone.featurizer.blocks.19.norm1.weight": "model-00001-of-00004.safetensors",
473
+ "vision_backbone.featurizer.blocks.19.norm2.bias": "model-00001-of-00004.safetensors",
474
+ "vision_backbone.featurizer.blocks.19.norm2.weight": "model-00001-of-00004.safetensors",
475
+ "vision_backbone.featurizer.blocks.2.attn.proj.bias": "model-00001-of-00004.safetensors",
476
+ "vision_backbone.featurizer.blocks.2.attn.proj.weight": "model-00001-of-00004.safetensors",
477
+ "vision_backbone.featurizer.blocks.2.attn.qkv.bias": "model-00001-of-00004.safetensors",
478
+ "vision_backbone.featurizer.blocks.2.attn.qkv.weight": "model-00001-of-00004.safetensors",
479
+ "vision_backbone.featurizer.blocks.2.ls1.scale_factor": "model-00001-of-00004.safetensors",
480
+ "vision_backbone.featurizer.blocks.2.ls2.scale_factor": "model-00001-of-00004.safetensors",
481
+ "vision_backbone.featurizer.blocks.2.mlp.fc1.bias": "model-00001-of-00004.safetensors",
482
+ "vision_backbone.featurizer.blocks.2.mlp.fc1.weight": "model-00001-of-00004.safetensors",
483
+ "vision_backbone.featurizer.blocks.2.mlp.fc2.bias": "model-00001-of-00004.safetensors",
484
+ "vision_backbone.featurizer.blocks.2.mlp.fc2.weight": "model-00001-of-00004.safetensors",
485
+ "vision_backbone.featurizer.blocks.2.norm1.bias": "model-00001-of-00004.safetensors",
486
+ "vision_backbone.featurizer.blocks.2.norm1.weight": "model-00001-of-00004.safetensors",
487
+ "vision_backbone.featurizer.blocks.2.norm2.bias": "model-00001-of-00004.safetensors",
488
+ "vision_backbone.featurizer.blocks.2.norm2.weight": "model-00001-of-00004.safetensors",
489
+ "vision_backbone.featurizer.blocks.20.attn.proj.bias": "model-00001-of-00004.safetensors",
490
+ "vision_backbone.featurizer.blocks.20.attn.proj.weight": "model-00001-of-00004.safetensors",
491
+ "vision_backbone.featurizer.blocks.20.attn.qkv.bias": "model-00001-of-00004.safetensors",
492
+ "vision_backbone.featurizer.blocks.20.attn.qkv.weight": "model-00001-of-00004.safetensors",
493
+ "vision_backbone.featurizer.blocks.20.ls1.scale_factor": "model-00001-of-00004.safetensors",
494
+ "vision_backbone.featurizer.blocks.20.ls2.scale_factor": "model-00001-of-00004.safetensors",
495
+ "vision_backbone.featurizer.blocks.20.mlp.fc1.bias": "model-00001-of-00004.safetensors",
496
+ "vision_backbone.featurizer.blocks.20.mlp.fc1.weight": "model-00001-of-00004.safetensors",
497
+ "vision_backbone.featurizer.blocks.20.mlp.fc2.bias": "model-00001-of-00004.safetensors",
498
+ "vision_backbone.featurizer.blocks.20.mlp.fc2.weight": "model-00001-of-00004.safetensors",
499
+ "vision_backbone.featurizer.blocks.20.norm1.bias": "model-00001-of-00004.safetensors",
500
+ "vision_backbone.featurizer.blocks.20.norm1.weight": "model-00001-of-00004.safetensors",
501
+ "vision_backbone.featurizer.blocks.20.norm2.bias": "model-00001-of-00004.safetensors",
502
+ "vision_backbone.featurizer.blocks.20.norm2.weight": "model-00001-of-00004.safetensors",
503
+ "vision_backbone.featurizer.blocks.21.attn.proj.bias": "model-00001-of-00004.safetensors",
504
+ "vision_backbone.featurizer.blocks.21.attn.proj.weight": "model-00001-of-00004.safetensors",
505
+ "vision_backbone.featurizer.blocks.21.attn.qkv.bias": "model-00001-of-00004.safetensors",
506
+ "vision_backbone.featurizer.blocks.21.attn.qkv.weight": "model-00001-of-00004.safetensors",
507
+ "vision_backbone.featurizer.blocks.21.ls1.scale_factor": "model-00001-of-00004.safetensors",
508
+ "vision_backbone.featurizer.blocks.21.ls2.scale_factor": "model-00001-of-00004.safetensors",
509
+ "vision_backbone.featurizer.blocks.21.mlp.fc1.bias": "model-00001-of-00004.safetensors",
510
+ "vision_backbone.featurizer.blocks.21.mlp.fc1.weight": "model-00001-of-00004.safetensors",
511
+ "vision_backbone.featurizer.blocks.21.mlp.fc2.bias": "model-00001-of-00004.safetensors",
512
+ "vision_backbone.featurizer.blocks.21.mlp.fc2.weight": "model-00001-of-00004.safetensors",
513
+ "vision_backbone.featurizer.blocks.21.norm1.bias": "model-00001-of-00004.safetensors",
514
+ "vision_backbone.featurizer.blocks.21.norm1.weight": "model-00001-of-00004.safetensors",
515
+ "vision_backbone.featurizer.blocks.21.norm2.bias": "model-00001-of-00004.safetensors",
516
+ "vision_backbone.featurizer.blocks.21.norm2.weight": "model-00001-of-00004.safetensors",
517
+ "vision_backbone.featurizer.blocks.22.attn.proj.bias": "model-00001-of-00004.safetensors",
518
+ "vision_backbone.featurizer.blocks.22.attn.proj.weight": "model-00001-of-00004.safetensors",
519
+ "vision_backbone.featurizer.blocks.22.attn.qkv.bias": "model-00001-of-00004.safetensors",
520
+ "vision_backbone.featurizer.blocks.22.attn.qkv.weight": "model-00001-of-00004.safetensors",
521
+ "vision_backbone.featurizer.blocks.22.ls1.scale_factor": "model-00001-of-00004.safetensors",
522
+ "vision_backbone.featurizer.blocks.22.ls2.scale_factor": "model-00001-of-00004.safetensors",
523
+ "vision_backbone.featurizer.blocks.22.mlp.fc1.bias": "model-00001-of-00004.safetensors",
524
+ "vision_backbone.featurizer.blocks.22.mlp.fc1.weight": "model-00001-of-00004.safetensors",
525
+ "vision_backbone.featurizer.blocks.22.mlp.fc2.bias": "model-00001-of-00004.safetensors",
526
+ "vision_backbone.featurizer.blocks.22.mlp.fc2.weight": "model-00001-of-00004.safetensors",
527
+ "vision_backbone.featurizer.blocks.22.norm1.bias": "model-00001-of-00004.safetensors",
528
+ "vision_backbone.featurizer.blocks.22.norm1.weight": "model-00001-of-00004.safetensors",
529
+ "vision_backbone.featurizer.blocks.22.norm2.bias": "model-00001-of-00004.safetensors",
530
+ "vision_backbone.featurizer.blocks.22.norm2.weight": "model-00001-of-00004.safetensors",
531
+ "vision_backbone.featurizer.blocks.23.attn.proj.bias": "model-00001-of-00004.safetensors",
532
+ "vision_backbone.featurizer.blocks.23.attn.proj.weight": "model-00001-of-00004.safetensors",
533
+ "vision_backbone.featurizer.blocks.23.attn.qkv.bias": "model-00001-of-00004.safetensors",
534
+ "vision_backbone.featurizer.blocks.23.attn.qkv.weight": "model-00001-of-00004.safetensors",
535
+ "vision_backbone.featurizer.blocks.23.ls1.scale_factor": "model-00001-of-00004.safetensors",
536
+ "vision_backbone.featurizer.blocks.23.ls2.scale_factor": "model-00001-of-00004.safetensors",
537
+ "vision_backbone.featurizer.blocks.23.mlp.fc1.bias": "model-00001-of-00004.safetensors",
538
+ "vision_backbone.featurizer.blocks.23.mlp.fc1.weight": "model-00001-of-00004.safetensors",
539
+ "vision_backbone.featurizer.blocks.23.mlp.fc2.bias": "model-00001-of-00004.safetensors",
540
+ "vision_backbone.featurizer.blocks.23.mlp.fc2.weight": "model-00001-of-00004.safetensors",
541
+ "vision_backbone.featurizer.blocks.23.norm1.bias": "model-00001-of-00004.safetensors",
542
+ "vision_backbone.featurizer.blocks.23.norm1.weight": "model-00001-of-00004.safetensors",
543
+ "vision_backbone.featurizer.blocks.23.norm2.bias": "model-00001-of-00004.safetensors",
544
+ "vision_backbone.featurizer.blocks.23.norm2.weight": "model-00001-of-00004.safetensors",
545
+ "vision_backbone.featurizer.blocks.3.attn.proj.bias": "model-00001-of-00004.safetensors",
546
+ "vision_backbone.featurizer.blocks.3.attn.proj.weight": "model-00001-of-00004.safetensors",
547
+ "vision_backbone.featurizer.blocks.3.attn.qkv.bias": "model-00001-of-00004.safetensors",
548
+ "vision_backbone.featurizer.blocks.3.attn.qkv.weight": "model-00001-of-00004.safetensors",
549
+ "vision_backbone.featurizer.blocks.3.ls1.scale_factor": "model-00001-of-00004.safetensors",
550
+ "vision_backbone.featurizer.blocks.3.ls2.scale_factor": "model-00001-of-00004.safetensors",
551
+ "vision_backbone.featurizer.blocks.3.mlp.fc1.bias": "model-00001-of-00004.safetensors",
552
+ "vision_backbone.featurizer.blocks.3.mlp.fc1.weight": "model-00001-of-00004.safetensors",
553
+ "vision_backbone.featurizer.blocks.3.mlp.fc2.bias": "model-00001-of-00004.safetensors",
554
+ "vision_backbone.featurizer.blocks.3.mlp.fc2.weight": "model-00001-of-00004.safetensors",
555
+ "vision_backbone.featurizer.blocks.3.norm1.bias": "model-00001-of-00004.safetensors",
556
+ "vision_backbone.featurizer.blocks.3.norm1.weight": "model-00001-of-00004.safetensors",
557
+ "vision_backbone.featurizer.blocks.3.norm2.bias": "model-00001-of-00004.safetensors",
558
+ "vision_backbone.featurizer.blocks.3.norm2.weight": "model-00001-of-00004.safetensors",
559
+ "vision_backbone.featurizer.blocks.4.attn.proj.bias": "model-00001-of-00004.safetensors",
560
+ "vision_backbone.featurizer.blocks.4.attn.proj.weight": "model-00001-of-00004.safetensors",
561
+ "vision_backbone.featurizer.blocks.4.attn.qkv.bias": "model-00001-of-00004.safetensors",
562
+ "vision_backbone.featurizer.blocks.4.attn.qkv.weight": "model-00001-of-00004.safetensors",
563
+ "vision_backbone.featurizer.blocks.4.ls1.scale_factor": "model-00001-of-00004.safetensors",
564
+ "vision_backbone.featurizer.blocks.4.ls2.scale_factor": "model-00001-of-00004.safetensors",
565
+ "vision_backbone.featurizer.blocks.4.mlp.fc1.bias": "model-00001-of-00004.safetensors",
566
+ "vision_backbone.featurizer.blocks.4.mlp.fc1.weight": "model-00001-of-00004.safetensors",
567
+ "vision_backbone.featurizer.blocks.4.mlp.fc2.bias": "model-00001-of-00004.safetensors",
568
+ "vision_backbone.featurizer.blocks.4.mlp.fc2.weight": "model-00001-of-00004.safetensors",
569
+ "vision_backbone.featurizer.blocks.4.norm1.bias": "model-00001-of-00004.safetensors",
570
+ "vision_backbone.featurizer.blocks.4.norm1.weight": "model-00001-of-00004.safetensors",
571
+ "vision_backbone.featurizer.blocks.4.norm2.bias": "model-00001-of-00004.safetensors",
572
+ "vision_backbone.featurizer.blocks.4.norm2.weight": "model-00001-of-00004.safetensors",
573
+ "vision_backbone.featurizer.blocks.5.attn.proj.bias": "model-00001-of-00004.safetensors",
574
+ "vision_backbone.featurizer.blocks.5.attn.proj.weight": "model-00001-of-00004.safetensors",
575
+ "vision_backbone.featurizer.blocks.5.attn.qkv.bias": "model-00001-of-00004.safetensors",
576
+ "vision_backbone.featurizer.blocks.5.attn.qkv.weight": "model-00001-of-00004.safetensors",
577
+ "vision_backbone.featurizer.blocks.5.ls1.scale_factor": "model-00001-of-00004.safetensors",
578
+ "vision_backbone.featurizer.blocks.5.ls2.scale_factor": "model-00001-of-00004.safetensors",
579
+ "vision_backbone.featurizer.blocks.5.mlp.fc1.bias": "model-00001-of-00004.safetensors",
580
+ "vision_backbone.featurizer.blocks.5.mlp.fc1.weight": "model-00001-of-00004.safetensors",
581
+ "vision_backbone.featurizer.blocks.5.mlp.fc2.bias": "model-00001-of-00004.safetensors",
582
+ "vision_backbone.featurizer.blocks.5.mlp.fc2.weight": "model-00001-of-00004.safetensors",
583
+ "vision_backbone.featurizer.blocks.5.norm1.bias": "model-00001-of-00004.safetensors",
584
+ "vision_backbone.featurizer.blocks.5.norm1.weight": "model-00001-of-00004.safetensors",
585
+ "vision_backbone.featurizer.blocks.5.norm2.bias": "model-00001-of-00004.safetensors",
586
+ "vision_backbone.featurizer.blocks.5.norm2.weight": "model-00001-of-00004.safetensors",
587
+ "vision_backbone.featurizer.blocks.6.attn.proj.bias": "model-00001-of-00004.safetensors",
588
+ "vision_backbone.featurizer.blocks.6.attn.proj.weight": "model-00001-of-00004.safetensors",
589
+ "vision_backbone.featurizer.blocks.6.attn.qkv.bias": "model-00001-of-00004.safetensors",
590
+ "vision_backbone.featurizer.blocks.6.attn.qkv.weight": "model-00001-of-00004.safetensors",
591
+ "vision_backbone.featurizer.blocks.6.ls1.scale_factor": "model-00001-of-00004.safetensors",
592
+ "vision_backbone.featurizer.blocks.6.ls2.scale_factor": "model-00001-of-00004.safetensors",
593
+ "vision_backbone.featurizer.blocks.6.mlp.fc1.bias": "model-00001-of-00004.safetensors",
594
+ "vision_backbone.featurizer.blocks.6.mlp.fc1.weight": "model-00001-of-00004.safetensors",
595
+ "vision_backbone.featurizer.blocks.6.mlp.fc2.bias": "model-00001-of-00004.safetensors",
596
+ "vision_backbone.featurizer.blocks.6.mlp.fc2.weight": "model-00001-of-00004.safetensors",
597
+ "vision_backbone.featurizer.blocks.6.norm1.bias": "model-00001-of-00004.safetensors",
598
+ "vision_backbone.featurizer.blocks.6.norm1.weight": "model-00001-of-00004.safetensors",
599
+ "vision_backbone.featurizer.blocks.6.norm2.bias": "model-00001-of-00004.safetensors",
600
+ "vision_backbone.featurizer.blocks.6.norm2.weight": "model-00001-of-00004.safetensors",
601
+ "vision_backbone.featurizer.blocks.7.attn.proj.bias": "model-00001-of-00004.safetensors",
602
+ "vision_backbone.featurizer.blocks.7.attn.proj.weight": "model-00001-of-00004.safetensors",
603
+ "vision_backbone.featurizer.blocks.7.attn.qkv.bias": "model-00001-of-00004.safetensors",
604
+ "vision_backbone.featurizer.blocks.7.attn.qkv.weight": "model-00001-of-00004.safetensors",
605
+ "vision_backbone.featurizer.blocks.7.ls1.scale_factor": "model-00001-of-00004.safetensors",
606
+ "vision_backbone.featurizer.blocks.7.ls2.scale_factor": "model-00001-of-00004.safetensors",
607
+ "vision_backbone.featurizer.blocks.7.mlp.fc1.bias": "model-00001-of-00004.safetensors",
608
+ "vision_backbone.featurizer.blocks.7.mlp.fc1.weight": "model-00001-of-00004.safetensors",
609
+ "vision_backbone.featurizer.blocks.7.mlp.fc2.bias": "model-00001-of-00004.safetensors",
610
+ "vision_backbone.featurizer.blocks.7.mlp.fc2.weight": "model-00001-of-00004.safetensors",
611
+ "vision_backbone.featurizer.blocks.7.norm1.bias": "model-00001-of-00004.safetensors",
612
+ "vision_backbone.featurizer.blocks.7.norm1.weight": "model-00001-of-00004.safetensors",
613
+ "vision_backbone.featurizer.blocks.7.norm2.bias": "model-00001-of-00004.safetensors",
614
+ "vision_backbone.featurizer.blocks.7.norm2.weight": "model-00001-of-00004.safetensors",
615
+ "vision_backbone.featurizer.blocks.8.attn.proj.bias": "model-00001-of-00004.safetensors",
616
+ "vision_backbone.featurizer.blocks.8.attn.proj.weight": "model-00001-of-00004.safetensors",
617
+ "vision_backbone.featurizer.blocks.8.attn.qkv.bias": "model-00001-of-00004.safetensors",
618
+ "vision_backbone.featurizer.blocks.8.attn.qkv.weight": "model-00001-of-00004.safetensors",
619
+ "vision_backbone.featurizer.blocks.8.ls1.scale_factor": "model-00001-of-00004.safetensors",
620
+ "vision_backbone.featurizer.blocks.8.ls2.scale_factor": "model-00001-of-00004.safetensors",
621
+ "vision_backbone.featurizer.blocks.8.mlp.fc1.bias": "model-00001-of-00004.safetensors",
622
+ "vision_backbone.featurizer.blocks.8.mlp.fc1.weight": "model-00001-of-00004.safetensors",
623
+ "vision_backbone.featurizer.blocks.8.mlp.fc2.bias": "model-00001-of-00004.safetensors",
624
+ "vision_backbone.featurizer.blocks.8.mlp.fc2.weight": "model-00001-of-00004.safetensors",
625
+ "vision_backbone.featurizer.blocks.8.norm1.bias": "model-00001-of-00004.safetensors",
626
+ "vision_backbone.featurizer.blocks.8.norm1.weight": "model-00001-of-00004.safetensors",
627
+ "vision_backbone.featurizer.blocks.8.norm2.bias": "model-00001-of-00004.safetensors",
628
+ "vision_backbone.featurizer.blocks.8.norm2.weight": "model-00001-of-00004.safetensors",
629
+ "vision_backbone.featurizer.blocks.9.attn.proj.bias": "model-00001-of-00004.safetensors",
630
+ "vision_backbone.featurizer.blocks.9.attn.proj.weight": "model-00001-of-00004.safetensors",
631
+ "vision_backbone.featurizer.blocks.9.attn.qkv.bias": "model-00001-of-00004.safetensors",
632
+ "vision_backbone.featurizer.blocks.9.attn.qkv.weight": "model-00001-of-00004.safetensors",
633
+ "vision_backbone.featurizer.blocks.9.ls1.scale_factor": "model-00001-of-00004.safetensors",
634
+ "vision_backbone.featurizer.blocks.9.ls2.scale_factor": "model-00001-of-00004.safetensors",
635
+ "vision_backbone.featurizer.blocks.9.mlp.fc1.bias": "model-00001-of-00004.safetensors",
636
+ "vision_backbone.featurizer.blocks.9.mlp.fc1.weight": "model-00001-of-00004.safetensors",
637
+ "vision_backbone.featurizer.blocks.9.mlp.fc2.bias": "model-00001-of-00004.safetensors",
638
+ "vision_backbone.featurizer.blocks.9.mlp.fc2.weight": "model-00001-of-00004.safetensors",
639
+ "vision_backbone.featurizer.blocks.9.norm1.bias": "model-00001-of-00004.safetensors",
640
+ "vision_backbone.featurizer.blocks.9.norm1.weight": "model-00001-of-00004.safetensors",
641
+ "vision_backbone.featurizer.blocks.9.norm2.bias": "model-00001-of-00004.safetensors",
642
+ "vision_backbone.featurizer.blocks.9.norm2.weight": "model-00001-of-00004.safetensors",
643
+ "vision_backbone.featurizer.cls_token": "model-00001-of-00004.safetensors",
644
+ "vision_backbone.featurizer.norm.bias": "model-00001-of-00004.safetensors",
645
+ "vision_backbone.featurizer.norm.weight": "model-00001-of-00004.safetensors",
646
+ "vision_backbone.featurizer.patch_embed.proj.bias": "model-00001-of-00004.safetensors",
647
+ "vision_backbone.featurizer.patch_embed.proj.weight": "model-00001-of-00004.safetensors",
648
+ "vision_backbone.featurizer.pos_embed": "model-00001-of-00004.safetensors",
649
+ "vision_backbone.featurizer.reg_token": "model-00001-of-00004.safetensors",
650
+ "vision_backbone.fused_featurizer.attn_pool.kv.bias": "model-00001-of-00004.safetensors",
651
+ "vision_backbone.fused_featurizer.attn_pool.kv.weight": "model-00001-of-00004.safetensors",
652
+ "vision_backbone.fused_featurizer.attn_pool.latent": "model-00001-of-00004.safetensors",
653
+ "vision_backbone.fused_featurizer.attn_pool.mlp.fc1.bias": "model-00001-of-00004.safetensors",
654
+ "vision_backbone.fused_featurizer.attn_pool.mlp.fc1.weight": "model-00001-of-00004.safetensors",
655
+ "vision_backbone.fused_featurizer.attn_pool.mlp.fc2.bias": "model-00001-of-00004.safetensors",
656
+ "vision_backbone.fused_featurizer.attn_pool.mlp.fc2.weight": "model-00001-of-00004.safetensors",
657
+ "vision_backbone.fused_featurizer.attn_pool.norm.bias": "model-00001-of-00004.safetensors",
658
+ "vision_backbone.fused_featurizer.attn_pool.norm.weight": "model-00001-of-00004.safetensors",
659
+ "vision_backbone.fused_featurizer.attn_pool.proj.bias": "model-00001-of-00004.safetensors",
660
+ "vision_backbone.fused_featurizer.attn_pool.proj.weight": "model-00001-of-00004.safetensors",
661
+ "vision_backbone.fused_featurizer.attn_pool.q.bias": "model-00001-of-00004.safetensors",
662
+ "vision_backbone.fused_featurizer.attn_pool.q.weight": "model-00001-of-00004.safetensors",
663
+ "vision_backbone.fused_featurizer.blocks.0.attn.proj.bias": "model-00001-of-00004.safetensors",
664
+ "vision_backbone.fused_featurizer.blocks.0.attn.proj.weight": "model-00001-of-00004.safetensors",
665
+ "vision_backbone.fused_featurizer.blocks.0.attn.qkv.bias": "model-00001-of-00004.safetensors",
666
+ "vision_backbone.fused_featurizer.blocks.0.attn.qkv.weight": "model-00001-of-00004.safetensors",
667
+ "vision_backbone.fused_featurizer.blocks.0.mlp.fc1.bias": "model-00001-of-00004.safetensors",
668
+ "vision_backbone.fused_featurizer.blocks.0.mlp.fc1.weight": "model-00001-of-00004.safetensors",
669
+ "vision_backbone.fused_featurizer.blocks.0.mlp.fc2.bias": "model-00001-of-00004.safetensors",
670
+ "vision_backbone.fused_featurizer.blocks.0.mlp.fc2.weight": "model-00001-of-00004.safetensors",
671
+ "vision_backbone.fused_featurizer.blocks.0.norm1.bias": "model-00001-of-00004.safetensors",
672
+ "vision_backbone.fused_featurizer.blocks.0.norm1.weight": "model-00001-of-00004.safetensors",
673
+ "vision_backbone.fused_featurizer.blocks.0.norm2.bias": "model-00001-of-00004.safetensors",
674
+ "vision_backbone.fused_featurizer.blocks.0.norm2.weight": "model-00001-of-00004.safetensors",
675
+ "vision_backbone.fused_featurizer.blocks.1.attn.proj.bias": "model-00001-of-00004.safetensors",
676
+ "vision_backbone.fused_featurizer.blocks.1.attn.proj.weight": "model-00001-of-00004.safetensors",
677
+ "vision_backbone.fused_featurizer.blocks.1.attn.qkv.bias": "model-00001-of-00004.safetensors",
678
+ "vision_backbone.fused_featurizer.blocks.1.attn.qkv.weight": "model-00001-of-00004.safetensors",
679
+ "vision_backbone.fused_featurizer.blocks.1.mlp.fc1.bias": "model-00001-of-00004.safetensors",
680
+ "vision_backbone.fused_featurizer.blocks.1.mlp.fc1.weight": "model-00001-of-00004.safetensors",
681
+ "vision_backbone.fused_featurizer.blocks.1.mlp.fc2.bias": "model-00001-of-00004.safetensors",
682
+ "vision_backbone.fused_featurizer.blocks.1.mlp.fc2.weight": "model-00001-of-00004.safetensors",
683
+ "vision_backbone.fused_featurizer.blocks.1.norm1.bias": "model-00001-of-00004.safetensors",
684
+ "vision_backbone.fused_featurizer.blocks.1.norm1.weight": "model-00001-of-00004.safetensors",
685
+ "vision_backbone.fused_featurizer.blocks.1.norm2.bias": "model-00001-of-00004.safetensors",
686
+ "vision_backbone.fused_featurizer.blocks.1.norm2.weight": "model-00001-of-00004.safetensors",
687
+ "vision_backbone.fused_featurizer.blocks.10.attn.proj.bias": "model-00001-of-00004.safetensors",
688
+ "vision_backbone.fused_featurizer.blocks.10.attn.proj.weight": "model-00001-of-00004.safetensors",
689
+ "vision_backbone.fused_featurizer.blocks.10.attn.qkv.bias": "model-00001-of-00004.safetensors",
690
+ "vision_backbone.fused_featurizer.blocks.10.attn.qkv.weight": "model-00001-of-00004.safetensors",
691
+ "vision_backbone.fused_featurizer.blocks.10.mlp.fc1.bias": "model-00001-of-00004.safetensors",
692
+ "vision_backbone.fused_featurizer.blocks.10.mlp.fc1.weight": "model-00001-of-00004.safetensors",
693
+ "vision_backbone.fused_featurizer.blocks.10.mlp.fc2.bias": "model-00001-of-00004.safetensors",
694
+ "vision_backbone.fused_featurizer.blocks.10.mlp.fc2.weight": "model-00001-of-00004.safetensors",
695
+ "vision_backbone.fused_featurizer.blocks.10.norm1.bias": "model-00001-of-00004.safetensors",
696
+ "vision_backbone.fused_featurizer.blocks.10.norm1.weight": "model-00001-of-00004.safetensors",
697
+ "vision_backbone.fused_featurizer.blocks.10.norm2.bias": "model-00001-of-00004.safetensors",
698
+ "vision_backbone.fused_featurizer.blocks.10.norm2.weight": "model-00001-of-00004.safetensors",
699
+ "vision_backbone.fused_featurizer.blocks.11.attn.proj.bias": "model-00001-of-00004.safetensors",
700
+ "vision_backbone.fused_featurizer.blocks.11.attn.proj.weight": "model-00001-of-00004.safetensors",
701
+ "vision_backbone.fused_featurizer.blocks.11.attn.qkv.bias": "model-00001-of-00004.safetensors",
702
+ "vision_backbone.fused_featurizer.blocks.11.attn.qkv.weight": "model-00001-of-00004.safetensors",
703
+ "vision_backbone.fused_featurizer.blocks.11.mlp.fc1.bias": "model-00001-of-00004.safetensors",
704
+ "vision_backbone.fused_featurizer.blocks.11.mlp.fc1.weight": "model-00001-of-00004.safetensors",
705
+ "vision_backbone.fused_featurizer.blocks.11.mlp.fc2.bias": "model-00001-of-00004.safetensors",
706
+ "vision_backbone.fused_featurizer.blocks.11.mlp.fc2.weight": "model-00001-of-00004.safetensors",
707
+ "vision_backbone.fused_featurizer.blocks.11.norm1.bias": "model-00001-of-00004.safetensors",
708
+ "vision_backbone.fused_featurizer.blocks.11.norm1.weight": "model-00001-of-00004.safetensors",
709
+ "vision_backbone.fused_featurizer.blocks.11.norm2.bias": "model-00001-of-00004.safetensors",
710
+ "vision_backbone.fused_featurizer.blocks.11.norm2.weight": "model-00001-of-00004.safetensors",
711
+ "vision_backbone.fused_featurizer.blocks.12.attn.proj.bias": "model-00001-of-00004.safetensors",
712
+ "vision_backbone.fused_featurizer.blocks.12.attn.proj.weight": "model-00001-of-00004.safetensors",
713
+ "vision_backbone.fused_featurizer.blocks.12.attn.qkv.bias": "model-00001-of-00004.safetensors",
714
+ "vision_backbone.fused_featurizer.blocks.12.attn.qkv.weight": "model-00001-of-00004.safetensors",
715
+ "vision_backbone.fused_featurizer.blocks.12.mlp.fc1.bias": "model-00001-of-00004.safetensors",
716
+ "vision_backbone.fused_featurizer.blocks.12.mlp.fc1.weight": "model-00001-of-00004.safetensors",
717
+ "vision_backbone.fused_featurizer.blocks.12.mlp.fc2.bias": "model-00001-of-00004.safetensors",
718
+ "vision_backbone.fused_featurizer.blocks.12.mlp.fc2.weight": "model-00001-of-00004.safetensors",
719
+ "vision_backbone.fused_featurizer.blocks.12.norm1.bias": "model-00001-of-00004.safetensors",
720
+ "vision_backbone.fused_featurizer.blocks.12.norm1.weight": "model-00001-of-00004.safetensors",
721
+ "vision_backbone.fused_featurizer.blocks.12.norm2.bias": "model-00001-of-00004.safetensors",
722
+ "vision_backbone.fused_featurizer.blocks.12.norm2.weight": "model-00001-of-00004.safetensors",
723
+ "vision_backbone.fused_featurizer.blocks.13.attn.proj.bias": "model-00001-of-00004.safetensors",
724
+ "vision_backbone.fused_featurizer.blocks.13.attn.proj.weight": "model-00001-of-00004.safetensors",
725
+ "vision_backbone.fused_featurizer.blocks.13.attn.qkv.bias": "model-00001-of-00004.safetensors",
726
+ "vision_backbone.fused_featurizer.blocks.13.attn.qkv.weight": "model-00001-of-00004.safetensors",
727
+ "vision_backbone.fused_featurizer.blocks.13.mlp.fc1.bias": "model-00001-of-00004.safetensors",
728
+ "vision_backbone.fused_featurizer.blocks.13.mlp.fc1.weight": "model-00001-of-00004.safetensors",
729
+ "vision_backbone.fused_featurizer.blocks.13.mlp.fc2.bias": "model-00001-of-00004.safetensors",
730
+ "vision_backbone.fused_featurizer.blocks.13.mlp.fc2.weight": "model-00001-of-00004.safetensors",
731
+ "vision_backbone.fused_featurizer.blocks.13.norm1.bias": "model-00001-of-00004.safetensors",
732
+ "vision_backbone.fused_featurizer.blocks.13.norm1.weight": "model-00001-of-00004.safetensors",
733
+ "vision_backbone.fused_featurizer.blocks.13.norm2.bias": "model-00001-of-00004.safetensors",
734
+ "vision_backbone.fused_featurizer.blocks.13.norm2.weight": "model-00001-of-00004.safetensors",
735
+ "vision_backbone.fused_featurizer.blocks.14.attn.proj.bias": "model-00001-of-00004.safetensors",
736
+ "vision_backbone.fused_featurizer.blocks.14.attn.proj.weight": "model-00001-of-00004.safetensors",
737
+ "vision_backbone.fused_featurizer.blocks.14.attn.qkv.bias": "model-00001-of-00004.safetensors",
738
+ "vision_backbone.fused_featurizer.blocks.14.attn.qkv.weight": "model-00001-of-00004.safetensors",
739
+ "vision_backbone.fused_featurizer.blocks.14.mlp.fc1.bias": "model-00001-of-00004.safetensors",
740
+ "vision_backbone.fused_featurizer.blocks.14.mlp.fc1.weight": "model-00001-of-00004.safetensors",
741
+ "vision_backbone.fused_featurizer.blocks.14.mlp.fc2.bias": "model-00001-of-00004.safetensors",
742
+ "vision_backbone.fused_featurizer.blocks.14.mlp.fc2.weight": "model-00001-of-00004.safetensors",
743
+ "vision_backbone.fused_featurizer.blocks.14.norm1.bias": "model-00001-of-00004.safetensors",
744
+ "vision_backbone.fused_featurizer.blocks.14.norm1.weight": "model-00001-of-00004.safetensors",
745
+ "vision_backbone.fused_featurizer.blocks.14.norm2.bias": "model-00001-of-00004.safetensors",
746
+ "vision_backbone.fused_featurizer.blocks.14.norm2.weight": "model-00001-of-00004.safetensors",
747
+ "vision_backbone.fused_featurizer.blocks.15.attn.proj.bias": "model-00001-of-00004.safetensors",
748
+ "vision_backbone.fused_featurizer.blocks.15.attn.proj.weight": "model-00001-of-00004.safetensors",
749
+ "vision_backbone.fused_featurizer.blocks.15.attn.qkv.bias": "model-00001-of-00004.safetensors",
750
+ "vision_backbone.fused_featurizer.blocks.15.attn.qkv.weight": "model-00001-of-00004.safetensors",
751
+ "vision_backbone.fused_featurizer.blocks.15.mlp.fc1.bias": "model-00001-of-00004.safetensors",
752
+ "vision_backbone.fused_featurizer.blocks.15.mlp.fc1.weight": "model-00001-of-00004.safetensors",
753
+ "vision_backbone.fused_featurizer.blocks.15.mlp.fc2.bias": "model-00001-of-00004.safetensors",
754
+ "vision_backbone.fused_featurizer.blocks.15.mlp.fc2.weight": "model-00001-of-00004.safetensors",
755
+ "vision_backbone.fused_featurizer.blocks.15.norm1.bias": "model-00001-of-00004.safetensors",
756
+ "vision_backbone.fused_featurizer.blocks.15.norm1.weight": "model-00001-of-00004.safetensors",
757
+ "vision_backbone.fused_featurizer.blocks.15.norm2.bias": "model-00001-of-00004.safetensors",
758
+ "vision_backbone.fused_featurizer.blocks.15.norm2.weight": "model-00001-of-00004.safetensors",
759
+ "vision_backbone.fused_featurizer.blocks.16.attn.proj.bias": "model-00001-of-00004.safetensors",
760
+ "vision_backbone.fused_featurizer.blocks.16.attn.proj.weight": "model-00001-of-00004.safetensors",
761
+ "vision_backbone.fused_featurizer.blocks.16.attn.qkv.bias": "model-00001-of-00004.safetensors",
762
+ "vision_backbone.fused_featurizer.blocks.16.attn.qkv.weight": "model-00001-of-00004.safetensors",
763
+ "vision_backbone.fused_featurizer.blocks.16.mlp.fc1.bias": "model-00001-of-00004.safetensors",
764
+ "vision_backbone.fused_featurizer.blocks.16.mlp.fc1.weight": "model-00001-of-00004.safetensors",
765
+ "vision_backbone.fused_featurizer.blocks.16.mlp.fc2.bias": "model-00001-of-00004.safetensors",
766
+ "vision_backbone.fused_featurizer.blocks.16.mlp.fc2.weight": "model-00001-of-00004.safetensors",
767
+ "vision_backbone.fused_featurizer.blocks.16.norm1.bias": "model-00001-of-00004.safetensors",
768
+ "vision_backbone.fused_featurizer.blocks.16.norm1.weight": "model-00001-of-00004.safetensors",
769
+ "vision_backbone.fused_featurizer.blocks.16.norm2.bias": "model-00001-of-00004.safetensors",
770
+ "vision_backbone.fused_featurizer.blocks.16.norm2.weight": "model-00001-of-00004.safetensors",
771
+ "vision_backbone.fused_featurizer.blocks.17.attn.proj.bias": "model-00001-of-00004.safetensors",
772
+ "vision_backbone.fused_featurizer.blocks.17.attn.proj.weight": "model-00001-of-00004.safetensors",
773
+ "vision_backbone.fused_featurizer.blocks.17.attn.qkv.bias": "model-00001-of-00004.safetensors",
774
+ "vision_backbone.fused_featurizer.blocks.17.attn.qkv.weight": "model-00001-of-00004.safetensors",
775
+ "vision_backbone.fused_featurizer.blocks.17.mlp.fc1.bias": "model-00001-of-00004.safetensors",
776
+ "vision_backbone.fused_featurizer.blocks.17.mlp.fc1.weight": "model-00001-of-00004.safetensors",
777
+ "vision_backbone.fused_featurizer.blocks.17.mlp.fc2.bias": "model-00001-of-00004.safetensors",
778
+ "vision_backbone.fused_featurizer.blocks.17.mlp.fc2.weight": "model-00001-of-00004.safetensors",
779
+ "vision_backbone.fused_featurizer.blocks.17.norm1.bias": "model-00001-of-00004.safetensors",
780
+ "vision_backbone.fused_featurizer.blocks.17.norm1.weight": "model-00001-of-00004.safetensors",
781
+ "vision_backbone.fused_featurizer.blocks.17.norm2.bias": "model-00001-of-00004.safetensors",
782
+ "vision_backbone.fused_featurizer.blocks.17.norm2.weight": "model-00001-of-00004.safetensors",
783
+ "vision_backbone.fused_featurizer.blocks.18.attn.proj.bias": "model-00001-of-00004.safetensors",
784
+ "vision_backbone.fused_featurizer.blocks.18.attn.proj.weight": "model-00001-of-00004.safetensors",
785
+ "vision_backbone.fused_featurizer.blocks.18.attn.qkv.bias": "model-00001-of-00004.safetensors",
786
+ "vision_backbone.fused_featurizer.blocks.18.attn.qkv.weight": "model-00001-of-00004.safetensors",
787
+ "vision_backbone.fused_featurizer.blocks.18.mlp.fc1.bias": "model-00001-of-00004.safetensors",
788
+ "vision_backbone.fused_featurizer.blocks.18.mlp.fc1.weight": "model-00001-of-00004.safetensors",
789
+ "vision_backbone.fused_featurizer.blocks.18.mlp.fc2.bias": "model-00001-of-00004.safetensors",
790
+ "vision_backbone.fused_featurizer.blocks.18.mlp.fc2.weight": "model-00001-of-00004.safetensors",
791
+ "vision_backbone.fused_featurizer.blocks.18.norm1.bias": "model-00001-of-00004.safetensors",
792
+ "vision_backbone.fused_featurizer.blocks.18.norm1.weight": "model-00001-of-00004.safetensors",
793
+ "vision_backbone.fused_featurizer.blocks.18.norm2.bias": "model-00001-of-00004.safetensors",
794
+ "vision_backbone.fused_featurizer.blocks.18.norm2.weight": "model-00001-of-00004.safetensors",
795
+ "vision_backbone.fused_featurizer.blocks.19.attn.proj.bias": "model-00001-of-00004.safetensors",
796
+ "vision_backbone.fused_featurizer.blocks.19.attn.proj.weight": "model-00001-of-00004.safetensors",
797
+ "vision_backbone.fused_featurizer.blocks.19.attn.qkv.bias": "model-00001-of-00004.safetensors",
798
+ "vision_backbone.fused_featurizer.blocks.19.attn.qkv.weight": "model-00001-of-00004.safetensors",
799
+ "vision_backbone.fused_featurizer.blocks.19.mlp.fc1.bias": "model-00001-of-00004.safetensors",
800
+ "vision_backbone.fused_featurizer.blocks.19.mlp.fc1.weight": "model-00001-of-00004.safetensors",
801
+ "vision_backbone.fused_featurizer.blocks.19.mlp.fc2.bias": "model-00001-of-00004.safetensors",
802
+ "vision_backbone.fused_featurizer.blocks.19.mlp.fc2.weight": "model-00001-of-00004.safetensors",
803
+ "vision_backbone.fused_featurizer.blocks.19.norm1.bias": "model-00001-of-00004.safetensors",
804
+ "vision_backbone.fused_featurizer.blocks.19.norm1.weight": "model-00001-of-00004.safetensors",
805
+ "vision_backbone.fused_featurizer.blocks.19.norm2.bias": "model-00001-of-00004.safetensors",
806
+ "vision_backbone.fused_featurizer.blocks.19.norm2.weight": "model-00001-of-00004.safetensors",
807
+ "vision_backbone.fused_featurizer.blocks.2.attn.proj.bias": "model-00001-of-00004.safetensors",
808
+ "vision_backbone.fused_featurizer.blocks.2.attn.proj.weight": "model-00001-of-00004.safetensors",
809
+ "vision_backbone.fused_featurizer.blocks.2.attn.qkv.bias": "model-00001-of-00004.safetensors",
810
+ "vision_backbone.fused_featurizer.blocks.2.attn.qkv.weight": "model-00001-of-00004.safetensors",
811
+ "vision_backbone.fused_featurizer.blocks.2.mlp.fc1.bias": "model-00001-of-00004.safetensors",
812
+ "vision_backbone.fused_featurizer.blocks.2.mlp.fc1.weight": "model-00001-of-00004.safetensors",
813
+ "vision_backbone.fused_featurizer.blocks.2.mlp.fc2.bias": "model-00001-of-00004.safetensors",
814
+ "vision_backbone.fused_featurizer.blocks.2.mlp.fc2.weight": "model-00001-of-00004.safetensors",
815
+ "vision_backbone.fused_featurizer.blocks.2.norm1.bias": "model-00001-of-00004.safetensors",
816
+ "vision_backbone.fused_featurizer.blocks.2.norm1.weight": "model-00001-of-00004.safetensors",
817
+ "vision_backbone.fused_featurizer.blocks.2.norm2.bias": "model-00001-of-00004.safetensors",
818
+ "vision_backbone.fused_featurizer.blocks.2.norm2.weight": "model-00001-of-00004.safetensors",
819
+ "vision_backbone.fused_featurizer.blocks.20.attn.proj.bias": "model-00001-of-00004.safetensors",
820
+ "vision_backbone.fused_featurizer.blocks.20.attn.proj.weight": "model-00001-of-00004.safetensors",
821
+ "vision_backbone.fused_featurizer.blocks.20.attn.qkv.bias": "model-00001-of-00004.safetensors",
822
+ "vision_backbone.fused_featurizer.blocks.20.attn.qkv.weight": "model-00001-of-00004.safetensors",
823
+ "vision_backbone.fused_featurizer.blocks.20.mlp.fc1.bias": "model-00001-of-00004.safetensors",
824
+ "vision_backbone.fused_featurizer.blocks.20.mlp.fc1.weight": "model-00001-of-00004.safetensors",
825
+ "vision_backbone.fused_featurizer.blocks.20.mlp.fc2.bias": "model-00001-of-00004.safetensors",
826
+ "vision_backbone.fused_featurizer.blocks.20.mlp.fc2.weight": "model-00001-of-00004.safetensors",
827
+ "vision_backbone.fused_featurizer.blocks.20.norm1.bias": "model-00001-of-00004.safetensors",
828
+ "vision_backbone.fused_featurizer.blocks.20.norm1.weight": "model-00001-of-00004.safetensors",
829
+ "vision_backbone.fused_featurizer.blocks.20.norm2.bias": "model-00001-of-00004.safetensors",
830
+ "vision_backbone.fused_featurizer.blocks.20.norm2.weight": "model-00001-of-00004.safetensors",
831
+ "vision_backbone.fused_featurizer.blocks.21.attn.proj.bias": "model-00001-of-00004.safetensors",
832
+ "vision_backbone.fused_featurizer.blocks.21.attn.proj.weight": "model-00001-of-00004.safetensors",
833
+ "vision_backbone.fused_featurizer.blocks.21.attn.qkv.bias": "model-00001-of-00004.safetensors",
834
+ "vision_backbone.fused_featurizer.blocks.21.attn.qkv.weight": "model-00001-of-00004.safetensors",
835
+ "vision_backbone.fused_featurizer.blocks.21.mlp.fc1.bias": "model-00001-of-00004.safetensors",
836
+ "vision_backbone.fused_featurizer.blocks.21.mlp.fc1.weight": "model-00001-of-00004.safetensors",
837
+ "vision_backbone.fused_featurizer.blocks.21.mlp.fc2.bias": "model-00001-of-00004.safetensors",
838
+ "vision_backbone.fused_featurizer.blocks.21.mlp.fc2.weight": "model-00001-of-00004.safetensors",
839
+ "vision_backbone.fused_featurizer.blocks.21.norm1.bias": "model-00001-of-00004.safetensors",
840
+ "vision_backbone.fused_featurizer.blocks.21.norm1.weight": "model-00001-of-00004.safetensors",
841
+ "vision_backbone.fused_featurizer.blocks.21.norm2.bias": "model-00001-of-00004.safetensors",
842
+ "vision_backbone.fused_featurizer.blocks.21.norm2.weight": "model-00001-of-00004.safetensors",
843
+ "vision_backbone.fused_featurizer.blocks.22.attn.proj.bias": "model-00001-of-00004.safetensors",
844
+ "vision_backbone.fused_featurizer.blocks.22.attn.proj.weight": "model-00001-of-00004.safetensors",
845
+ "vision_backbone.fused_featurizer.blocks.22.attn.qkv.bias": "model-00001-of-00004.safetensors",
846
+ "vision_backbone.fused_featurizer.blocks.22.attn.qkv.weight": "model-00001-of-00004.safetensors",
847
+ "vision_backbone.fused_featurizer.blocks.22.mlp.fc1.bias": "model-00001-of-00004.safetensors",
848
+ "vision_backbone.fused_featurizer.blocks.22.mlp.fc1.weight": "model-00001-of-00004.safetensors",
849
+ "vision_backbone.fused_featurizer.blocks.22.mlp.fc2.bias": "model-00001-of-00004.safetensors",
850
+ "vision_backbone.fused_featurizer.blocks.22.mlp.fc2.weight": "model-00001-of-00004.safetensors",
851
+ "vision_backbone.fused_featurizer.blocks.22.norm1.bias": "model-00001-of-00004.safetensors",
852
+ "vision_backbone.fused_featurizer.blocks.22.norm1.weight": "model-00001-of-00004.safetensors",
853
+ "vision_backbone.fused_featurizer.blocks.22.norm2.bias": "model-00001-of-00004.safetensors",
854
+ "vision_backbone.fused_featurizer.blocks.22.norm2.weight": "model-00001-of-00004.safetensors",
855
+ "vision_backbone.fused_featurizer.blocks.23.attn.proj.bias": "model-00001-of-00004.safetensors",
856
+ "vision_backbone.fused_featurizer.blocks.23.attn.proj.weight": "model-00001-of-00004.safetensors",
857
+ "vision_backbone.fused_featurizer.blocks.23.attn.qkv.bias": "model-00001-of-00004.safetensors",
858
+ "vision_backbone.fused_featurizer.blocks.23.attn.qkv.weight": "model-00001-of-00004.safetensors",
859
+ "vision_backbone.fused_featurizer.blocks.23.mlp.fc1.bias": "model-00001-of-00004.safetensors",
860
+ "vision_backbone.fused_featurizer.blocks.23.mlp.fc1.weight": "model-00001-of-00004.safetensors",
861
+ "vision_backbone.fused_featurizer.blocks.23.mlp.fc2.bias": "model-00001-of-00004.safetensors",
862
+ "vision_backbone.fused_featurizer.blocks.23.mlp.fc2.weight": "model-00001-of-00004.safetensors",
863
+ "vision_backbone.fused_featurizer.blocks.23.norm1.bias": "model-00001-of-00004.safetensors",
864
+ "vision_backbone.fused_featurizer.blocks.23.norm1.weight": "model-00001-of-00004.safetensors",
865
+ "vision_backbone.fused_featurizer.blocks.23.norm2.bias": "model-00001-of-00004.safetensors",
866
+ "vision_backbone.fused_featurizer.blocks.23.norm2.weight": "model-00001-of-00004.safetensors",
867
+ "vision_backbone.fused_featurizer.blocks.24.attn.proj.bias": "model-00001-of-00004.safetensors",
868
+ "vision_backbone.fused_featurizer.blocks.24.attn.proj.weight": "model-00001-of-00004.safetensors",
869
+ "vision_backbone.fused_featurizer.blocks.24.attn.qkv.bias": "model-00001-of-00004.safetensors",
870
+ "vision_backbone.fused_featurizer.blocks.24.attn.qkv.weight": "model-00001-of-00004.safetensors",
871
+ "vision_backbone.fused_featurizer.blocks.24.mlp.fc1.bias": "model-00001-of-00004.safetensors",
872
+ "vision_backbone.fused_featurizer.blocks.24.mlp.fc1.weight": "model-00001-of-00004.safetensors",
873
+ "vision_backbone.fused_featurizer.blocks.24.mlp.fc2.bias": "model-00001-of-00004.safetensors",
874
+ "vision_backbone.fused_featurizer.blocks.24.mlp.fc2.weight": "model-00001-of-00004.safetensors",
875
+ "vision_backbone.fused_featurizer.blocks.24.norm1.bias": "model-00001-of-00004.safetensors",
876
+ "vision_backbone.fused_featurizer.blocks.24.norm1.weight": "model-00001-of-00004.safetensors",
877
+ "vision_backbone.fused_featurizer.blocks.24.norm2.bias": "model-00001-of-00004.safetensors",
878
+ "vision_backbone.fused_featurizer.blocks.24.norm2.weight": "model-00001-of-00004.safetensors",
879
+ "vision_backbone.fused_featurizer.blocks.25.attn.proj.bias": "model-00001-of-00004.safetensors",
880
+ "vision_backbone.fused_featurizer.blocks.25.attn.proj.weight": "model-00001-of-00004.safetensors",
881
+ "vision_backbone.fused_featurizer.blocks.25.attn.qkv.bias": "model-00001-of-00004.safetensors",
882
+ "vision_backbone.fused_featurizer.blocks.25.attn.qkv.weight": "model-00001-of-00004.safetensors",
883
+ "vision_backbone.fused_featurizer.blocks.25.mlp.fc1.bias": "model-00001-of-00004.safetensors",
884
+ "vision_backbone.fused_featurizer.blocks.25.mlp.fc1.weight": "model-00001-of-00004.safetensors",
885
+ "vision_backbone.fused_featurizer.blocks.25.mlp.fc2.bias": "model-00001-of-00004.safetensors",
886
+ "vision_backbone.fused_featurizer.blocks.25.mlp.fc2.weight": "model-00001-of-00004.safetensors",
887
+ "vision_backbone.fused_featurizer.blocks.25.norm1.bias": "model-00001-of-00004.safetensors",
888
+ "vision_backbone.fused_featurizer.blocks.25.norm1.weight": "model-00001-of-00004.safetensors",
889
+ "vision_backbone.fused_featurizer.blocks.25.norm2.bias": "model-00001-of-00004.safetensors",
890
+ "vision_backbone.fused_featurizer.blocks.25.norm2.weight": "model-00001-of-00004.safetensors",
891
+ "vision_backbone.fused_featurizer.blocks.26.attn.proj.bias": "model-00001-of-00004.safetensors",
892
+ "vision_backbone.fused_featurizer.blocks.26.attn.proj.weight": "model-00001-of-00004.safetensors",
893
+ "vision_backbone.fused_featurizer.blocks.26.attn.qkv.bias": "model-00001-of-00004.safetensors",
894
+ "vision_backbone.fused_featurizer.blocks.26.attn.qkv.weight": "model-00001-of-00004.safetensors",
895
+ "vision_backbone.fused_featurizer.blocks.26.mlp.fc1.bias": "model-00001-of-00004.safetensors",
896
+ "vision_backbone.fused_featurizer.blocks.26.mlp.fc1.weight": "model-00001-of-00004.safetensors",
897
+ "vision_backbone.fused_featurizer.blocks.26.mlp.fc2.bias": "model-00001-of-00004.safetensors",
898
+ "vision_backbone.fused_featurizer.blocks.26.mlp.fc2.weight": "model-00001-of-00004.safetensors",
899
+ "vision_backbone.fused_featurizer.blocks.26.norm1.bias": "model-00001-of-00004.safetensors",
900
+ "vision_backbone.fused_featurizer.blocks.26.norm1.weight": "model-00001-of-00004.safetensors",
901
+ "vision_backbone.fused_featurizer.blocks.26.norm2.bias": "model-00001-of-00004.safetensors",
902
+ "vision_backbone.fused_featurizer.blocks.26.norm2.weight": "model-00001-of-00004.safetensors",
903
+ "vision_backbone.fused_featurizer.blocks.3.attn.proj.bias": "model-00001-of-00004.safetensors",
904
+ "vision_backbone.fused_featurizer.blocks.3.attn.proj.weight": "model-00001-of-00004.safetensors",
905
+ "vision_backbone.fused_featurizer.blocks.3.attn.qkv.bias": "model-00001-of-00004.safetensors",
906
+ "vision_backbone.fused_featurizer.blocks.3.attn.qkv.weight": "model-00001-of-00004.safetensors",
907
+ "vision_backbone.fused_featurizer.blocks.3.mlp.fc1.bias": "model-00001-of-00004.safetensors",
908
+ "vision_backbone.fused_featurizer.blocks.3.mlp.fc1.weight": "model-00001-of-00004.safetensors",
909
+ "vision_backbone.fused_featurizer.blocks.3.mlp.fc2.bias": "model-00001-of-00004.safetensors",
910
+ "vision_backbone.fused_featurizer.blocks.3.mlp.fc2.weight": "model-00001-of-00004.safetensors",
911
+ "vision_backbone.fused_featurizer.blocks.3.norm1.bias": "model-00001-of-00004.safetensors",
912
+ "vision_backbone.fused_featurizer.blocks.3.norm1.weight": "model-00001-of-00004.safetensors",
913
+ "vision_backbone.fused_featurizer.blocks.3.norm2.bias": "model-00001-of-00004.safetensors",
914
+ "vision_backbone.fused_featurizer.blocks.3.norm2.weight": "model-00001-of-00004.safetensors",
915
+ "vision_backbone.fused_featurizer.blocks.4.attn.proj.bias": "model-00001-of-00004.safetensors",
916
+ "vision_backbone.fused_featurizer.blocks.4.attn.proj.weight": "model-00001-of-00004.safetensors",
917
+ "vision_backbone.fused_featurizer.blocks.4.attn.qkv.bias": "model-00001-of-00004.safetensors",
918
+ "vision_backbone.fused_featurizer.blocks.4.attn.qkv.weight": "model-00001-of-00004.safetensors",
919
+ "vision_backbone.fused_featurizer.blocks.4.mlp.fc1.bias": "model-00001-of-00004.safetensors",
920
+ "vision_backbone.fused_featurizer.blocks.4.mlp.fc1.weight": "model-00001-of-00004.safetensors",
921
+ "vision_backbone.fused_featurizer.blocks.4.mlp.fc2.bias": "model-00001-of-00004.safetensors",
922
+ "vision_backbone.fused_featurizer.blocks.4.mlp.fc2.weight": "model-00001-of-00004.safetensors",
923
+ "vision_backbone.fused_featurizer.blocks.4.norm1.bias": "model-00001-of-00004.safetensors",
924
+ "vision_backbone.fused_featurizer.blocks.4.norm1.weight": "model-00001-of-00004.safetensors",
925
+ "vision_backbone.fused_featurizer.blocks.4.norm2.bias": "model-00001-of-00004.safetensors",
926
+ "vision_backbone.fused_featurizer.blocks.4.norm2.weight": "model-00001-of-00004.safetensors",
927
+ "vision_backbone.fused_featurizer.blocks.5.attn.proj.bias": "model-00001-of-00004.safetensors",
928
+ "vision_backbone.fused_featurizer.blocks.5.attn.proj.weight": "model-00001-of-00004.safetensors",
929
+ "vision_backbone.fused_featurizer.blocks.5.attn.qkv.bias": "model-00001-of-00004.safetensors",
930
+ "vision_backbone.fused_featurizer.blocks.5.attn.qkv.weight": "model-00001-of-00004.safetensors",
931
+ "vision_backbone.fused_featurizer.blocks.5.mlp.fc1.bias": "model-00001-of-00004.safetensors",
932
+ "vision_backbone.fused_featurizer.blocks.5.mlp.fc1.weight": "model-00001-of-00004.safetensors",
933
+ "vision_backbone.fused_featurizer.blocks.5.mlp.fc2.bias": "model-00001-of-00004.safetensors",
934
+ "vision_backbone.fused_featurizer.blocks.5.mlp.fc2.weight": "model-00001-of-00004.safetensors",
935
+ "vision_backbone.fused_featurizer.blocks.5.norm1.bias": "model-00001-of-00004.safetensors",
936
+ "vision_backbone.fused_featurizer.blocks.5.norm1.weight": "model-00001-of-00004.safetensors",
937
+ "vision_backbone.fused_featurizer.blocks.5.norm2.bias": "model-00001-of-00004.safetensors",
938
+ "vision_backbone.fused_featurizer.blocks.5.norm2.weight": "model-00001-of-00004.safetensors",
939
+ "vision_backbone.fused_featurizer.blocks.6.attn.proj.bias": "model-00001-of-00004.safetensors",
940
+ "vision_backbone.fused_featurizer.blocks.6.attn.proj.weight": "model-00001-of-00004.safetensors",
941
+ "vision_backbone.fused_featurizer.blocks.6.attn.qkv.bias": "model-00001-of-00004.safetensors",
942
+ "vision_backbone.fused_featurizer.blocks.6.attn.qkv.weight": "model-00001-of-00004.safetensors",
943
+ "vision_backbone.fused_featurizer.blocks.6.mlp.fc1.bias": "model-00001-of-00004.safetensors",
944
+ "vision_backbone.fused_featurizer.blocks.6.mlp.fc1.weight": "model-00001-of-00004.safetensors",
945
+ "vision_backbone.fused_featurizer.blocks.6.mlp.fc2.bias": "model-00001-of-00004.safetensors",
946
+ "vision_backbone.fused_featurizer.blocks.6.mlp.fc2.weight": "model-00001-of-00004.safetensors",
947
+ "vision_backbone.fused_featurizer.blocks.6.norm1.bias": "model-00001-of-00004.safetensors",
948
+ "vision_backbone.fused_featurizer.blocks.6.norm1.weight": "model-00001-of-00004.safetensors",
949
+ "vision_backbone.fused_featurizer.blocks.6.norm2.bias": "model-00001-of-00004.safetensors",
950
+ "vision_backbone.fused_featurizer.blocks.6.norm2.weight": "model-00001-of-00004.safetensors",
951
+ "vision_backbone.fused_featurizer.blocks.7.attn.proj.bias": "model-00001-of-00004.safetensors",
952
+ "vision_backbone.fused_featurizer.blocks.7.attn.proj.weight": "model-00001-of-00004.safetensors",
953
+ "vision_backbone.fused_featurizer.blocks.7.attn.qkv.bias": "model-00001-of-00004.safetensors",
954
+ "vision_backbone.fused_featurizer.blocks.7.attn.qkv.weight": "model-00001-of-00004.safetensors",
955
+ "vision_backbone.fused_featurizer.blocks.7.mlp.fc1.bias": "model-00001-of-00004.safetensors",
956
+ "vision_backbone.fused_featurizer.blocks.7.mlp.fc1.weight": "model-00001-of-00004.safetensors",
957
+ "vision_backbone.fused_featurizer.blocks.7.mlp.fc2.bias": "model-00001-of-00004.safetensors",
958
+ "vision_backbone.fused_featurizer.blocks.7.mlp.fc2.weight": "model-00001-of-00004.safetensors",
959
+ "vision_backbone.fused_featurizer.blocks.7.norm1.bias": "model-00001-of-00004.safetensors",
960
+ "vision_backbone.fused_featurizer.blocks.7.norm1.weight": "model-00001-of-00004.safetensors",
961
+ "vision_backbone.fused_featurizer.blocks.7.norm2.bias": "model-00001-of-00004.safetensors",
962
+ "vision_backbone.fused_featurizer.blocks.7.norm2.weight": "model-00001-of-00004.safetensors",
963
+ "vision_backbone.fused_featurizer.blocks.8.attn.proj.bias": "model-00001-of-00004.safetensors",
964
+ "vision_backbone.fused_featurizer.blocks.8.attn.proj.weight": "model-00001-of-00004.safetensors",
965
+ "vision_backbone.fused_featurizer.blocks.8.attn.qkv.bias": "model-00001-of-00004.safetensors",
966
+ "vision_backbone.fused_featurizer.blocks.8.attn.qkv.weight": "model-00001-of-00004.safetensors",
967
+ "vision_backbone.fused_featurizer.blocks.8.mlp.fc1.bias": "model-00001-of-00004.safetensors",
968
+ "vision_backbone.fused_featurizer.blocks.8.mlp.fc1.weight": "model-00001-of-00004.safetensors",
969
+ "vision_backbone.fused_featurizer.blocks.8.mlp.fc2.bias": "model-00001-of-00004.safetensors",
970
+ "vision_backbone.fused_featurizer.blocks.8.mlp.fc2.weight": "model-00001-of-00004.safetensors",
971
+ "vision_backbone.fused_featurizer.blocks.8.norm1.bias": "model-00001-of-00004.safetensors",
972
+ "vision_backbone.fused_featurizer.blocks.8.norm1.weight": "model-00001-of-00004.safetensors",
973
+ "vision_backbone.fused_featurizer.blocks.8.norm2.bias": "model-00001-of-00004.safetensors",
974
+ "vision_backbone.fused_featurizer.blocks.8.norm2.weight": "model-00001-of-00004.safetensors",
975
+ "vision_backbone.fused_featurizer.blocks.9.attn.proj.bias": "model-00001-of-00004.safetensors",
976
+ "vision_backbone.fused_featurizer.blocks.9.attn.proj.weight": "model-00001-of-00004.safetensors",
977
+ "vision_backbone.fused_featurizer.blocks.9.attn.qkv.bias": "model-00001-of-00004.safetensors",
978
+ "vision_backbone.fused_featurizer.blocks.9.attn.qkv.weight": "model-00001-of-00004.safetensors",
979
+ "vision_backbone.fused_featurizer.blocks.9.mlp.fc1.bias": "model-00001-of-00004.safetensors",
980
+ "vision_backbone.fused_featurizer.blocks.9.mlp.fc1.weight": "model-00001-of-00004.safetensors",
981
+ "vision_backbone.fused_featurizer.blocks.9.mlp.fc2.bias": "model-00001-of-00004.safetensors",
982
+ "vision_backbone.fused_featurizer.blocks.9.mlp.fc2.weight": "model-00001-of-00004.safetensors",
983
+ "vision_backbone.fused_featurizer.blocks.9.norm1.bias": "model-00001-of-00004.safetensors",
984
+ "vision_backbone.fused_featurizer.blocks.9.norm1.weight": "model-00001-of-00004.safetensors",
985
+ "vision_backbone.fused_featurizer.blocks.9.norm2.bias": "model-00001-of-00004.safetensors",
986
+ "vision_backbone.fused_featurizer.blocks.9.norm2.weight": "model-00001-of-00004.safetensors",
987
+ "vision_backbone.fused_featurizer.norm.bias": "model-00001-of-00004.safetensors",
988
+ "vision_backbone.fused_featurizer.norm.weight": "model-00001-of-00004.safetensors",
989
+ "vision_backbone.fused_featurizer.patch_embed.proj.bias": "model-00001-of-00004.safetensors",
990
+ "vision_backbone.fused_featurizer.patch_embed.proj.weight": "model-00001-of-00004.safetensors",
991
+ "vision_backbone.fused_featurizer.pos_embed": "model-00001-of-00004.safetensors"
992
+ }
993
+ }
modeling_prismatic.py ADDED
@@ -0,0 +1,1131 @@
1
+ """
2
+ modeling_prismatic.py
3
+
4
+ Core HuggingFace-style PrismaticPreTrainedModel and PrismaticForConditionalGeneration class definitions.
5
+ Inherits from the default `transformers.PretrainedModel`. Meant to be standalone and self-contained,
6
+ but exactly replicate the logic in `prismatic.models.vlms.prismatic.py`.
7
+ """
8
+
9
+ import logging
10
+ from dataclasses import dataclass
11
+ from functools import partial
12
+ from typing import Any, Callable, ClassVar, Dict, List, Optional, Tuple, Union
13
+
14
+ import numpy as np
15
+ import timm
16
+ import tokenizers
17
+ import torch
18
+ import torch.nn as nn
19
+ import transformers
20
+ from timm.models.vision_transformer import LayerScale
21
+ from transformers import AutoModelForCausalLM, PretrainedConfig, PreTrainedModel
22
+ from transformers.modeling_outputs import ModelOutput
23
+
24
+ from prismatic.training.train_utils import (
25
+ get_current_action_mask,
26
+ get_next_actions_mask,
27
+ )
28
+ from prismatic.vla.constants import (
29
+ ACTION_DIM,
30
+ ACTION_PROPRIO_NORMALIZATION_TYPE,
31
+ ACTION_TOKEN_BEGIN_IDX,
32
+ IGNORE_INDEX,
33
+ NUM_ACTIONS_CHUNK,
34
+ STOP_INDEX,
35
+ NormalizationType,
36
+ )
37
+
38
+ from .configuration_prismatic import OpenVLAConfig, PrismaticConfig
39
+
40
+ # Set up logger
41
+ logger = logging.getLogger(__name__)
42
+
43
+
44
+ # === Utility Functions for Monkey-Patching ===
45
+ def unpack_tuple(fn: Callable[[Any], Tuple[Any]]) -> Callable[[Any], Any]:
46
+ def wrapper(*args: Any, **kwargs: Any) -> Any:
47
+ result = fn(*args, **kwargs)
48
+ return result[0] if isinstance(result, tuple) else result
49
+
50
+ return wrapper
51
+
52
+
53
+ # HF Transformers overwrites parameters with names containing `gamma`; we're going to patch VisionBackbone.LayerScale.
54
+ # =>> TIMM :: https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py#L109
55
+ # =>> Transformers :: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3960
56
+ def _ls_new_forward(self, x: torch.Tensor) -> torch.Tensor:
57
+ return x.mul_(self.scale_factor) if self.inplace else x * self.scale_factor
58
+
59
+
60
+ def ls_apply_patch(ls_module: LayerScale):
61
+ ls_module.scale_factor = nn.Parameter(ls_module.gamma.clone())
62
+ ls_module.forward = _ls_new_forward.__get__(ls_module, LayerScale)
63
+ del ls_module.gamma
64
+
65
+
66
+ class ProprioProjector(nn.Module):
67
+ """
68
+ Projects proprio state inputs into the LLM's embedding space.
69
+ """
70
+ def __init__(self, llm_dim: int, proprio_dim: int) -> None:
71
+ super().__init__()
72
+ self.llm_dim = llm_dim
73
+ self.proprio_dim = proprio_dim
74
+
75
+ self.fc1 = nn.Linear(self.proprio_dim, self.llm_dim, bias=True)
76
+ self.fc2 = nn.Linear(self.llm_dim, self.llm_dim, bias=True)
77
+ self.act_fn1 = nn.GELU()
78
+
79
+ def forward(self, proprio: torch.Tensor = None) -> torch.Tensor:
80
+ # proprio: (bsz, proprio_dim)
81
+ projected_features = self.fc1(proprio)
82
+ projected_features = self.act_fn1(projected_features)
83
+ projected_features = self.fc2(projected_features)
84
+ return projected_features
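+
+ # Illustrative note (not part of the original source): this is a two-layer GELU MLP that maps the
+ # proprio state (bsz, proprio_dim) to a single token-sized embedding (bsz, llm_dim); the caller
+ # later unsqueezes it to (bsz, 1, llm_dim) and appends it after the vision patch tokens.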
85
+ # === Prismatic Vision Backbone (nn.Module) Definitions (w/ Fused Backbone Support) ===
86
+ class PrismaticVisionBackbone(nn.Module):
87
+ """
88
+ Vision backbone for Prismatic models that handles image feature extraction.
89
+
90
+ Supports both single backbone (e.g., SigLIP) and fused backbone (e.g., SigLIP + DINOv2) configurations.
91
+ For fused backbones, features from both models are concatenated along the feature dimension.
92
+ """
93
+
94
+ def __init__(
95
+ self,
96
+ use_fused_vision_backbone: bool,
97
+ image_sizes: List[int],
98
+ timm_model_ids: List[str],
99
+ timm_override_act_layers: List[Optional[str]],
100
+ ) -> None:
101
+ """
102
+ Initialize the vision backbone.
103
+
104
+ Args:
105
+ use_fused_vision_backbone: Whether to use two backbones and fuse their features
106
+ image_sizes: List of image sizes for each backbone
107
+ timm_model_ids: List of TIMM model IDs to use for each backbone
108
+ timm_override_act_layers: List of activation layer overrides for each backbone
109
+ """
110
+ super().__init__()
111
+ self.use_fused_vision_backbone = use_fused_vision_backbone
112
+ self.num_images_in_input = 1 # Default value, can be overridden later
113
+
114
+ # Validate number of (fused) vision backbones
115
+ if len(timm_model_ids) > 2:
116
+ raise ValueError("Prismatic models only support up to 2 (fused) vision backbones!")
117
+
118
+ # Create primary featurizer
119
+ self.featurizer = self._create_featurizer(
120
+ model_id=timm_model_ids[0], img_size=image_sizes[0], act_layer=timm_override_act_layers[0]
121
+ )
122
+ self.embed_dim = self.featurizer.embed_dim
123
+
124
+ # Create secondary featurizer if using fused backbone
125
+ if self.use_fused_vision_backbone:
126
+ self.fused_featurizer = self._create_featurizer(
127
+ model_id=timm_model_ids[1], img_size=image_sizes[1], act_layer=timm_override_act_layers[1]
128
+ )
129
+ self.embed_dim += self.fused_featurizer.embed_dim
130
+
131
+ # Patch LayerScale modules for HF compatibility
132
+ self._patch_layer_scales()
133
+
134
+ def _create_featurizer(self, model_id: str, img_size: int, act_layer: Optional[str]) -> nn.Module:
135
+ """
136
+ Create a TIMM-based featurizer model with appropriate configurations.
137
+
138
+ Args:
139
+ model_id: The TIMM model ID to load
140
+ img_size: Input image size for the model
141
+ act_layer: Override for the activation layer type
142
+
143
+ Returns:
144
+ A configured featurizer model
145
+ """
146
+ featurizer = timm.create_model(
147
+ model_id,
148
+ pretrained=False,
149
+ num_classes=0,
150
+ img_size=img_size,
151
+ act_layer=act_layer,
152
+ )
153
+
154
+ # Monkey-patch the forward function to extract the second-to-last layer features
155
+ num_blocks = len(featurizer.blocks)
156
+ featurizer.forward = unpack_tuple(partial(featurizer.get_intermediate_layers, n={num_blocks - 2}))
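+
+ # Illustrative note (assumption about TIMM 0.9.x behavior): `get_intermediate_layers` with a single
+ # index returns a 1-tuple of that block's patch tokens, so after `unpack_tuple` the patched
+ # `featurizer(img)` yields a tensor of roughly shape (bsz, num_patches, embed_dim) taken from the
+ # second-to-last transformer block.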
157
+
158
+ return featurizer
159
+
160
+ def _patch_layer_scales(self) -> None:
161
+ """
162
+ Patch all LayerScale modules to be compatible with HF's parameter naming.
163
+
164
+ HF Transformers overwrites parameters with names containing 'gamma',
165
+ so we need to rename and modify the forward method.
166
+ """
167
+ # Patch primary featurizer
168
+ for module in self.featurizer.modules():
169
+ if isinstance(module, LayerScale):
170
+ ls_apply_patch(module)
171
+
172
+ # Patch secondary featurizer if it exists
173
+ if self.use_fused_vision_backbone:
174
+ for module in self.fused_featurizer.modules():
175
+ if isinstance(module, LayerScale):
176
+ ls_apply_patch(module)
177
+
178
+ def get_num_patches(self) -> int:
179
+ """
180
+ Returns the number of vision patches output by the vision backbone.
181
+
182
+ Returns:
183
+ Number of patches per image
184
+ """
185
+ return self.featurizer.patch_embed.num_patches
186
+
187
+ def get_num_images_in_input(self) -> int:
188
+ """
189
+ Returns the number of input images for the vision backbone.
190
+
191
+ Returns:
192
+ Number of images expected in the input
193
+ """
194
+ return self.num_images_in_input
195
+
196
+ def set_num_images_in_input(self, num_images_in_input: int) -> None:
197
+ """
198
+ Sets the number of input images for the vision backbone.
199
+
200
+ Args:
201
+ num_images_in_input: Number of images to expect in the input
202
+ """
203
+ self.num_images_in_input = num_images_in_input
204
+
205
+ def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
206
+ """
207
+ Implements the forward pass for the vision backbone.
208
+
209
+ If `self.use_fused_vision_backbone == True`, uses both SigLIP and DINOv2 transformers to extract visual features
210
+ (otherwise uses SigLIP only). Allows multi-image inputs (but only for fused vision backbone).
211
+
212
+ Args:
213
+ pixel_values (torch.Tensor): Pixels for input image(s), (B, C, H, W).
214
+ """
215
+ if self.num_images_in_input == 1:
216
+ if not self.use_fused_vision_backbone:
217
+ return self.featurizer(pixel_values)
218
+
219
+ # Split `pixel_values :: [bsz, 2 * 3, resolution, resolution]` =>> featurize =>> channel stack
220
+ img, img_fused = torch.split(pixel_values, [3, 3], dim=1)
221
+ patches, patches_fused = self.featurizer(img), self.fused_featurizer(img_fused)
222
+
223
+ return torch.cat([patches, patches_fused], dim=2)
224
+
225
+ else:
226
+ assert self.use_fused_vision_backbone, "Multi-image inputs require using fused backbone!"
227
+
228
+ # Split `pixel_values` into individual images (each with 6 channels: 3 for SigLIP + 3 for DINOv2)
229
+ images = torch.split(pixel_values, [6] * self.num_images_in_input, dim=1)
230
+
231
+ # Process each image and collect patches
232
+ all_patches = []
233
+ for img in images:
234
+ # Split each image further into two stacks of channels (each with 3 channels)
235
+ img_regular, img_fused = torch.split(img, [3, 3], dim=1)
236
+
237
+ # Get patches from both SigLIP and DINOv2 vision transformers
238
+ patches = self.featurizer(img_regular)
239
+ patches_fused = self.fused_featurizer(img_fused)
240
+
241
+ # Concatenate SigLIP and DINOv2 patches along the hidden dimension
242
+ combined_patches = torch.cat([patches, patches_fused], dim=2)
243
+ all_patches.append(combined_patches)
244
+
245
+ # Concatenate all patches along the patch dimension
246
+ return torch.cat(all_patches, dim=1)
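+
+ # Illustrative shapes (assuming 224px inputs with 14px patches, i.e. 256 patches per image per backbone):
+ #   fused backbone, one image:  pixel_values (B, 6, 224, 224)  -> patches (B, 256, featurizer_dim + fused_featurizer_dim)
+ #   fused backbone, two images: pixel_values (B, 12, 224, 224) -> patches (B, 512, featurizer_dim + fused_featurizer_dim)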
247
+
248
+
249
+ # === Prismatic Projector (nn.Module) Definitions ===
250
+ class PrismaticProjector(nn.Module):
251
+ def __init__(self, use_fused_vision_backbone: bool, vision_dim: int, llm_dim: int) -> None:
252
+ super().__init__()
253
+ self.use_fused_vision_backbone = use_fused_vision_backbone
254
+ self.vision_dim, self.llm_dim = vision_dim, llm_dim
255
+
256
+ # Switch on `use_fused_vision_backbone` =>> use slightly different MLPs and projection factors!
257
+ if not self.use_fused_vision_backbone:
258
+ self.fc1 = nn.Linear(self.vision_dim, self.llm_dim, bias=True)
259
+ self.fc2 = nn.Linear(self.llm_dim, self.llm_dim, bias=True)
260
+ self.act_fn1 = nn.GELU()
261
+ else:
262
+ initial_projection_dim = 4 * vision_dim
263
+ self.fc1 = nn.Linear(self.vision_dim, initial_projection_dim, bias=True)
264
+ self.fc2 = nn.Linear(initial_projection_dim, self.llm_dim, bias=True)
265
+ self.fc3 = nn.Linear(self.llm_dim, self.llm_dim, bias=True)
266
+ self.act_fn1 = nn.GELU()
267
+ self.act_fn2 = nn.GELU()
268
+
269
+ def forward(self, img_patches: torch.Tensor) -> torch.Tensor:
270
+ if not self.use_fused_vision_backbone:
271
+ projected_features = self.fc1(img_patches)
272
+ projected_features = self.act_fn1(projected_features)
273
+ projected_features = self.fc2(projected_features)
274
+ else:
275
+ projected_features = self.fc1(img_patches)
276
+ projected_features = self.act_fn1(projected_features)
277
+ projected_features = self.fc2(projected_features)
278
+ projected_features = self.act_fn2(projected_features)
279
+ projected_features = self.fc3(projected_features)
280
+
281
+ return projected_features
282
+
283
+
284
+ # === Main HF Class Definitions ===
285
+ @dataclass
286
+ class PrismaticCausalLMOutputWithPast(ModelOutput):
287
+ """Base class for Prismatic causal (visually-conditioned) language model outputs; also exposes visual features."""
288
+
289
+ loss: Optional[torch.FloatTensor] = None
290
+ logits: torch.FloatTensor = None
291
+ past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
292
+ hidden_states: Optional[Tuple[torch.FloatTensor, ...]] = None
293
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
294
+
295
+ # Additions for VLMs
296
+ projector_features: Optional[torch.FloatTensor] = None
297
+
298
+
299
+ class PrismaticPreTrainedModel(PreTrainedModel):
300
+ config_class: PretrainedConfig = PrismaticConfig
301
+ base_model_prefix: str = "model"
302
+ supports_gradient_checkpointing: bool = True
303
+
304
+ _no_split_modules: ClassVar[List[str]] = ["PrismaticProjector"]
305
+ _skip_keys_device_placement: str = "past_key_values"
306
+ _supports_flash_attn_2: bool = True
307
+
308
+ def _init_weights(self, module: nn.Module) -> None:
309
+ # Important :: this HF ported version is *not* meant for training from scratch; only inference and fine-tuning!
310
+ # => As such, this init_weights code is not correct; if training VLMs from scratch, use the main codebase at
311
+ # https://github.com/TRI-ML/prismatic-vlms
312
+ std = (
313
+ self.config.initializer_range
314
+ if hasattr(self.config, "initializer_range")
315
+ else self.config.text_config.initializer_range
316
+ )
317
+
318
+ if hasattr(module, "class_embedding"):
319
+ module.class_embedding.data.normal_(mean=0.0, std=std)
320
+
321
+ if isinstance(module, (nn.Linear, nn.Conv2d)):
322
+ module.weight.data.normal_(mean=0.0, std=std)
323
+ if module.bias is not None:
324
+ module.bias.data.zero_()
325
+ elif isinstance(module, nn.Embedding):
326
+ module.weight.data.normal_(mean=0.0, std=std)
327
+ if module.padding_idx is not None:
328
+ module.weight.data[module.padding_idx].zero_()
329
+
330
+ @property
331
+ def _supports_sdpa(self) -> bool:
332
+ """Check LLM supports SDPA Attention"""
333
+ return self.language_model._supports_sdpa
334
+
335
+
336
+ class PrismaticForConditionalGeneration(PrismaticPreTrainedModel):
337
+ def __init__(self, config: PrismaticConfig) -> None:
338
+ super().__init__(config)
339
+
340
+ # [Validation] Lightweight Validate on `config` Fields + Dependency Versions
341
+ if config.use_fused_vision_backbone is None:
342
+ raise ValueError("Missing config field `use_fused_vision_backbone`")
343
+
344
+ if timm.__version__ not in {"0.9.10", "0.9.11", "0.9.12", "0.9.16"}:
345
+ raise NotImplementedError(
346
+ "TIMM Version must be >= 0.9.10 and < 1.0.0 (breaking); please raise a GitHub Issue "
347
+ "if you urgently need support for latest TIMM versions."
348
+ )
349
+
350
+ if (transformers.__version__ != "4.40.1") or (tokenizers.__version__ != "0.19.1"):
351
+ logger.warning(
352
+ f"Expected `transformers==4.40.1` and `tokenizers==0.19.1` but got "
353
+ f"`transformers=={transformers.__version__}` and `tokenizers=={tokenizers.__version__}`; "
354
+ f"there might be inference-time regressions due to dependency changes. If in doubt, please "
355
+ f"use the above versions."
356
+ )
357
+
358
+ # Instantiate PrismaticVisionBackbone (w/ Potential Fused Backbone)
359
+ self.vision_backbone = PrismaticVisionBackbone(
360
+ config.use_fused_vision_backbone, config.image_sizes, config.timm_model_ids, config.timm_override_act_layers
361
+ )
362
+
363
+ # Create Multimodal Projector
364
+ self.projector = PrismaticProjector(
365
+ config.use_fused_vision_backbone,
366
+ vision_dim=self.vision_backbone.embed_dim,
367
+ llm_dim=config.text_config.hidden_size,
368
+ )
369
+ # self.proprio_projector = None
370
+ # assert config.use_proprio
371
+ # if config.use_proprio:
372
+ # self.proprio_projector = ProprioProjector(
373
+ # llm_dim=config.text_config.hidden_size,
374
+ # proprio_dim=config.proprio_dim
375
+ # )
376
+ # print("Add self.proprio_projector in OPENVLA", flush=True)
377
+
378
+ self.proprio_projector = None  # Instantiated below only if `use_proprio` is enabled in the config
379
+ if getattr(config, 'use_proprio', False):
380
+ self.proprio_projector = ProprioProjector(
381
+ llm_dim=config.text_config.hidden_size,
382
+ proprio_dim=config.proprio_dim
383
+ )
384
+ print("Add self.proprio_projector in OPENVLA", flush=True)
385
+
386
+ # Instantiate LLM Backbone
387
+ self.language_model = AutoModelForCausalLM.from_config(
388
+ config.text_config, attn_implementation=config._attn_implementation
389
+ )
390
+ self.vocab_size = config.text_config.vocab_size
391
+ self.pad_token_id = config.pad_token_id
392
+ self.llm_dim = config.text_config.hidden_size
393
+
394
+ # HF Boilerplate =>> initializes weights via `_init_weights()` and sets gradient checkpointing
395
+ self.post_init()
396
+
397
+ # === `PreTrainedModel` Boilerplate ===
398
+ def get_input_embeddings(self) -> nn.Module:
399
+ return self.language_model.get_input_embeddings()
400
+
401
+ def set_input_embeddings(self, value: nn.Module) -> None:
402
+ self.language_model.set_input_embeddings(value)
403
+
404
+ def get_output_embeddings(self) -> nn.Module:
405
+ return self.language_model.get_output_embeddings()
406
+
407
+ def set_output_embeddings(self, new_embeddings: nn.Module) -> None:
408
+ self.language_model.set_output_embeddings(new_embeddings)
409
+
410
+ def get_decoder(self) -> nn.Module:
411
+ return self.language_model.get_decoder()
412
+
413
+ def set_decoder(self, decoder: nn.Module) -> None:
414
+ self.language_model.set_decoder(decoder)
415
+
416
+ def tie_weights(self) -> None:
417
+ self.language_model.tie_weights() # Note: `Llama-2` and `Mistral` don't tie weights (no-op)
418
+
419
+ def resize_token_embeddings(
420
+ self, new_num_tokens: Optional[int] = None, pad_to_multiple_of: Optional[int] = None
421
+ ) -> nn.Embedding:
422
+ updated_embeddings = self.language_model.resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
423
+
424
+ # Update config/instance variables
425
+ self.config.text_config.vocab_size = updated_embeddings.num_embeddings
426
+ self.vocab_size = updated_embeddings.num_embeddings
427
+
428
+ return updated_embeddings
429
+
430
+ def _replace_input_embeddings(self, input_embeddings, all_actions_mask, noisy_action_features):
431
+ """
432
+ Replace embeddings in input_embeddings at positions where all_actions_mask is True
433
+ with embeddings from noisy_action_features, using vectorized operations.
434
+
435
+ Args:
436
+ input_embeddings: Tensor of shape (B, S, D)
437
+ all_actions_mask: Boolean tensor of shape (B, S)
438
+ noisy_action_features: Tensor of shape (B, K, D) where K is the number of True values in mask per sample
439
+
440
+ Returns:
441
+ Modified input_embeddings tensor
442
+ """
443
+ # Clone input to avoid modifying the original tensor
444
+ new_input_embeddings = input_embeddings.clone()
445
+
446
+ # Create a tensor with the same shape of input_embeddings to hold the noisy action features
447
+ repositioned_noisy_action_features = torch.zeros_like(input_embeddings)
448
+
449
+ # Create batch indices for splicing
450
+ batch_indices = torch.arange(input_embeddings.shape[0], device=input_embeddings.device)
451
+ batch_indices = batch_indices.unsqueeze(1).expand(-1, noisy_action_features.shape[1])
452
+
453
+ # Get indices where mask is True for each sample
454
+ masked_indices = torch.stack([torch.where(mask)[0] for mask in all_actions_mask])
455
+
456
+ # Move the noisy action features into their correct positions
457
+ repositioned_noisy_action_features[batch_indices, masked_indices] = noisy_action_features
458
+
459
+ # Combine original input embeddings and noisy action embeddings using the mask
460
+ new_input_embeddings = torch.where(
461
+ all_actions_mask.unsqueeze(-1), repositioned_noisy_action_features, new_input_embeddings
462
+ )
463
+
464
+ return new_input_embeddings
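+
+ # Minimal sketch of the contract (illustrative values): with B=1, S=4, K=2 and
+ # all_actions_mask = [[False, True, True, False]], positions 1 and 2 of the output take
+ # noisy_action_features[0, 0] and noisy_action_features[0, 1]; positions 0 and 3 keep the
+ # original input embeddings.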
465
+
466
+ def _process_action_masks(self, labels):
467
+ """Helper to get action masks from labels"""
468
+ current_action_mask = get_current_action_mask(labels)
469
+ next_actions_mask = get_next_actions_mask(labels)
470
+ all_actions_mask = current_action_mask | next_actions_mask # (B, seq_len)
471
+ return all_actions_mask
472
+
473
+ def _process_vision_features(self, pixel_values, language_embeddings=None, use_film=False):
474
+ """Process vision features with optional FiLM conditioning"""
475
+ if use_film:
476
+ # FiLM: Infuse language inputs into visual features
477
+ patch_features = self.vision_backbone(pixel_values, language_embeddings) # (bsz, 256 * num_images, D)
478
+ else:
479
+ patch_features = self.vision_backbone(pixel_values) # (bsz, 256 * num_images, D)
480
+
481
+ # Project patch embeddings into language embedding space
482
+ return self.projector(patch_features)
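+
+ # Clarifying note (not a behavior change): the PrismaticVisionBackbone defined above only accepts
+ # `pixel_values`, so calling with `use_film=True` assumes the backbone has been replaced by a
+ # FiLM-conditioned variant whose forward also takes language embeddings.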
483
+
484
+ def _process_proprio_features(self, projected_patch_embeddings, proprio, proprio_projector):
485
+ """Process proprioceptive features and append to vision features"""
486
+ if proprio_projector is not None and proprio is not None:
487
+ # projected_patch_embeddings: (bsz, num_patches * num_images, llm_dim)
488
+ # proprio: (bsz, proprio_dim) or (proprio_dim,)
489
+ proprio = proprio.reshape(projected_patch_embeddings.shape[0], -1) # (bsz, proprio_dim)
490
+ proprio_features = proprio_projector(proprio) # (bsz, llm_dim)
491
+ proprio_features = proprio_features.unsqueeze(dim=1) # (bsz, 1, llm_dim)
492
+ # For simplicity, just append proprio token to the end of projected vision patch tokens
493
+ return torch.cat((projected_patch_embeddings, proprio_features), dim=1)
494
+ return projected_patch_embeddings
495
+
496
+ def _build_multimodal_attention(self, input_embeddings, projected_patch_embeddings, attention_mask):
497
+ """Build multimodal embeddings and attention mask"""
498
+ # Update attention mask
499
+ projected_patch_attention_mask = None
500
+ if attention_mask is not None:
501
+ projected_patch_attention_mask = torch.full(
502
+ (projected_patch_embeddings.shape[0], projected_patch_embeddings.shape[1]),
503
+ fill_value=True,
504
+ dtype=attention_mask.dtype,
505
+ device=attention_mask.device,
506
+ )
507
+
508
+ # Build multimodal embeddings & attention mask; insert embeddings after <BOS> token (1:)
509
+ multimodal_embeddings = torch.cat(
510
+ [input_embeddings[:, :1, :], projected_patch_embeddings, input_embeddings[:, 1:, :]], dim=1
511
+ )
512
+
513
+ multimodal_attention_mask = None
514
+ if attention_mask is not None:
515
+ multimodal_attention_mask = torch.cat(
516
+ [attention_mask[:, :1], projected_patch_attention_mask, attention_mask[:, 1:]], dim=1
517
+ )
518
+
519
+ return multimodal_embeddings, multimodal_attention_mask
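+
+ # Resulting sequence layout (illustrative): [<BOS>] + [patch tokens (+ optional proprio / diffusion
+ # timestep tokens)] + [remaining text tokens], so a text position p >= 1 shifts to position
+ # p + num_patch_tokens in the multimodal sequence while <BOS> stays at position 0.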
520
+
521
+ def _build_multimodal_labels(self, labels, projected_patch_embeddings):
522
+ """Build multimodal labels with IGNORE_INDEX for patch embeddings"""
523
+ if labels is not None:
524
+ projected_patch_labels = torch.full(
525
+ (projected_patch_embeddings.shape[0], projected_patch_embeddings.shape[1]),
526
+ fill_value=IGNORE_INDEX,
527
+ dtype=labels.dtype,
528
+ device=labels.device,
529
+ )
530
+ return torch.cat([labels[:, :1], projected_patch_labels, labels[:, 1:]], dim=1)
531
+ return None
532
+
533
+ # === Core Prismatic VLM `forward()` Logic ===
534
+ def forward(
535
+ self,
536
+ input_ids: Optional[torch.LongTensor] = None,
537
+ attention_mask: Optional[torch.Tensor] = None,
538
+ pixel_values: Optional[torch.FloatTensor] = None,
539
+ labels: Optional[torch.LongTensor] = None,
540
+ inputs_embeds: Optional[torch.FloatTensor] = None,
541
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
542
+ use_cache: Optional[bool] = None,
543
+ output_attentions: Optional[bool] = None,
544
+ output_hidden_states: Optional[bool] = None,
545
+ output_projector_features: Optional[bool] = None,
546
+ return_dict: Optional[bool] = None,
547
+ proprio=None,
548
+ proprio_projector=None,
549
+ noisy_actions=None,
550
+ noisy_action_projector=None,
551
+ diffusion_timestep_embeddings=None,
552
+ use_film: bool = False,
553
+ ) -> Union[Tuple, PrismaticCausalLMOutputWithPast]:
554
+ """Run a forward pass through the VLM, returning a PrismaticCausalLMOutputWithPast instance."""
555
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
556
+ output_hidden_states = (
557
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
558
+ )
559
+ output_projector_features = output_projector_features if output_projector_features is not None else False
560
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
561
+
562
+ # Respect `use_cache` only if not training (even if `gradient_checkpointing` is off)
563
+ use_cache = use_cache and not self.training
564
+
565
+ # Instantiate Placeholder for Projector Features
566
+ projected_patch_embeddings = None
567
+
568
+ # === Handle Generation with Cache (`input_ids.shape[1] == 1`) =>> requires `past_keys_values` ===
569
+ if input_ids.shape[1] == 1:
570
+ assert input_ids.shape[0] == 1, "Generation is only currently supported for batch size of 1!"
571
+ assert past_key_values is not None, "You must provide `past_key_values` during cached generation!"
572
+ assert labels is None, "Unexpected key `labels` provided during cached generation!"
573
+
574
+ language_model_output = self.language_model(
575
+ input_ids=input_ids,
576
+ attention_mask=None,
577
+ position_ids=None,
578
+ past_key_values=past_key_values,
579
+ inputs_embeds=None,
580
+ labels=None,
581
+ use_cache=use_cache,
582
+ output_attentions=output_attentions,
583
+ output_hidden_states=output_hidden_states,
584
+ return_dict=return_dict,
585
+ )
586
+
587
+ # === Handle Unimodal Forward ===
588
+ elif pixel_values is None:
589
+ assert (input_ids is not None) and (inputs_embeds is None), "Missing `input_ids` in language-only forward!"
590
+ assert past_key_values is None, "Unexpected key `past_key_values` provided during language-only forward!"
591
+
592
+ language_model_output = self.language_model(
593
+ input_ids=input_ids,
594
+ attention_mask=attention_mask,
595
+ position_ids=None,
596
+ past_key_values=None,
597
+ inputs_embeds=None,
598
+ labels=labels,
599
+ use_cache=use_cache,
600
+ output_attentions=output_attentions,
601
+ output_hidden_states=output_hidden_states,
602
+ return_dict=return_dict,
603
+ )
604
+
605
+ # === Handle Multimodal Forward ===
606
+ elif (input_ids.shape[0] == pixel_values.shape[0]) or (inputs_embeds.shape[0] == pixel_values.shape[0]):
607
+ assert past_key_values is None, "Unexpected key `past_key_values` provided during multimodal forward!"
608
+
609
+ # Get input embeddings (from language model embeddings)
610
+ input_embeddings = self.get_input_embeddings()(input_ids) # (B, seq_len, D)
611
+
612
+ # Extract action masks
613
+ all_actions_mask = self._process_action_masks(labels)
614
+
615
+ # Extract the language portion of the input embeddings (i.e. remove the action tokens portion)
616
+ language_embeddings = input_embeddings[~all_actions_mask].reshape(
617
+ input_embeddings.shape[0], -1, input_embeddings.shape[2]
618
+ ) # (B, lang_seq_len, llm_dim)
619
+
620
+ # Get visual features
621
+ projected_patch_embeddings = self._process_vision_features(pixel_values, language_embeddings, use_film)
622
+
623
+ # Add proprioceptive state if provided
624
+ projected_patch_embeddings = self._process_proprio_features(
625
+ projected_patch_embeddings, proprio, proprio_projector
626
+ )
627
+
628
+ # [Diffusion] Add diffusion timestep embedding if provided
629
+ if diffusion_timestep_embeddings is not None:
630
+ # For simplicity, just append diffusion timestep embedding to the end of projected vision patch tokens
631
+ projected_patch_embeddings = torch.cat(
632
+ (projected_patch_embeddings, diffusion_timestep_embeddings), dim=1
633
+ )
634
+
635
+ # Process action embeddings
636
+ if noisy_actions is not None:
637
+ # Get mask corresponding to all action tokens
638
+ all_actions_mask = self._process_action_masks(labels)
639
+
640
+ # Reshape noisy actions into individual action tokens
641
+ # noisy_actions: (B, chunk_len, action_dim) -> (B, chunk_len * action_dim, 1)
642
+ B = noisy_actions.shape[0]
643
+ noisy_actions = noisy_actions.reshape(B, -1).unsqueeze(-1)
644
+
645
+ # Project noisy action tokens into language model embedding space
646
+ noisy_action_features = noisy_action_projector(noisy_actions) # (B, chunk_len * action_dim, llm_dim)
647
+
648
+ # Replace embeddings of the action tokens with noisy action embeddings
649
+ input_embeddings = self._replace_input_embeddings(
650
+ input_embeddings, all_actions_mask, noisy_action_features
651
+ )
652
+ else:
653
+ # Replace the embeddings of the action tokens with zeros
654
+ # (Later on, the positional embeddings will be added to them)
655
+ all_actions_mask = all_actions_mask.unsqueeze(-1) # (B, seq_len, 1)
656
+ input_embeddings = input_embeddings * ~all_actions_mask
657
+
658
+ # Build multimodal embeddings & attention mask
659
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
660
+ input_embeddings, projected_patch_embeddings, attention_mask
661
+ )
662
+
663
+ # Build labels for multimodal sequence if needed
664
+ multimodal_labels = self._build_multimodal_labels(labels, projected_patch_embeddings)
665
+
666
+ # Dispatch to language model
667
+ language_model_output = self.language_model(
668
+ input_ids=None,
669
+ attention_mask=multimodal_attention_mask,
670
+ position_ids=None,
671
+ past_key_values=None,
672
+ inputs_embeds=multimodal_embeddings,
673
+ labels=multimodal_labels,
674
+ use_cache=use_cache,
675
+ output_attentions=output_attentions,
676
+ output_hidden_states=output_hidden_states,
677
+ return_dict=return_dict,
678
+ )
679
+
680
+ # === Otherwise =>> Assume Invalid! ===
681
+ elif (input_ids.shape[0] != pixel_values.shape[0]) or (inputs_embeds.shape[0] != pixel_values.shape[0]):
682
+ raise ValueError("Non-homogenous batch of (text, image) input -- forward() does not support mixed batches!")
683
+
684
+ else:
685
+ raise ValueError(
686
+ "Invalid PrismaticForConditionalGeneration `forward()` call with provided arguments:\n"
687
+ f"=> `input_ids` = {input_ids is not None}\n"
688
+ f"=> `attention_mask` = {attention_mask is not None}\n"
689
+ f"=> `pixel_values` = {pixel_values is not None}\n"
690
+ f"=> `inputs_embeds` = {inputs_embeds is not None}\n"
691
+ f"=> `input_embeds` = {inputs_embeds is not None}\n"
692
+ f"=> `past_key_values` = {past_key_values is not None}\n"
693
+ f"=> `use_cache` = {use_cache}"
694
+ )
695
+
696
+ # Unpack `language_model_output` and return PrismaticCausalLMOutputWithPast (or tuple if not `return_dict`)
697
+ if not return_dict:
698
+ if output_projector_features and (projected_patch_embeddings is not None):
699
+ return *language_model_output, projected_patch_embeddings
700
+
701
+ return language_model_output
702
+
703
+ return PrismaticCausalLMOutputWithPast(
704
+ loss=language_model_output.loss,
705
+ logits=language_model_output.logits,
706
+ past_key_values=language_model_output.past_key_values,
707
+ hidden_states=language_model_output.hidden_states,
708
+ attentions=language_model_output.attentions,
709
+ projector_features=projected_patch_embeddings,
710
+ )
711
+
712
+ # === GenerationMixin Methods ===
713
+ def prepare_inputs_for_generation(
714
+ self,
715
+ input_ids: Optional[torch.Tensor] = None,
716
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
717
+ inputs_embeds: Optional[torch.FloatTensor] = None,
718
+ pixel_values: Optional[torch.FloatTensor] = None,
719
+ attention_mask: Optional[torch.Tensor] = None,
720
+ **kwargs: str,
721
+ ) -> Dict[str, torch.Tensor]:
722
+ """Borrowed from `LlamaForCausalLM` and simplified for batch size = 1; mirrors original PrismaticVLM logic."""
723
+ if ((input_ids is not None) and (input_ids.shape[0] > 1)) or (
724
+ (inputs_embeds is not None) and (inputs_embeds.shape[0] > 1)
725
+ ):
726
+ raise ValueError("Generation with batch size > 1 is not currently supported!")
727
+
728
+ # Handle `past_key_values` (cache) =>> assume `input_ids` just has unprocessed tokens
729
+ if past_key_values is not None:
730
+ input_ids = input_ids[:, -1:]
731
+
732
+ # If `inputs_embeds` are passed, we only want to use them in the 1st generation step
733
+ if inputs_embeds is not None and past_key_values is None:
734
+ model_inputs = {"inputs_embeds": inputs_embeds}
735
+ else:
736
+ model_inputs = {"input_ids": input_ids}
737
+
738
+ # Make sure `pixel_values` are preserved in `model_inputs`
739
+ model_inputs.update(
740
+ {
741
+ "attention_mask": attention_mask,
742
+ "pixel_values": pixel_values,
743
+ "past_key_values": past_key_values,
744
+ "use_cache": kwargs.get("use_cache"),
745
+ }
746
+ )
747
+
748
+ return model_inputs
749
+
750
+ # Defer to Language Model (all handle this differently, with different return types)
751
+ def _reorder_cache(self, *args, **kwargs) -> Any:
752
+ return self.language_model._reorder_cache(*args, **kwargs)
753
+
754
+
755
+ class OpenVLAForActionPrediction(PrismaticForConditionalGeneration):
756
+ config_class: PretrainedConfig = OpenVLAConfig
757
+
758
+ def __init__(self, config: OpenVLAConfig) -> None:
759
+ super().__init__(config)
760
+ self.norm_stats = config.norm_stats
761
+
762
+ # Compute action bins
763
+ self.bins = np.linspace(-1, 1, config.n_action_bins)
764
+ self.bin_centers = (self.bins[:-1] + self.bins[1:]) / 2.0
765
+
766
+ # Compute vocab size for de-tokenization (undo the extra tokens added via `pad_to_multiple_of`)
767
+ self.vocab_size = self.config.text_config.vocab_size - self.config.pad_to_multiple_of
768
+
769
+ def _prepare_input_for_action_prediction(self, input_ids, attention_mask):
770
+ """Prepares input for action prediction by adding necessary tokens"""
771
+ # Add (ACTION_DIM * NUM_ACTIONS_CHUNK) placeholder tokens to input_ids to simulate action tokens
772
+ placeholder_action_token_ids = (
773
+ torch.ones((input_ids.shape[0], ACTION_DIM * NUM_ACTIONS_CHUNK)).to(input_ids.device).to(input_ids.dtype)
774
+ )
775
+ input_ids = torch.cat([input_ids, placeholder_action_token_ids], dim=-1)
776
+
777
+ # Add stop token to sequence (needed in non-causal bi-directional self-attention, as it appears at train time)
778
+ stop_token_id = torch.ones((input_ids.shape[0], 1)).to(input_ids.device).to(input_ids.dtype) * STOP_INDEX
779
+ input_ids = torch.cat([input_ids, stop_token_id], dim=-1)
780
+
781
+ # Extend the attention mask to fit the new shape of input
782
+ # Note: Only batch size == 1 supported right now
783
+ mask_extension = (
784
+ torch.ones((attention_mask.shape[0], input_ids.shape[-1] - attention_mask.shape[-1]))
785
+ .to(attention_mask.device)
786
+ .to(attention_mask.dtype)
787
+ )
788
+ attention_mask = torch.cat([attention_mask, mask_extension], dim=-1)
789
+
790
+ return input_ids, attention_mask
791
+
792
+ def _prepare_labels_for_action_prediction(self, labels, input_ids):
793
+ """Creates labels tensor for action prediction if not provided"""
794
+ # Extend labels tensor with fake action labels
795
+ ARBITRARY_ACTION_TOKEN_IDX = ACTION_TOKEN_BEGIN_IDX + 1
796
+ labels_extension = (
797
+ torch.ones((labels.shape[0], input_ids.shape[-1] - labels.shape[-1])).to(labels.device).to(labels.dtype)
798
+ * ARBITRARY_ACTION_TOKEN_IDX
799
+ )
800
+ labels = torch.cat([labels, labels_extension], dim=-1)
801
+
802
+ # Replace last label token with stop token
803
+ labels[:, -1] = STOP_INDEX
804
+
805
+ return labels
806
+
807
+ def _unnormalize_actions(self, normalized_actions, unnorm_key=None):
808
+ """Unnormalize actions using dataset statistics"""
809
+ action_norm_stats = self.get_action_stats(unnorm_key)
810
+
811
+ if ACTION_PROPRIO_NORMALIZATION_TYPE == NormalizationType.BOUNDS:
812
+ mask = action_norm_stats.get("mask", np.ones_like(action_norm_stats["min"], dtype=bool))
813
+ action_high, action_low = np.array(action_norm_stats["max"]), np.array(action_norm_stats["min"])
814
+ elif ACTION_PROPRIO_NORMALIZATION_TYPE == NormalizationType.BOUNDS_Q99:
815
+ mask = action_norm_stats.get("mask", np.ones_like(action_norm_stats["q01"], dtype=bool))
816
+ action_high, action_low = np.array(action_norm_stats["q99"]), np.array(action_norm_stats["q01"])
817
+ else:
818
+ raise ValueError("Unsupported action/proprio normalization type detected!")
819
+
820
+ actions = np.where(
821
+ mask,
822
+ 0.5 * (normalized_actions + 1) * (action_high - action_low + 1e-8) + action_low,
823
+ normalized_actions,
824
+ )
825
+
826
+ return actions
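+
+ # Worked example (assumed statistics, not from any real dataset): with q01 = -0.5 and q99 = 0.5 for
+ # a masked dimension, a normalized action of 0.0 maps to
+ # 0.5 * (0.0 + 1) * (0.5 - (-0.5) + 1e-8) + (-0.5) ≈ 0.0, i.e. the midpoint of the bounds.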
827
+
828
+ def _run_diffusion_prediction(
829
+ self,
830
+ input_embeddings,
831
+ all_actions_mask,
832
+ noise,
833
+ action_head,
834
+ projected_patch_embeddings,
835
+ labels,
836
+ attention_mask,
837
+ NUM_PATCHES,
838
+ NUM_PROMPT_TOKENS,
839
+ noisy_action_projector,
840
+ ):
841
+ """Run diffusion-based action prediction"""
842
+ # Clone embedding for reuse in each timestep
843
+ orig_projected_patch_embeddings = projected_patch_embeddings.clone()
844
+ curr_noisy_actions = noise
845
+
846
+ # Reverse diffusion: Iteratively denoise to generate action prediction
847
+ for t in action_head.noise_scheduler.timesteps:
848
+ # Get diffusion model's noise prediction (conditioned on VLA latent embedding, current noisy action
849
+ # embedding, and diffusion timestep embedding)
850
+ timesteps = torch.Tensor([t]).to(labels.device)
851
+ diffusion_timestep_embeddings = (
852
+ action_head.time_encoder(timesteps).to(curr_noisy_actions.dtype).to(curr_noisy_actions.device)
853
+ ) # (B, llm_dim)
854
+ diffusion_timestep_embeddings = diffusion_timestep_embeddings.unsqueeze(1) # (B, 1, llm_dim)
855
+
856
+ # [Diffusion] Replace the embeddings of the action tokens with noisy actions
857
+ # (Later on, the positional embeddings will be added to them)
858
+
859
+ # For simplicity, append diffusion timestep embedding to the end of projected vision tokens
860
+ projected_patch_embeddings = torch.cat(
861
+ (orig_projected_patch_embeddings, diffusion_timestep_embeddings), dim=1
862
+ )
863
+
864
+ # Reshape and project noisy actions into language embedding space
865
+ B = curr_noisy_actions.shape[0]
866
+ orig_curr_noisy_actions_shape = curr_noisy_actions.shape
867
+ curr_noisy_actions = curr_noisy_actions.reshape(B, -1).unsqueeze(-1)
868
+ noisy_action_features = noisy_action_projector(curr_noisy_actions)
869
+ curr_noisy_actions = curr_noisy_actions.reshape(orig_curr_noisy_actions_shape)
870
+
871
+ # Replace action token embeddings with noisy action embeddings
872
+ input_embeddings = self._replace_input_embeddings(
873
+ input_embeddings.clone(), all_actions_mask, noisy_action_features
874
+ )
875
+
876
+ # Build multimodal embeddings and attention mask
877
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
878
+ input_embeddings, projected_patch_embeddings, attention_mask
879
+ )
880
+
881
+ # Forward pass through language model
882
+ language_model_output = self.language_model(
883
+ input_ids=None,
884
+ attention_mask=multimodal_attention_mask,
885
+ position_ids=None,
886
+ past_key_values=None,
887
+ inputs_embeds=multimodal_embeddings,
888
+ labels=None,
889
+ use_cache=None,
890
+ output_attentions=False,
891
+ output_hidden_states=True,
892
+ return_dict=True,
893
+ )
894
+
895
+ # Extract hidden states for action portion of response
896
+ last_hidden_states = language_model_output.hidden_states[-1] # (B, seq_len, D)
897
+ actions_hidden_states = last_hidden_states[
898
+ :,
899
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
900
+ :,
901
+ ] # (B, act_chunk_len, D)
902
+
903
+ # Predict noise and update noisy actions: x_t -> x_{t-1}
904
+ noise_pred = action_head.predict_noise(actions_hidden_states)
905
+ curr_noisy_actions = action_head.noise_scheduler.step(noise_pred, t, curr_noisy_actions).prev_sample
906
+
907
+ curr_noisy_actions = curr_noisy_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
908
+
909
+ # Return final actions
910
+ return curr_noisy_actions.float().cpu().detach().numpy(), actions_hidden_states
911
+
912
+ def _regression_or_discrete_prediction(
913
+ self,
914
+ input_embeddings,
915
+ all_actions_mask,
916
+ projected_patch_embeddings,
917
+ attention_mask,
918
+ labels,
919
+ NUM_PATCHES,
920
+ NUM_PROMPT_TOKENS,
921
+ action_head=None,
922
+ ):
923
+ """Run L1 regression-based continuous action prediction or discrete action tokens prediction."""
924
+ # Zero out action token embeddings
925
+ all_actions_mask = all_actions_mask.unsqueeze(-1) # (B, seq_len, 1)
926
+ input_embeddings = input_embeddings * ~all_actions_mask
927
+
928
+ # Build multimodal embeddings and attention mask
929
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
930
+ input_embeddings, projected_patch_embeddings, attention_mask
931
+ )
932
+
933
+ # Forward pass through language model
934
+ language_model_output = self.language_model(
935
+ input_ids=None,
936
+ attention_mask=multimodal_attention_mask,
937
+ position_ids=None,
938
+ past_key_values=None,
939
+ inputs_embeds=multimodal_embeddings,
940
+ labels=None,
941
+ use_cache=None,
942
+ output_attentions=False,
943
+ output_hidden_states=True,
944
+ return_dict=True,
945
+ )
946
+
947
+ # Extract hidden states for action tokens
948
+ last_hidden_states = language_model_output.hidden_states[-1] # (B, seq_len, D)
949
+ actions_hidden_states = last_hidden_states[
950
+ :,
951
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
952
+ :,
953
+ ] # (B, act_chunk_len, D)
954
+
955
+ # Handle different prediction methods
956
+ if action_head is not None:
957
+ # L1 regression prediction
958
+ normalized_actions = action_head.predict_action(actions_hidden_states)
959
+ normalized_actions = normalized_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
960
+ normalized_actions = normalized_actions.float().cpu().detach().numpy()
961
+ else:
962
+ # Discrete token-based prediction
963
+ predicted_action_token_ids = (
964
+ language_model_output.logits[
965
+ :,
966
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
967
+ ]
968
+ .argmax(dim=2)
969
+ .cpu()
970
+ .numpy()
971
+ )
972
+ discretized_actions = self.vocab_size - predicted_action_token_ids
973
+
974
+ #print(f"discretized_actions \n {discretized_actions}")
975
+
976
+ discretized_actions = np.clip(discretized_actions - 1, a_min=0, a_max=self.bin_centers.shape[0] - 1)
977
+ normalized_actions = self.bin_centers[discretized_actions]
978
+ normalized_actions = normalized_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
979
+
980
+ return normalized_actions, actions_hidden_states
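+
+ # De-tokenization sketch for the discrete path (illustrative): a predicted token id t maps to bin
+ # index clip(self.vocab_size - t - 1, 0, n_action_bins - 2), and the corresponding entry of
+ # self.bin_centers (a value in [-1, 1]) is used as the normalized action before unnormalization.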
981
+
982
+ def predict_action(
983
+ self,
984
+ input_ids: Optional[torch.LongTensor] = None,
985
+ unnorm_key: Optional[str] = None,
986
+ proprio=None,
987
+ proprio_projector=None,
988
+ action_head=None,
989
+ noisy_action_projector=None,
990
+ use_film: bool = False,
991
+ **kwargs: str,
992
+ ) -> Tuple[np.ndarray, torch.Tensor]:
993
+ """Predict actions from input sequence, with options for different prediction methods.
994
+
995
+ Args:
996
+ input_ids: Input token ids
997
+ unnorm_key: Key for unnormalization statistics
998
+ proprio: Proprioceptive features
999
+ proprio_projector: Projector for proprioceptive features
1000
+ action_head: Optional head for L1 regression or diffusion-based prediction
1001
+ noisy_action_projector: Projector for noisy actions in diffusion-based prediction
1002
+ use_film: Whether to use FiLM conditioning
1003
+ **kwargs: Additional arguments including pixel_values and attention_mask
1004
+
1005
+ Returns:
1006
+ Tuple of (unnormalized_actions, action_hidden_states)
1007
+ """
1008
+ # If the special empty token ('') does not already appear after the colon (':') token in the prompt
1009
+ # (after "OUT:" or "ASSISTANT:"), insert it to match the inputs seen at training time
1010
+ if not torch.all(input_ids[:, -1] == 29871):
1011
+ input_ids = torch.cat(
1012
+ (input_ids, torch.unsqueeze(torch.Tensor([29871]).long(), dim=0).to(input_ids.device)), dim=1
1013
+ )
1014
+
1015
+ pixel_values = kwargs["pixel_values"]
1016
+ attention_mask = kwargs["attention_mask"]
1017
+
1018
+ # Create fake labels tensor (needed for action mask)
1019
+ labels = input_ids.clone()
1020
+ labels[:] = IGNORE_INDEX
1021
+
1022
+ # Get number of tokens in prompt (excluding the start token)
1023
+ NUM_PROMPT_TOKENS = input_ids.shape[-1] - 1 # Exclude the <BOS> token (action and stop tokens are appended below)
1024
+
1025
+ # Prepare inputs by adding necessary tokens
1026
+ input_ids, attention_mask = self._prepare_input_for_action_prediction(input_ids, attention_mask)
1027
+
1028
+ # Update labels tensor for action mask computation later
1029
+ labels = self._prepare_labels_for_action_prediction(labels, input_ids)
1030
+
1031
+ # Get input embeddings and action masks
1032
+ input_embeddings = self.get_input_embeddings()(input_ids)
1033
+ all_actions_mask = self._process_action_masks(labels)
1034
+
1035
+ # Extract language embeddings
1036
+ language_embeddings = input_embeddings[~all_actions_mask].reshape(
1037
+ input_embeddings.shape[0], -1, input_embeddings.shape[2]
1038
+ )
1039
+
1040
+ # Process vision features
1041
+ projected_patch_embeddings = self._process_vision_features(pixel_values, language_embeddings, use_film)
1042
+
1043
+ # Add proprioceptive features if provided
1044
+ if self.proprio_projector is not None:
1045
+ use_proprio = self.proprio_projector is not None and proprio is not None
1046
+ if use_proprio:
1047
+ proprio = torch.Tensor(proprio).to(projected_patch_embeddings.device, dtype=projected_patch_embeddings.dtype)
1048
+ projected_patch_embeddings = self._process_proprio_features(
1049
+ projected_patch_embeddings, proprio, self.proprio_projector
1050
+ )
1051
+ else:
1052
+ use_proprio = proprio_projector is not None and proprio is not None
1053
+ if use_proprio:
1054
+ proprio = torch.Tensor(proprio).to(projected_patch_embeddings.device, dtype=projected_patch_embeddings.dtype)
1055
+ projected_patch_embeddings = self._process_proprio_features(
1056
+ projected_patch_embeddings, proprio, proprio_projector
1057
+ )
1058
+
1059
+ # Use diffusion if provided, otherwise use regression or discrete prediction
1060
+ use_diffusion = noisy_action_projector is not None and hasattr(action_head, "noise_scheduler")
1061
+
1062
+ # Calculate number of patches (including proprio token and/or diffusion timestep embedding if present)
1063
+ NUM_PATCHES = self.vision_backbone.get_num_patches() * self.vision_backbone.get_num_images_in_input()
1064
+ if use_proprio:
1065
+ NUM_PATCHES += 1
1066
+ if use_diffusion:
1067
+ NUM_PATCHES += 1
1068
+
1069
+ if use_diffusion:
1070
+ # Sample random noise with shape equal to output action, used as the starting state for reverse diffusion
1071
+ noise = torch.randn(
1072
+ size=(1, NUM_ACTIONS_CHUNK, ACTION_DIM), device=input_embeddings.device, dtype=input_embeddings.dtype
1073
+ )
1074
+
1075
+ # Run diffusion-based prediction
1076
+ normalized_actions, actions_hidden_states = self._run_diffusion_prediction(
1077
+ input_embeddings,
1078
+ all_actions_mask,
1079
+ noise,
1080
+ action_head,
1081
+ projected_patch_embeddings,
1082
+ labels,
1083
+ attention_mask,
1084
+ NUM_PATCHES,
1085
+ NUM_PROMPT_TOKENS,
1086
+ noisy_action_projector,
1087
+ )
1088
+ else:
1089
+ # Run regression or discrete token-based prediction
1090
+ normalized_actions, actions_hidden_states = self._regression_or_discrete_prediction(
1091
+ input_embeddings,
1092
+ all_actions_mask,
1093
+ projected_patch_embeddings,
1094
+ attention_mask,
1095
+ labels,
1096
+ NUM_PATCHES,
1097
+ NUM_PROMPT_TOKENS,
1098
+ action_head,
1099
+ )
1100
+ #print(f"normalized_actions, {normalized_actions}")
1101
+ # Unnormalize predicted actions
1102
+ actions = self._unnormalize_actions(normalized_actions, unnorm_key)
1103
+
1104
+ return actions, actions_hidden_states
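+
+ # Hedged usage sketch (the `processor` / `vla` names and the unnorm key are illustrative and not
+ # defined in this file):
+ #   inputs = processor(prompt, image).to(device, dtype=torch.bfloat16)
+ #   actions, _ = vla.predict_action(**inputs, unnorm_key="<your_dataset_key>")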
1105
+
1106
+ @staticmethod
1107
+ def _check_unnorm_key(norm_stats: Dict[str, Dict[str, Any]], unnorm_key: Optional[str]) -> str:
1108
+ """Validate and resolve the unnormalization key for action statistics"""
1109
+ if unnorm_key is None:
1110
+ assert len(norm_stats) == 1, (
1111
+ f"Your model was trained on more than one dataset, "
1112
+ f"please pass an `unnorm_key` from the following options to choose the statistics "
1113
+ f"used for un-normalizing actions: {norm_stats.keys()}"
1114
+ )
1115
+ unnorm_key = next(iter(norm_stats.keys()))
1116
+
1117
+ assert unnorm_key in norm_stats, (
1118
+ f"The `unnorm_key` you chose is not in the set of available dataset statistics, "
1119
+ f"please choose from: {norm_stats.keys()}"
1120
+ )
1121
+ return unnorm_key
1122
+
1123
+ def get_action_dim(self, unnorm_key: Optional[str] = None) -> int:
1124
+ """Get the dimensionality of the policy's action space."""
1125
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1126
+ return len(self.norm_stats[unnorm_key]["action"]["min"])
1127
+
1128
+ def get_action_stats(self, unnorm_key: Optional[str] = None) -> Dict[str, Any]:
1129
+ """Get all the logged statistics for the given dataset."""
1130
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1131
+ return self.norm_stats[unnorm_key]["action"]
preprocessor_config.json ADDED
@@ -0,0 +1,114 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "processing_prismatic.PrismaticImageProcessor",
4
+ "AutoProcessor": "processing_prismatic.PrismaticProcessor"
5
+ },
6
+ "image_processor_type": "PrismaticImageProcessor",
7
+ "image_resize_strategy": "resize-naive",
8
+ "input_sizes": [
9
+ [
10
+ 3,
11
+ 224,
12
+ 224
13
+ ],
14
+ [
15
+ 3,
16
+ 224,
17
+ 224
18
+ ]
19
+ ],
20
+ "interpolations": [
21
+ "bicubic",
22
+ "bicubic"
23
+ ],
24
+ "means": [
25
+ [
26
+ 0.485,
27
+ 0.456,
28
+ 0.406
29
+ ],
30
+ [
31
+ 0.5,
32
+ 0.5,
33
+ 0.5
34
+ ]
35
+ ],
36
+ "processor_class": "PrismaticProcessor",
37
+ "stds": [
38
+ [
39
+ 0.229,
40
+ 0.224,
41
+ 0.225
42
+ ],
43
+ [
44
+ 0.5,
45
+ 0.5,
46
+ 0.5
47
+ ]
48
+ ],
49
+ "tvf_crop_params": [
50
+ {
51
+ "output_size": [
52
+ 224,
53
+ 224
54
+ ]
55
+ },
56
+ {
57
+ "output_size": [
58
+ 224,
59
+ 224
60
+ ]
61
+ }
62
+ ],
63
+ "tvf_do_letterbox": false,
64
+ "tvf_letterbox_fill": null,
65
+ "tvf_normalize_params": [
66
+ {
67
+ "inplace": false,
68
+ "mean": [
69
+ 0.484375,
70
+ 0.455078125,
71
+ 0.40625
72
+ ],
73
+ "std": [
74
+ 0.228515625,
75
+ 0.2236328125,
76
+ 0.224609375
77
+ ]
78
+ },
79
+ {
80
+ "inplace": false,
81
+ "mean": [
82
+ 0.5,
83
+ 0.5,
84
+ 0.5
85
+ ],
86
+ "std": [
87
+ 0.5,
88
+ 0.5,
89
+ 0.5
90
+ ]
91
+ }
92
+ ],
93
+ "tvf_resize_params": [
94
+ {
95
+ "antialias": true,
96
+ "interpolation": 3,
97
+ "max_size": null,
98
+ "size": [
99
+ 224,
100
+ 224
101
+ ]
102
+ },
103
+ {
104
+ "antialias": true,
105
+ "interpolation": 3,
106
+ "max_size": null,
107
+ "size": [
108
+ 224,
109
+ 224
110
+ ]
111
+ }
112
+ ],
113
+ "use_fused_vision_backbone": true
114
+ }
processing_prismatic.py ADDED
@@ -0,0 +1,252 @@
1
+ """
2
+ processing_prismatic.py
3
+
4
+ HuggingFace-style preprocessor definitions for Prismatic VLMs, inheriting from `ProcessorMixin`. Default configuration
5
+ specifies `siglip-224px+7b`.
6
+ """
7
+
8
+ from typing import Any, ClassVar, List, Optional, Tuple, Union
9
+
10
+ import timm.data
11
+ import torch
12
+ import torchvision.transforms.functional as TVF
13
+ from PIL import Image
14
+ from torchvision.transforms import CenterCrop, Compose, Normalize, Resize, ToTensor
15
+ from transformers import PreTrainedTokenizerBase
16
+ from transformers.image_processing_utils import BatchFeature, ImageProcessingMixin
17
+ from transformers.processing_utils import ProcessorMixin
18
+ from transformers.tokenization_utils import PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
19
+ from transformers.utils import TensorType
20
+
21
+
22
+ # === Image Processing ===
23
+ def letterbox_pad_transform(image: Image.Image, padding_fill_value: Tuple[int, int, int]) -> Image.Image:
24
+ """Given a PIL.Image, pad to square by adding a symmetric border around the height/width."""
25
+ (w, h), max_wh = image.size, max(image.size)
26
+ horizontal_pad, vertical_pad = int((max_wh - w) / 2), int((max_wh - h) / 2)
27
+ padding = (horizontal_pad, vertical_pad, horizontal_pad, vertical_pad)
28
+
29
+ return TVF.pad(image, padding, fill=padding_fill_value, padding_mode="constant")
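+
+ # Illustrative example (assumed sizes): a 640x480 image gets an 80px border on top and bottom
+ # (fill = the per-channel mean scaled to 0-255 by the caller), yielding a 640x640 square that is
+ # then resized by the normal transform pipeline.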
30
+
31
+
32
+ class PrismaticImageProcessor(ImageProcessingMixin):
33
+ model_input_names: ClassVar[List[str]] = ["pixel_values"]
34
+
35
+ def __init__(
36
+ self,
37
+ use_fused_vision_backbone: bool = False,
38
+ image_resize_strategy: str = "letterbox",
39
+ input_sizes: Optional[List[Tuple[int, int, int]]] = None,
40
+ interpolations: Optional[List[str]] = None,
41
+ means: Optional[List[Tuple[float, float, float]]] = None,
42
+ stds: Optional[List[Tuple[float, float, float]]] = None,
43
+ **kwargs: str,
44
+ ) -> None:
45
+ """
46
+ Initialize a PrismaticImageProcessor as a wrapper around a torchvision transform; this transform will be
47
+ created by TIMM, and edited to follow our custom `image_resize_strategy` logic.
48
+ @param use_fused_vision_backbone: Boolean indicating single or fused (dual) vision backbone
49
+ @param image_resize_strategy: Prismatic image resize strategy in < resize-naive | resize-crop | letterbox >
50
+ @param input_sizes: [TIMM :: `data_cfg`] Input image sizes as a list of (channels, width, height) tuples, one per backbone
52
+ @param interpolations: [TIMM :: `data_cfg`] Interpolation strings (default: "bicubic"), one per backbone
53
+ @param means: [TIMM :: `data_cfg`] Normalization means as a list of float tuples (two entries if `use_fused_vision_backbone`)
54
+ @param stds: [TIMM :: `data_cfg`] Normalization stds as a list of float tuples (two entries if `use_fused_vision_backbone`)
54
+ """
55
+ self.use_fused_vision_backbone = use_fused_vision_backbone
56
+ self.image_resize_strategy = image_resize_strategy
57
+
58
+ # Handle `None` default values
59
+ input_sizes = [(3, 224, 224)] if input_sizes is None else input_sizes
60
+ means = [(0.5, 0.5, 0.5)] if means is None else means
61
+ stds = [(0.5, 0.5, 0.5)] if stds is None else stds
62
+
63
+ # TIMM `data_cfg` Parameters
64
+ self.input_sizes, self.interpolations, self.means, self.stds = input_sizes, interpolations, means, stds
65
+
66
+ # Grab torchvision transforms via TIMM =>> need to parse for specific "functional" transform values!
67
+ self.tvf_resize_params, self.tvf_crop_params, self.tvf_normalize_params = [], [], []
68
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = False, None
69
+
70
+ for idx in range(len(input_sizes)):
71
+ transform = timm.data.create_transform(
72
+ input_size=self.input_sizes[idx],
73
+ interpolation=self.interpolations[idx],
74
+ mean=self.means[idx],
75
+ std=self.stds[idx],
76
+ crop_pct=1.0, # Set to 1.0 to ignore cropping (initial Resize sets `input_size`)
77
+ crop_mode="center", # Default crop mode -- no-op when `crop_pct == 1.0`
78
+ is_training=False, # No image augmentations when loading the transform!
79
+ )
80
+
81
+ # [Validation] Ensure appropriate transform structure, expected sizes
82
+ if not (
83
+ isinstance(transform, Compose)
84
+ and (len(transform.transforms) == 4)
85
+ and isinstance(transform.transforms[0], Resize)
86
+ and isinstance(transform.transforms[1], CenterCrop)
87
+ and isinstance(transform.transforms[2], ToTensor)
88
+ and isinstance(transform.transforms[3], Normalize)
89
+ and (transform.transforms[0].size == self.input_sizes[idx][-1])
90
+ and (transform.transforms[1].size == self.input_sizes[idx][-2:])
91
+ ):
92
+ raise ValueError(f"Unexpected TIMM image transformation structure/sizes: `{transform}`")
93
+
94
+ # HF Image Processors *must* be JSON-serializable; as such, cannot have torchvision. as an attribute.
95
+ # => Instead, we're going to parse the transform and call "torchvision.transforms.functional" (`tvf`)
96
+ resize_t, crop_t, norm_t = transform.transforms[0], transform.transforms[1], transform.transforms[3]
97
+ self.tvf_resize_params.append(
98
+ {
99
+ "size": resize_t.size,
100
+ "interpolation": TVF.pil_modes_mapping[resize_t.interpolation],
101
+ "max_size": None,
102
+ "antialias": True,
103
+ }
104
+ )
105
+ self.tvf_crop_params.append({"output_size": crop_t.size})
106
+ self.tvf_normalize_params.append(
107
+ {
108
+ "mean": norm_t.mean.float().numpy().tolist(),
109
+ "std": norm_t.std.float().numpy().tolist(),
110
+ "inplace": False,
111
+ }
112
+ )
113
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = False, None
114
+
115
+ # Handle Prismatic `image_resize_strategy`
116
+ if self.image_resize_strategy == "resize-naive":
117
+ self.tvf_resize_params[idx]["size"] = (resize_t.size, resize_t.size)
118
+ elif self.image_resize_strategy == "letterbox":
119
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = True, tuple([int(x * 255) for x in self.means[idx]])
120
+ elif self.image_resize_strategy == "resize-crop":
121
+ pass
122
+ else:
123
+ raise ValueError(f"Image resize strategy `{self.image_resize_strategy}` is not supported!")
124
+
125
+ # Dispatch **kwargs to super()
126
+ super().__init__(**kwargs)
127
+
128
+ def apply_transform(self, img: Image.Image) -> torch.Tensor:
129
+ """Apply `functional` variant of TIMM's Transform = Compose([Resize -> CenterCrop -> ToTensor -> Normalize])"""
130
+ if self.tvf_do_letterbox:
131
+ img = letterbox_pad_transform(img, self.tvf_letterbox_fill)
132
+
133
+ # [Contract] Fused Backbones expect "channel-stacked" inputs; we'll unpack on the model side!
134
+ imgs_t = []
135
+ for idx in range(len(self.input_sizes)):
136
+ img_idx = TVF.resize(img, **self.tvf_resize_params[idx])
137
+ img_idx = TVF.center_crop(img_idx, **self.tvf_crop_params[idx])
138
+ img_idx_t = TVF.to_tensor(img_idx)
139
+ img_idx_t = TVF.normalize(img_idx_t, **self.tvf_normalize_params[idx])
140
+ imgs_t.append(img_idx_t)
141
+
142
+ # [Contract] `imgs_t` is a list of Tensors of shape [3, input_size, input_size]; stack along dim = 0
143
+ img_t = torch.vstack(imgs_t)
144
+
145
+ return img_t
146
+
147
+ def preprocess(
148
+ self,
149
+ images: Union[Image.Image, List[Image.Image]],
150
+ return_tensors: Optional[Union[str, TensorType]] = None,
151
+ **_: str,
152
+ ) -> BatchFeature:
153
+ """
154
+ Preprocess an image (or batch of images); note that unlike the `transformers :: BaseImageProcessor` we
155
+ explicitly only handle PIL.Image.Image instances for simplicity.
156
+ @param images: A (batch of) PIL.Image.Image instance(s) to preprocess.
157
+ @param return_tensors: BatchFeature default Tensor format (e.g., "pt" for torch); if None, returns np.ndarray
158
+ @return: Instance of `transformers :: BatchFeature` with a single key "pixel_values"
159
+ """
160
+ if not isinstance(images, list):
161
+ images = [images]
162
+
163
+ # Apply `self.img_transform` to each image (will return list of torch.Tensors); stack into "batched" Tensor
164
+ pixel_values = torch.stack([self.apply_transform(img.convert("RGB")) for img in images])
165
+
166
+ # Return BatchFeature =>> note that for compatibility, constructor expects Dict[str, np.ndarray], so we convert
167
+ return BatchFeature(data={"pixel_values": pixel_values.float().numpy()}, tensor_type=return_tensors)
168
+
169
+ def __call__(self, images: Union[Image.Image, List[Image.Image]], **kwargs) -> BatchFeature:
170
+ return self.preprocess(images, **kwargs)
171
+
172
+
173
+ # === PrismaticProcessor =>> Wraps both ImageProcessor and Tokenizer ===
174
+ # =>> https://github.com/huggingface/transformers/blob/main/src/transformers/models/llava/processing_llava.py
175
+ class PrismaticProcessor(ProcessorMixin):
176
+ attributes: ClassVar[List[str]] = ["image_processor", "tokenizer"]
177
+ image_processor_class: str = "AutoImageProcessor"
178
+ tokenizer_class: str = "AutoTokenizer"
179
+
180
+ def __init__(
181
+ self,
182
+ image_processor: Optional[ImageProcessingMixin] = None,
183
+ tokenizer: Optional[PreTrainedTokenizerBase] = None,
184
+ ) -> None:
185
+ super().__init__(image_processor, tokenizer)
186
+
187
+ def __call__(
188
+ self,
189
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
190
+ images: Union[Image.Image, List[Image.Image]],
191
+ padding: Union[bool, str, PaddingStrategy] = False,
192
+ truncation: Optional[Union[bool, str, TruncationStrategy]] = None,
193
+ max_length: Optional[int] = None,
194
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
195
+ ) -> BatchFeature:
196
+ """
197
+ Preprocess a given (batch) of text/images for a Prismatic VLM; forwards text to the underlying LLM's tokenizer,
198
+ forwards images to PrismaticImageProcessor.
199
+ @param text: The (batch) of text to encode; must be a string or list of strings.
200
+ @param images: A (batch of) PIL.Image.Image instance(s) to preprocess.
201
+ @param padding: Sequence padding strategy (if multiple specified) in < True = "longest" | "max_length" | False >
202
+ @param truncation: Truncation strategy for the output sequences; requires `max_length` to be specified
203
+ @param max_length: Maximum length (in tokens) to truncate
204
+ @param return_tensors: Type of return tensors (usually "pt" or TensorType.PYTORCH)
205
+ @return: BatchFeature with keys for `input_ids`, `attention_mask` and `pixel_values`.
206
+ """
207
+ pixel_values = self.image_processor(images, return_tensors=return_tensors)["pixel_values"]
208
+ text_inputs = self.tokenizer(
209
+ text, return_tensors=return_tensors, padding=padding, truncation=truncation, max_length=max_length
210
+ )
211
+
212
+ # [Validate] Need same number of images and text inputs!
213
+ if pixel_values.shape[0] != text_inputs.input_ids.shape[0]:
214
+ raise ValueError("Batch is malformed; expected same number of images and text inputs!")
215
+
216
+ return BatchFeature(data={**text_inputs, "pixel_values": pixel_values})
217
+
218
+ # === Tokenizer Dispatch Utilities =>> check `PreTrainedTokenizerBase` for documentation ===
219
+ def batch_decode(
220
+ self,
221
+ sequences: Union[List[int], List[List[int]], torch.Tensor, Any], # `Any` = np.ndarray | tf.Tensor
222
+ skip_special_tokens: bool = False,
223
+ clean_up_tokenization_spaces: Optional[bool] = None,
224
+ **kwargs: str,
225
+ ) -> List[str]:
226
+ return self.tokenizer.batch_decode(
227
+ sequences=sequences,
228
+ skip_special_tokens=skip_special_tokens,
229
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
230
+ **kwargs,
231
+ )
232
+
233
+ def decode(
234
+ self,
235
+ token_ids: Union[int, List[int], torch.Tensor, Any], # `Any` = np.ndarray | tf.Tensor
236
+ skip_special_tokens: bool = False,
237
+ clean_up_tokenization_spaces: Optional[bool] = None,
238
+ **kwargs: str,
239
+ ) -> str:
240
+ return self.tokenizer.decode(
241
+ token_ids=token_ids,
242
+ skip_special_tokens=skip_special_tokens,
243
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
244
+ **kwargs,
245
+ )
246
+
247
+ @property
248
+ def model_input_names(self) -> List[str]:
249
+ tokenizer_input_names = self.tokenizer.model_input_names
250
+ image_processor_input_names = self.image_processor.model_input_names
251
+
252
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
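
For orientation, the two classes above can be exercised on their own before any model weights are loaded. The sketch below is editorial, not part of the uploaded file: the repository path, prompt string, and dummy image are placeholders, and the default arguments mirror the `siglip-224px+7b` configuration named in the module docstring.

from PIL import Image
from transformers import AutoTokenizer

from processing_prismatic import PrismaticImageProcessor, PrismaticProcessor

# Stand-alone image preprocessing: letterbox to square, resize to 224px, normalize.
image_processor = PrismaticImageProcessor(image_resize_strategy="letterbox", interpolations=["bicubic"])
pixels = image_processor(Image.new("RGB", (640, 480)), return_tensors="pt")["pixel_values"]
print(pixels.shape)  # torch.Size([1, 3, 224, 224])

# Full processor: pair the image processor with the tokenizer shipped in this repo ("<repo_or_local_path>" is a placeholder).
tokenizer = AutoTokenizer.from_pretrained("<repo_or_local_path>")
processor = PrismaticProcessor(image_processor=image_processor, tokenizer=tokenizer)
batch = processor("In: What action should the robot take?\nOut:", Image.new("RGB", (640, 480)))
print(sorted(batch.keys()))  # ['attention_mask', 'input_ids', 'pixel_values']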
processor_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "auto_map": {
+     "AutoProcessor": "processing_prismatic.PrismaticProcessor"
+   },
+   "processor_class": "PrismaticProcessor"
+ }
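
The `auto_map` entry above is what lets `transformers` resolve the custom processor class out of `processing_prismatic.py` when the repository is loaded with remote code enabled. A minimal sketch (the path is a placeholder):

from transformers import AutoProcessor

# trust_remote_code=True is required so AutoProcessor follows auto_map to PrismaticProcessor.
processor = AutoProcessor.from_pretrained("<repo_or_local_path>", trust_remote_code=True)
print(type(processor).__name__)  # PrismaticProcessor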
proprio_projector--0_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:610f92315b5074b9b4ec686f2077ca5a671efb94ecb032aa631a9fa0cbda719c
+ size 33687824
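
Only the Git LFS pointer (hash and size, roughly 34 MB) for the proprio projector checkpoint appears in this diff; the tensor contents are not shown. Assuming the `.pt` file is an ordinary `torch.save` artifact (nothing in this commit guarantees that), it could be inspected along these lines:

import torch

# Hypothetical inspection; the key layout of the projector checkpoint is not documented in this commit.
state = torch.load("proprio_projector--0_checkpoint.pt", map_location="cpu")
if isinstance(state, dict):
    for key, value in state.items():
        print(key, getattr(value, "shape", type(value)))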
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<PAD>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
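
With this map, the tokenizer shipped in this commit exposes the Llama `<s>`/`</s>`/`<unk>` tokens plus the added `<PAD>` token (id 32000 per `tokenizer_config.json` below). A quick check, with the repository path as a placeholder:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<repo_or_local_path>")
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token, tokenizer.pad_token)  # <s> </s> <unk> <PAD>
print(tokenizer.pad_token_id)  # 32000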
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,53 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "32000": {
+       "content": "<PAD>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "auto_map": {
+     "AutoProcessor": "processing_prismatic.PrismaticProcessor"
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": false,
+   "model_max_length": 2048,
+   "pad_token": "<PAD>",
+   "padding_side": "right",
+   "processor_class": "PrismaticProcessor",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
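
This config pins the tokenizer behavior the processor above relies on: `<s>` is prepended (`add_bos_token`), no `</s>` is appended, batches are right-padded with `<PAD>`, and the context window is capped at 2048 tokens. A short sketch of the resulting encodings (path is a placeholder; the example strings are arbitrary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<repo_or_local_path>")
batch = tokenizer(["stop", "pick up the can and place it in the bin"], padding=True, return_tensors="pt")
print(batch["input_ids"][0])  # starts with 1 (<s>) and is right-padded with 32000 (<PAD>)
print(tokenizer.model_max_length, tokenizer.padding_side)  # 2048 right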