Yuxuan Chen committed
Commit 573aa43 · verified · 1 Parent(s): dba4bd9

Upload folder using huggingface_hub
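The commit message refers to the `upload_folder` utility from the `huggingface_hub` library. A minimal sketch of how such a commit is typically created follows; the repo id and local folder path are placeholders, not values recorded in this commit:

```python
# Minimal sketch, assuming a local export folder containing added_tokens.json,
# config.json, and the model weights. The repo id and folder path below are
# hypothetical placeholders, not taken from this commit.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="user/model-repo",            # hypothetical target repository
    folder_path="./pruned_model_export",  # hypothetical local folder
    commit_message="Upload folder using huggingface_hub",
)
```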

added_tokens.json ADDED
@@ -0,0 +1,3 @@
1 + {
2 + "<PAD>": 32000
3 + }
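added_tokens.json extends the base Llama-2 tokenizer (32,000 tokens, ids 0–31999) with a single `<PAD>` token at id 32000. A minimal check after loading the tokenizer is sketched below; the repo id is a placeholder for this model's repository:

```python
# Minimal sketch; "user/model-repo" stands in for this model's actual repository id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("user/model-repo")
# The <PAD> entry from added_tokens.json is appended after the 32,000 base Llama-2 tokens.
assert tokenizer.convert_tokens_to_ids("<PAD>") == 32000
print(len(tokenizer))  # 32001 once the added token is registered
```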
config.json ADDED
@@ -0,0 +1,3765 @@
1
+ {
2
+ "_name_or_path": "pruned_models/oft_llmpruner_object_0.9",
3
+ "arch_specifier": "no-align+fused-gelu-mlp",
4
+ "architectures": [
5
+ "OpenVLAForActionPrediction"
6
+ ],
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_prismatic.OpenVLAConfig",
9
+ "AutoModelForVision2Seq": "modeling_prismatic.OpenVLAForActionPrediction"
10
+ },
11
+ "hf_llm_id": "meta-llama/Llama-2-7b-hf",
12
+ "image_resize_strategy": "resize-naive",
13
+ "image_sizes": [
14
+ 224,
15
+ 224
16
+ ],
17
+ "llm_backbone_id": "llama2-7b-pure",
18
+ "llm_max_length": 2048,
19
+ "model_type": "openvla",
20
+ "n_action_bins": 256,
21
+ "norm_stats": {
22
+ "austin_buds_dataset_converted_externally_to_rlds": {
23
+ "action": {
24
+ "mask": [
25
+ true,
26
+ true,
27
+ true,
28
+ true,
29
+ true,
30
+ true,
31
+ false
32
+ ],
33
+ "max": [
34
+ 1.0,
35
+ 1.0,
36
+ 1.0,
37
+ 0.0,
38
+ 0.0,
39
+ 0.0,
40
+ 1.0
41
+ ],
42
+ "mean": [
43
+ -0.07678354531526566,
44
+ 0.0036849044263362885,
45
+ 0.05644911900162697,
46
+ 0.0,
47
+ 0.0,
48
+ 0.0,
49
+ 0.3510494828224182
50
+ ],
51
+ "min": [
52
+ -1.0,
53
+ -1.0,
54
+ -1.0,
55
+ 0.0,
56
+ 0.0,
57
+ 0.0,
58
+ 0.0
59
+ ],
60
+ "q01": [
61
+ -1.0,
62
+ -0.9599999785423279,
63
+ -0.8714285492897034,
64
+ 0.0,
65
+ 0.0,
66
+ 0.0,
67
+ 0.0
68
+ ],
69
+ "q99": [
70
+ 1.0,
71
+ 0.8600000143051147,
72
+ 1.0,
73
+ 0.0,
74
+ 0.0,
75
+ 0.0,
76
+ 1.0
77
+ ],
78
+ "std": [
79
+ 0.6367740631103516,
80
+ 0.37889179587364197,
81
+ 0.47796326875686646,
82
+ 0.0,
83
+ 0.0,
84
+ 0.0,
85
+ 0.47721168398857117
86
+ ]
87
+ },
88
+ "num_trajectories": 50,
89
+ "num_transitions": 34112,
90
+ "proprio": {
91
+ "max": [
92
+ 0.0,
93
+ 0.0,
94
+ 0.0,
95
+ 0.0,
96
+ 0.0,
97
+ 0.0,
98
+ 0.0
99
+ ],
100
+ "mean": [
101
+ 0.0,
102
+ 0.0,
103
+ 0.0,
104
+ 0.0,
105
+ 0.0,
106
+ 0.0,
107
+ 0.0
108
+ ],
109
+ "min": [
110
+ 0.0,
111
+ 0.0,
112
+ 0.0,
113
+ 0.0,
114
+ 0.0,
115
+ 0.0,
116
+ 0.0
117
+ ],
118
+ "q01": [
119
+ 0.0,
120
+ 0.0,
121
+ 0.0,
122
+ 0.0,
123
+ 0.0,
124
+ 0.0,
125
+ 0.0
126
+ ],
127
+ "q99": [
128
+ 0.0,
129
+ 0.0,
130
+ 0.0,
131
+ 0.0,
132
+ 0.0,
133
+ 0.0,
134
+ 0.0
135
+ ],
136
+ "std": [
137
+ 0.0,
138
+ 0.0,
139
+ 0.0,
140
+ 0.0,
141
+ 0.0,
142
+ 0.0,
143
+ 0.0
144
+ ]
145
+ }
146
+ },
147
+ "austin_sailor_dataset_converted_externally_to_rlds": {
148
+ "action": {
149
+ "mask": [
150
+ true,
151
+ true,
152
+ true,
153
+ true,
154
+ true,
155
+ true,
156
+ false
157
+ ],
158
+ "max": [
159
+ 1.0,
160
+ 1.0,
161
+ 1.0,
162
+ 0.0,
163
+ 0.0,
164
+ 0.375,
165
+ 1.0
166
+ ],
167
+ "mean": [
168
+ 0.011825348250567913,
169
+ 0.006461074110120535,
170
+ 0.06023626774549484,
171
+ 0.0,
172
+ 0.0,
173
+ 0.0016465914668515325,
174
+ 0.5260950326919556
175
+ ],
176
+ "min": [
177
+ -1.0,
178
+ -1.0,
179
+ -1.0,
180
+ 0.0,
181
+ 0.0,
182
+ -0.375,
183
+ 0.0
184
+ ],
185
+ "q01": [
186
+ -1.0,
187
+ -0.9828571677207947,
188
+ -0.6000000238418579,
189
+ 0.0,
190
+ 0.0,
191
+ -0.17249999940395355,
192
+ 0.0
193
+ ],
194
+ "q99": [
195
+ 1.0,
196
+ 0.9457142949104309,
197
+ 1.0,
198
+ 0.0,
199
+ 0.0,
200
+ 0.17892856895923615,
201
+ 1.0
202
+ ],
203
+ "std": [
204
+ 0.46348899602890015,
205
+ 0.41240179538726807,
206
+ 0.411862850189209,
207
+ 0.0,
208
+ 0.0,
209
+ 0.0578610822558403,
210
+ 0.49894046783447266
211
+ ]
212
+ },
213
+ "num_trajectories": 240,
214
+ "num_transitions": 353094,
215
+ "proprio": {
216
+ "max": [
217
+ 0.0,
218
+ 0.0,
219
+ 0.0,
220
+ 0.0,
221
+ 0.0,
222
+ 0.0,
223
+ 0.0
224
+ ],
225
+ "mean": [
226
+ 0.0,
227
+ 0.0,
228
+ 0.0,
229
+ 0.0,
230
+ 0.0,
231
+ 0.0,
232
+ 0.0
233
+ ],
234
+ "min": [
235
+ 0.0,
236
+ 0.0,
237
+ 0.0,
238
+ 0.0,
239
+ 0.0,
240
+ 0.0,
241
+ 0.0
242
+ ],
243
+ "q01": [
244
+ 0.0,
245
+ 0.0,
246
+ 0.0,
247
+ 0.0,
248
+ 0.0,
249
+ 0.0,
250
+ 0.0
251
+ ],
252
+ "q99": [
253
+ 0.0,
254
+ 0.0,
255
+ 0.0,
256
+ 0.0,
257
+ 0.0,
258
+ 0.0,
259
+ 0.0
260
+ ],
261
+ "std": [
262
+ 0.0,
263
+ 0.0,
264
+ 0.0,
265
+ 0.0,
266
+ 0.0,
267
+ 0.0,
268
+ 0.0
269
+ ]
270
+ }
271
+ },
272
+ "austin_sirius_dataset_converted_externally_to_rlds": {
273
+ "action": {
274
+ "mask": [
275
+ true,
276
+ true,
277
+ true,
278
+ true,
279
+ true,
280
+ true,
281
+ false
282
+ ],
283
+ "max": [
284
+ 1.0002285242080688,
285
+ 0.960608720779419,
286
+ 1.105179786682129,
287
+ 0.0,
288
+ 0.0,
289
+ 0.341785728931427,
290
+ 1.0
291
+ ],
292
+ "mean": [
293
+ 0.07747682929039001,
294
+ 0.03195561468601227,
295
+ 0.04244732856750488,
296
+ 0.0,
297
+ 0.0,
298
+ -0.01603456400334835,
299
+ 0.43260177969932556
300
+ ],
301
+ "min": [
302
+ -1.0183025598526,
303
+ -0.9800000190734863,
304
+ -0.9774575233459473,
305
+ 0.0,
306
+ 0.0,
307
+ -0.34607142210006714,
308
+ 0.0
309
+ ],
310
+ "q01": [
311
+ -0.780905865430832,
312
+ -0.5667179036140442,
313
+ -0.5254343223571777,
314
+ 0.0,
315
+ 0.0,
316
+ -0.28495091378688814,
317
+ 0.0
318
+ ],
319
+ "q99": [
320
+ 0.9569637751579284,
321
+ 0.6971374487876891,
322
+ 0.8124888157844541,
323
+ 0.0,
324
+ 0.0,
325
+ 0.1971428543329239,
326
+ 1.0
327
+ ],
328
+ "std": [
329
+ 0.3906329572200775,
330
+ 0.2998155355453491,
331
+ 0.2782271206378937,
332
+ 0.0,
333
+ 0.0,
334
+ 0.08120622485876083,
335
+ 0.49528297781944275
336
+ ]
337
+ },
338
+ "num_trajectories": 559,
339
+ "num_transitions": 279939,
340
+ "proprio": {
341
+ "max": [
342
+ 0.0,
343
+ 0.0,
344
+ 0.0,
345
+ 0.0,
346
+ 0.0,
347
+ 0.0,
348
+ 0.0
349
+ ],
350
+ "mean": [
351
+ 0.0,
352
+ 0.0,
353
+ 0.0,
354
+ 0.0,
355
+ 0.0,
356
+ 0.0,
357
+ 0.0
358
+ ],
359
+ "min": [
360
+ 0.0,
361
+ 0.0,
362
+ 0.0,
363
+ 0.0,
364
+ 0.0,
365
+ 0.0,
366
+ 0.0
367
+ ],
368
+ "q01": [
369
+ 0.0,
370
+ 0.0,
371
+ 0.0,
372
+ 0.0,
373
+ 0.0,
374
+ 0.0,
375
+ 0.0
376
+ ],
377
+ "q99": [
378
+ 0.0,
379
+ 0.0,
380
+ 0.0,
381
+ 0.0,
382
+ 0.0,
383
+ 0.0,
384
+ 0.0
385
+ ],
386
+ "std": [
387
+ 0.0,
388
+ 0.0,
389
+ 0.0,
390
+ 0.0,
391
+ 0.0,
392
+ 0.0,
393
+ 0.0
394
+ ]
395
+ }
396
+ },
397
+ "bc_z": {
398
+ "action": {
399
+ "mask": [
400
+ true,
401
+ true,
402
+ true,
403
+ true,
404
+ true,
405
+ true,
406
+ false
407
+ ],
408
+ "max": [
409
+ 0.2165454924106598,
410
+ 0.1251407265663147,
411
+ 0.10772687941789627,
412
+ 0.33544227480888367,
413
+ 0.28117990493774414,
414
+ 0.40614867210388184,
415
+ 1.0
416
+ ],
417
+ "mean": [
418
+ -0.009958467446267605,
419
+ 0.0008958321413956583,
420
+ 0.004995597992092371,
421
+ 0.00029755113064311445,
422
+ -0.008735382929444313,
423
+ -0.030693737789988518,
424
+ 0.8344562649726868
425
+ ],
426
+ "min": [
427
+ -0.1677047461271286,
428
+ -0.14630407094955444,
429
+ -0.10066790133714676,
430
+ -0.29421567916870117,
431
+ -0.32101404666900635,
432
+ -0.4635624885559082,
433
+ 0.0
434
+ ],
435
+ "q01": [
436
+ -0.09220654994249344,
437
+ -0.06456145539879798,
438
+ -0.049121275544166565,
439
+ -0.11594625547528267,
440
+ -0.14152548640966414,
441
+ -0.2251061636209488,
442
+ 0.0
443
+ ],
444
+ "q99": [
445
+ 0.07628866866230968,
446
+ 0.058019736707210584,
447
+ 0.052540797740221024,
448
+ 0.11740604028105736,
449
+ 0.11703975558280955,
450
+ 0.16729306846857078,
451
+ 1.0
452
+ ],
453
+ "std": [
454
+ 0.03053455986082554,
455
+ 0.0231423731893301,
456
+ 0.020641816779971123,
457
+ 0.04155943542718887,
458
+ 0.046427831053733826,
459
+ 0.0769818127155304,
460
+ 0.3610210120677948
461
+ ]
462
+ },
463
+ "num_trajectories": 43264,
464
+ "num_transitions": 6015535,
465
+ "proprio": {
466
+ "max": [
467
+ 0.0,
468
+ 0.0,
469
+ 0.0,
470
+ 0.0,
471
+ 0.0,
472
+ 0.0,
473
+ 0.0
474
+ ],
475
+ "mean": [
476
+ 0.0,
477
+ 0.0,
478
+ 0.0,
479
+ 0.0,
480
+ 0.0,
481
+ 0.0,
482
+ 0.0
483
+ ],
484
+ "min": [
485
+ 0.0,
486
+ 0.0,
487
+ 0.0,
488
+ 0.0,
489
+ 0.0,
490
+ 0.0,
491
+ 0.0
492
+ ],
493
+ "q01": [
494
+ 0.0,
495
+ 0.0,
496
+ 0.0,
497
+ 0.0,
498
+ 0.0,
499
+ 0.0,
500
+ 0.0
501
+ ],
502
+ "q99": [
503
+ 0.0,
504
+ 0.0,
505
+ 0.0,
506
+ 0.0,
507
+ 0.0,
508
+ 0.0,
509
+ 0.0
510
+ ],
511
+ "std": [
512
+ 0.0,
513
+ 0.0,
514
+ 0.0,
515
+ 0.0,
516
+ 0.0,
517
+ 0.0,
518
+ 0.0
519
+ ]
520
+ }
521
+ },
522
+ "berkeley_autolab_ur5": {
523
+ "action": {
524
+ "mask": [
525
+ true,
526
+ true,
527
+ true,
528
+ true,
529
+ true,
530
+ true,
531
+ false
532
+ ],
533
+ "max": [
534
+ 0.019999999552965164,
535
+ 0.019999999552965164,
536
+ 0.019999999552965164,
537
+ 0.06666667014360428,
538
+ 0.06666667014360428,
539
+ 0.06666667014360428,
540
+ 1.0
541
+ ],
542
+ "mean": [
543
+ 0.0005683620693162084,
544
+ 0.001217700308188796,
545
+ -0.0005296372692100704,
546
+ 0.00021029810886830091,
547
+ 6.0695128922816366e-05,
548
+ 0.001204986940138042,
549
+ 0.6298308372497559
550
+ ],
551
+ "min": [
552
+ -0.019999999552965164,
553
+ -0.019999999552965164,
554
+ -0.019999999552965164,
555
+ -0.06666667014360428,
556
+ -0.06666667014360428,
557
+ -0.06666667014360428,
558
+ 0.0
559
+ ],
560
+ "q01": [
561
+ -0.019999999552965164,
562
+ -0.019999999552965164,
563
+ -0.019999999552965164,
564
+ -0.02628571353852749,
565
+ -0.06666667014360428,
566
+ -0.03847619146108627,
567
+ 0.0
568
+ ],
569
+ "q99": [
570
+ 0.019999999552965164,
571
+ 0.019999999552965164,
572
+ 0.019999999552965164,
573
+ 0.031809523701667786,
574
+ 0.06666667014360428,
575
+ 0.036571428179740906,
576
+ 1.0
577
+ ],
578
+ "std": [
579
+ 0.0115329809486866,
580
+ 0.007990492507815361,
581
+ 0.009577835910022259,
582
+ 0.009432995691895485,
583
+ 0.016427582129836082,
584
+ 0.011053967289626598,
585
+ 0.48267969489097595
586
+ ]
587
+ },
588
+ "num_trajectories": 1000,
589
+ "num_transitions": 97939,
590
+ "proprio": {
591
+ "max": [
592
+ 0.0,
593
+ 0.0,
594
+ 0.0,
595
+ 0.0,
596
+ 0.0,
597
+ 0.0,
598
+ 0.0
599
+ ],
600
+ "mean": [
601
+ 0.0,
602
+ 0.0,
603
+ 0.0,
604
+ 0.0,
605
+ 0.0,
606
+ 0.0,
607
+ 0.0
608
+ ],
609
+ "min": [
610
+ 0.0,
611
+ 0.0,
612
+ 0.0,
613
+ 0.0,
614
+ 0.0,
615
+ 0.0,
616
+ 0.0
617
+ ],
618
+ "q01": [
619
+ 0.0,
620
+ 0.0,
621
+ 0.0,
622
+ 0.0,
623
+ 0.0,
624
+ 0.0,
625
+ 0.0
626
+ ],
627
+ "q99": [
628
+ 0.0,
629
+ 0.0,
630
+ 0.0,
631
+ 0.0,
632
+ 0.0,
633
+ 0.0,
634
+ 0.0
635
+ ],
636
+ "std": [
637
+ 0.0,
638
+ 0.0,
639
+ 0.0,
640
+ 0.0,
641
+ 0.0,
642
+ 0.0,
643
+ 0.0
644
+ ]
645
+ }
646
+ },
647
+ "berkeley_cable_routing": {
648
+ "action": {
649
+ "mask": [
650
+ true,
651
+ true,
652
+ true,
653
+ true,
654
+ true,
655
+ true,
656
+ false
657
+ ],
658
+ "max": [
659
+ 0.9633283019065857,
660
+ 1.0,
661
+ 1.0,
662
+ 0.0,
663
+ 0.0,
664
+ 1.0,
665
+ 0.0
666
+ ],
667
+ "mean": [
668
+ -0.07139874249696732,
669
+ 0.023609008640050888,
670
+ 0.10241943597793579,
671
+ 0.0,
672
+ 0.0,
673
+ 0.049671024084091187,
674
+ 0.0
675
+ ],
676
+ "min": [
677
+ -0.9809081554412842,
678
+ -0.9554349184036255,
679
+ -0.9994775056838989,
680
+ 0.0,
681
+ 0.0,
682
+ -1.0,
683
+ 0.0
684
+ ],
685
+ "q01": [
686
+ -0.5534318816661835,
687
+ -0.4797285574674606,
688
+ -0.5314934802055359,
689
+ 0.0,
690
+ 0.0,
691
+ -0.8855219376087189,
692
+ 0.0
693
+ ],
694
+ "q99": [
695
+ 0.42652835428714786,
696
+ 0.5000944086909298,
697
+ 0.639823433756829,
698
+ 0.0,
699
+ 0.0,
700
+ 0.984243879914284,
701
+ 0.0
702
+ ],
703
+ "std": [
704
+ 0.1815500408411026,
705
+ 0.1810990273952484,
706
+ 0.21220779418945312,
707
+ 0.0,
708
+ 0.0,
709
+ 0.3475511968135834,
710
+ 0.0
711
+ ]
712
+ },
713
+ "num_trajectories": 1647,
714
+ "num_transitions": 42328,
715
+ "proprio": {
716
+ "max": [
717
+ 0.0,
718
+ 0.0,
719
+ 0.0,
720
+ 0.0,
721
+ 0.0,
722
+ 0.0,
723
+ 0.0
724
+ ],
725
+ "mean": [
726
+ 0.0,
727
+ 0.0,
728
+ 0.0,
729
+ 0.0,
730
+ 0.0,
731
+ 0.0,
732
+ 0.0
733
+ ],
734
+ "min": [
735
+ 0.0,
736
+ 0.0,
737
+ 0.0,
738
+ 0.0,
739
+ 0.0,
740
+ 0.0,
741
+ 0.0
742
+ ],
743
+ "q01": [
744
+ 0.0,
745
+ 0.0,
746
+ 0.0,
747
+ 0.0,
748
+ 0.0,
749
+ 0.0,
750
+ 0.0
751
+ ],
752
+ "q99": [
753
+ 0.0,
754
+ 0.0,
755
+ 0.0,
756
+ 0.0,
757
+ 0.0,
758
+ 0.0,
759
+ 0.0
760
+ ],
761
+ "std": [
762
+ 0.0,
763
+ 0.0,
764
+ 0.0,
765
+ 0.0,
766
+ 0.0,
767
+ 0.0,
768
+ 0.0
769
+ ]
770
+ }
771
+ },
772
+ "berkeley_fanuc_manipulation": {
773
+ "action": {
774
+ "mask": [
775
+ true,
776
+ true,
777
+ true,
778
+ true,
779
+ true,
780
+ true,
781
+ false
782
+ ],
783
+ "max": [
784
+ 0.009999999776482582,
785
+ 0.009999999776482582,
786
+ 0.009999999776482582,
787
+ 0.03490658476948738,
788
+ 0.03490658476948738,
789
+ 0.03490658476948738,
790
+ 1.0
791
+ ],
792
+ "mean": [
793
+ 0.0007744057802483439,
794
+ -0.00031240080716088414,
795
+ -0.0015001941937953234,
796
+ -0.0007515158504247665,
797
+ -0.00015832878125365824,
798
+ 0.00014327642566058785,
799
+ 0.699295699596405
800
+ ],
801
+ "min": [
802
+ -0.009999999776482582,
803
+ -0.009999999776482582,
804
+ -0.009999999776482582,
805
+ -0.03490658476948738,
806
+ -0.03490658476948738,
807
+ -0.03490658476948738,
808
+ 0.0
809
+ ],
810
+ "q01": [
811
+ -0.009999999776482582,
812
+ -0.009999999776482582,
813
+ -0.009999999776482582,
814
+ -0.03490658476948738,
815
+ 0.0,
816
+ -0.03490658476948738,
817
+ 0.0
818
+ ],
819
+ "q99": [
820
+ 0.009999999776482582,
821
+ 0.009999999776482582,
822
+ 0.009999999776482582,
823
+ 0.03490658476948738,
824
+ 0.0,
825
+ 0.03490658476948738,
826
+ 1.0
827
+ ],
828
+ "std": [
829
+ 0.0034070091787725687,
830
+ 0.0049921851605176926,
831
+ 0.005344334989786148,
832
+ 0.00759894959628582,
833
+ 0.004081866703927517,
834
+ 0.008568956516683102,
835
+ 0.4586937427520752
836
+ ]
837
+ },
838
+ "num_trajectories": 415,
839
+ "num_transitions": 62613,
840
+ "proprio": {
841
+ "max": [
842
+ 0.0,
843
+ 0.0,
844
+ 0.0,
845
+ 0.0,
846
+ 0.0,
847
+ 0.0,
848
+ 0.0
849
+ ],
850
+ "mean": [
851
+ 0.0,
852
+ 0.0,
853
+ 0.0,
854
+ 0.0,
855
+ 0.0,
856
+ 0.0,
857
+ 0.0
858
+ ],
859
+ "min": [
860
+ 0.0,
861
+ 0.0,
862
+ 0.0,
863
+ 0.0,
864
+ 0.0,
865
+ 0.0,
866
+ 0.0
867
+ ],
868
+ "q01": [
869
+ 0.0,
870
+ 0.0,
871
+ 0.0,
872
+ 0.0,
873
+ 0.0,
874
+ 0.0,
875
+ 0.0
876
+ ],
877
+ "q99": [
878
+ 0.0,
879
+ 0.0,
880
+ 0.0,
881
+ 0.0,
882
+ 0.0,
883
+ 0.0,
884
+ 0.0
885
+ ],
886
+ "std": [
887
+ 0.0,
888
+ 0.0,
889
+ 0.0,
890
+ 0.0,
891
+ 0.0,
892
+ 0.0,
893
+ 0.0
894
+ ]
895
+ }
896
+ },
897
+ "bridge_orig": {
898
+ "action": {
899
+ "mask": [
900
+ true,
901
+ true,
902
+ true,
903
+ true,
904
+ true,
905
+ true,
906
+ false
907
+ ],
908
+ "max": [
909
+ 0.41691166162490845,
910
+ 0.25864794850349426,
911
+ 0.21218234300613403,
912
+ 3.122201919555664,
913
+ 1.8618112802505493,
914
+ 6.280478477478027,
915
+ 1.0
916
+ ],
917
+ "mean": [
918
+ 0.0002334194869035855,
919
+ 0.00013004911306779832,
920
+ -0.00012762474943883717,
921
+ -0.0001556558854645118,
922
+ -0.0004039328487124294,
923
+ 0.00023557482927571982,
924
+ 0.5764579176902771
925
+ ],
926
+ "min": [
927
+ -0.4007510244846344,
928
+ -0.13874775171279907,
929
+ -0.22553899884223938,
930
+ -3.2010786533355713,
931
+ -1.8618112802505493,
932
+ -6.279075622558594,
933
+ 0.0
934
+ ],
935
+ "q01": [
936
+ -0.02872725307941437,
937
+ -0.04170349963009357,
938
+ -0.026093858778476715,
939
+ -0.08092105075716972,
940
+ -0.09288699507713317,
941
+ -0.20718276381492615,
942
+ 0.0
943
+ ],
944
+ "q99": [
945
+ 0.028309678435325586,
946
+ 0.040855254605412394,
947
+ 0.040161586627364146,
948
+ 0.08192047759890528,
949
+ 0.07792850524187081,
950
+ 0.20382574498653397,
951
+ 1.0
952
+ ],
953
+ "std": [
954
+ 0.009765930473804474,
955
+ 0.013689135201275349,
956
+ 0.012667362578213215,
957
+ 0.028534092009067535,
958
+ 0.030637972056865692,
959
+ 0.07691419124603271,
960
+ 0.4973701536655426
961
+ ]
962
+ },
963
+ "num_trajectories": 60064,
964
+ "num_transitions": 2135463,
965
+ "proprio": {
966
+ "max": [
967
+ 0.0,
968
+ 0.0,
969
+ 0.0,
970
+ 0.0,
971
+ 0.0,
972
+ 0.0,
973
+ 0.0
974
+ ],
975
+ "mean": [
976
+ 0.0,
977
+ 0.0,
978
+ 0.0,
979
+ 0.0,
980
+ 0.0,
981
+ 0.0,
982
+ 0.0
983
+ ],
984
+ "min": [
985
+ 0.0,
986
+ 0.0,
987
+ 0.0,
988
+ 0.0,
989
+ 0.0,
990
+ 0.0,
991
+ 0.0
992
+ ],
993
+ "q01": [
994
+ 0.0,
995
+ 0.0,
996
+ 0.0,
997
+ 0.0,
998
+ 0.0,
999
+ 0.0,
1000
+ 0.0
1001
+ ],
1002
+ "q99": [
1003
+ 0.0,
1004
+ 0.0,
1005
+ 0.0,
1006
+ 0.0,
1007
+ 0.0,
1008
+ 0.0,
1009
+ 0.0
1010
+ ],
1011
+ "std": [
1012
+ 0.0,
1013
+ 0.0,
1014
+ 0.0,
1015
+ 0.0,
1016
+ 0.0,
1017
+ 0.0,
1018
+ 0.0
1019
+ ]
1020
+ }
1021
+ },
1022
+ "cmu_stretch": {
1023
+ "action": {
1024
+ "mask": [
1025
+ true,
1026
+ true,
1027
+ true,
1028
+ true,
1029
+ true,
1030
+ true,
1031
+ false
1032
+ ],
1033
+ "max": [
1034
+ 0.02338407188653946,
1035
+ 0.0,
1036
+ 0.023404927924275398,
1037
+ 0.0,
1038
+ 0.0,
1039
+ 0.0,
1040
+ 1.0
1041
+ ],
1042
+ "mean": [
1043
+ 0.00036304505192674696,
1044
+ 0.0,
1045
+ 0.0016466958913952112,
1046
+ 0.0,
1047
+ 0.0,
1048
+ 0.0,
1049
+ 0.3987048268318176
1050
+ ],
1051
+ "min": [
1052
+ -0.019353797659277916,
1053
+ 0.0,
1054
+ -0.02019215188920498,
1055
+ 0.0,
1056
+ 0.0,
1057
+ 0.0,
1058
+ 0.0
1059
+ ],
1060
+ "q01": [
1061
+ -0.011175686959177256,
1062
+ 0.0,
1063
+ -0.0032206363626755773,
1064
+ 0.0,
1065
+ 0.0,
1066
+ 0.0,
1067
+ 0.0
1068
+ ],
1069
+ "q99": [
1070
+ 0.014501785952597848,
1071
+ 0.0,
1072
+ 0.015056106168776728,
1073
+ 0.0,
1074
+ 0.0,
1075
+ 0.0,
1076
+ 1.0
1077
+ ],
1078
+ "std": [
1079
+ 0.004081828519701958,
1080
+ 0.0,
1081
+ 0.0037743328139185905,
1082
+ 0.0,
1083
+ 0.0,
1084
+ 0.0,
1085
+ 0.48963725566864014
1086
+ ]
1087
+ },
1088
+ "num_trajectories": 135,
1089
+ "num_transitions": 25016,
1090
+ "proprio": {
1091
+ "max": [
1092
+ 0.0,
1093
+ 0.0,
1094
+ 0.0,
1095
+ 0.0,
1096
+ 0.0,
1097
+ 0.0,
1098
+ 0.0
1099
+ ],
1100
+ "mean": [
1101
+ 0.0,
1102
+ 0.0,
1103
+ 0.0,
1104
+ 0.0,
1105
+ 0.0,
1106
+ 0.0,
1107
+ 0.0
1108
+ ],
1109
+ "min": [
1110
+ 0.0,
1111
+ 0.0,
1112
+ 0.0,
1113
+ 0.0,
1114
+ 0.0,
1115
+ 0.0,
1116
+ 0.0
1117
+ ],
1118
+ "q01": [
1119
+ 0.0,
1120
+ 0.0,
1121
+ 0.0,
1122
+ 0.0,
1123
+ 0.0,
1124
+ 0.0,
1125
+ 0.0
1126
+ ],
1127
+ "q99": [
1128
+ 0.0,
1129
+ 0.0,
1130
+ 0.0,
1131
+ 0.0,
1132
+ 0.0,
1133
+ 0.0,
1134
+ 0.0
1135
+ ],
1136
+ "std": [
1137
+ 0.0,
1138
+ 0.0,
1139
+ 0.0,
1140
+ 0.0,
1141
+ 0.0,
1142
+ 0.0,
1143
+ 0.0
1144
+ ]
1145
+ }
1146
+ },
1147
+ "dlr_edan_shared_control_converted_externally_to_rlds": {
1148
+ "action": {
1149
+ "mask": [
1150
+ true,
1151
+ true,
1152
+ true,
1153
+ true,
1154
+ true,
1155
+ true,
1156
+ false
1157
+ ],
1158
+ "max": [
1159
+ 0.18991442024707794,
1160
+ 0.0739002525806427,
1161
+ 0.18064819276332855,
1162
+ 0.0866486132144928,
1163
+ 0.13464981317520142,
1164
+ 0.16910280287265778,
1165
+ 1.0
1166
+ ],
1167
+ "mean": [
1168
+ 0.006647810339927673,
1169
+ -0.0007657372043468058,
1170
+ 0.006522852927446365,
1171
+ 0.0011679717572405934,
1172
+ -0.006395625416189432,
1173
+ -0.011902998201549053,
1174
+ 0.6985887289047241
1175
+ ],
1176
+ "min": [
1177
+ -0.10054297000169754,
1178
+ -0.08427435159683228,
1179
+ -0.13533438742160797,
1180
+ -0.17556548118591309,
1181
+ -0.18485672771930695,
1182
+ -0.2680685818195343,
1183
+ 0.0
1184
+ ],
1185
+ "q01": [
1186
+ -0.02987122368067503,
1187
+ -0.06013262912631035,
1188
+ -0.08286409199237824,
1189
+ -0.05924444157630205,
1190
+ -0.15986866518855095,
1191
+ -0.15636983573436739,
1192
+ 0.0
1193
+ ],
1194
+ "q99": [
1195
+ 0.08832092039287087,
1196
+ 0.042126184627413736,
1197
+ 0.11311905644834042,
1198
+ 0.0643695573508739,
1199
+ 0.03941855944693088,
1200
+ 0.156646853685379,
1201
+ 1.0
1202
+ ],
1203
+ "std": [
1204
+ 0.021393608301877975,
1205
+ 0.01814231649041176,
1206
+ 0.03374375030398369,
1207
+ 0.01743541844189167,
1208
+ 0.03394376486539841,
1209
+ 0.04641875624656677,
1210
+ 0.4588589072227478
1211
+ ]
1212
+ },
1213
+ "num_trajectories": 104,
1214
+ "num_transitions": 8928,
1215
+ "proprio": {
1216
+ "max": [
1217
+ 0.0,
1218
+ 0.0,
1219
+ 0.0,
1220
+ 0.0,
1221
+ 0.0,
1222
+ 0.0,
1223
+ 0.0
1224
+ ],
1225
+ "mean": [
1226
+ 0.0,
1227
+ 0.0,
1228
+ 0.0,
1229
+ 0.0,
1230
+ 0.0,
1231
+ 0.0,
1232
+ 0.0
1233
+ ],
1234
+ "min": [
1235
+ 0.0,
1236
+ 0.0,
1237
+ 0.0,
1238
+ 0.0,
1239
+ 0.0,
1240
+ 0.0,
1241
+ 0.0
1242
+ ],
1243
+ "q01": [
1244
+ 0.0,
1245
+ 0.0,
1246
+ 0.0,
1247
+ 0.0,
1248
+ 0.0,
1249
+ 0.0,
1250
+ 0.0
1251
+ ],
1252
+ "q99": [
1253
+ 0.0,
1254
+ 0.0,
1255
+ 0.0,
1256
+ 0.0,
1257
+ 0.0,
1258
+ 0.0,
1259
+ 0.0
1260
+ ],
1261
+ "std": [
1262
+ 0.0,
1263
+ 0.0,
1264
+ 0.0,
1265
+ 0.0,
1266
+ 0.0,
1267
+ 0.0,
1268
+ 0.0
1269
+ ]
1270
+ }
1271
+ },
1272
+ "dobbe": {
1273
+ "action": {
1274
+ "mask": [
1275
+ true,
1276
+ true,
1277
+ true,
1278
+ true,
1279
+ true,
1280
+ true,
1281
+ false
1282
+ ],
1283
+ "max": [
1284
+ 38.590423583984375,
1285
+ 17.932697296142578,
1286
+ 4.843764305114746,
1287
+ 1.4372116327285767,
1288
+ 0.4340403974056244,
1289
+ 1.2057193517684937,
1290
+ 0.9998947381973267
1291
+ ],
1292
+ "mean": [
1293
+ -0.0001120665911003016,
1294
+ 0.0011229600058868527,
1295
+ -0.00010194431524723768,
1296
+ -7.371398532995954e-05,
1297
+ -0.00067531579406932,
1298
+ -5.6643435527803376e-05,
1299
+ 0.6318281888961792
1300
+ ],
1301
+ "min": [
1302
+ -5.700923442840576,
1303
+ -21.605947494506836,
1304
+ -123.72489929199219,
1305
+ -1.7229845523834229,
1306
+ -0.4998578727245331,
1307
+ -0.8867913484573364,
1308
+ 1.4196479014572105e-06
1309
+ ],
1310
+ "q01": [
1311
+ -0.01119564864784479,
1312
+ -0.014266146533191203,
1313
+ -0.0071747214533388615,
1314
+ -0.009444301575422287,
1315
+ -0.03990109823644161,
1316
+ -0.017422311007976532,
1317
+ 4.003279136668425e-05
1318
+ ],
1319
+ "q99": [
1320
+ 0.01015154086053368,
1321
+ 0.017181577533483497,
1322
+ 0.007216989761218411,
1323
+ 0.010380979906767595,
1324
+ 0.03556173853576176,
1325
+ 0.018032474815845446,
1326
+ 0.9982578039169312
1327
+ ],
1328
+ "std": [
1329
+ 0.04264938458800316,
1330
+ 0.04428559169173241,
1331
+ 0.12224084138870239,
1332
+ 0.005388413090258837,
1333
+ 0.011246449314057827,
1334
+ 0.006287882570177317,
1335
+ 0.39732322096824646
1336
+ ]
1337
+ },
1338
+ "num_trajectories": 5208,
1339
+ "num_transitions": 1139911,
1340
+ "proprio": {
1341
+ "max": [
1342
+ 0.0,
1343
+ 0.0,
1344
+ 0.0,
1345
+ 0.0,
1346
+ 0.0,
1347
+ 0.0,
1348
+ 0.0
1349
+ ],
1350
+ "mean": [
1351
+ 0.0,
1352
+ 0.0,
1353
+ 0.0,
1354
+ 0.0,
1355
+ 0.0,
1356
+ 0.0,
1357
+ 0.0
1358
+ ],
1359
+ "min": [
1360
+ 0.0,
1361
+ 0.0,
1362
+ 0.0,
1363
+ 0.0,
1364
+ 0.0,
1365
+ 0.0,
1366
+ 0.0
1367
+ ],
1368
+ "q01": [
1369
+ 0.0,
1370
+ 0.0,
1371
+ 0.0,
1372
+ 0.0,
1373
+ 0.0,
1374
+ 0.0,
1375
+ 0.0
1376
+ ],
1377
+ "q99": [
1378
+ 0.0,
1379
+ 0.0,
1380
+ 0.0,
1381
+ 0.0,
1382
+ 0.0,
1383
+ 0.0,
1384
+ 0.0
1385
+ ],
1386
+ "std": [
1387
+ 0.0,
1388
+ 0.0,
1389
+ 0.0,
1390
+ 0.0,
1391
+ 0.0,
1392
+ 0.0,
1393
+ 0.0
1394
+ ]
1395
+ }
1396
+ },
1397
+ "fmb_dataset": {
1398
+ "action": {
1399
+ "mask": [
1400
+ true,
1401
+ true,
1402
+ true,
1403
+ true,
1404
+ true,
1405
+ true,
1406
+ false
1407
+ ],
1408
+ "max": [
1409
+ 1.399999976158142,
1410
+ 1.0,
1411
+ 1.399999976158142,
1412
+ 1.0,
1413
+ 1.0,
1414
+ 1.0,
1415
+ 1.0
1416
+ ],
1417
+ "mean": [
1418
+ 0.059029702097177505,
1419
+ -0.06476633995771408,
1420
+ -0.09787475317716599,
1421
+ 0.004325388930737972,
1422
+ 0.00028963794466108084,
1423
+ -0.04457257315516472,
1424
+ 0.7336440086364746
1425
+ ],
1426
+ "min": [
1427
+ -1.399999976158142,
1428
+ -1.399999976158142,
1429
+ -1.0,
1430
+ -1.0,
1431
+ -1.0,
1432
+ -1.0,
1433
+ 0.0
1434
+ ],
1435
+ "q01": [
1436
+ -0.8257142901420593,
1437
+ -1.399999976158142,
1438
+ -1.0,
1439
+ -1.0,
1440
+ -0.3028571307659149,
1441
+ -1.0,
1442
+ 0.0
1443
+ ],
1444
+ "q99": [
1445
+ 1.0,
1446
+ 0.5257142782211304,
1447
+ 1.0,
1448
+ 1.0,
1449
+ 0.3400000035762787,
1450
+ 1.0,
1451
+ 1.0
1452
+ ],
1453
+ "std": [
1454
+ 0.28809213638305664,
1455
+ 0.2820415794849396,
1456
+ 0.4626740515232086,
1457
+ 0.3266514539718628,
1458
+ 0.10842999070882797,
1459
+ 0.3440099358558655,
1460
+ 0.4435282051563263
1461
+ ]
1462
+ },
1463
+ "num_trajectories": 8612,
1464
+ "num_transitions": 1137459,
1465
+ "proprio": {
1466
+ "max": [
1467
+ 0.0,
1468
+ 0.0,
1469
+ 0.0,
1470
+ 0.0,
1471
+ 0.0,
1472
+ 0.0,
1473
+ 0.0
1474
+ ],
1475
+ "mean": [
1476
+ 0.0,
1477
+ 0.0,
1478
+ 0.0,
1479
+ 0.0,
1480
+ 0.0,
1481
+ 0.0,
1482
+ 0.0
1483
+ ],
1484
+ "min": [
1485
+ 0.0,
1486
+ 0.0,
1487
+ 0.0,
1488
+ 0.0,
1489
+ 0.0,
1490
+ 0.0,
1491
+ 0.0
1492
+ ],
1493
+ "q01": [
1494
+ 0.0,
1495
+ 0.0,
1496
+ 0.0,
1497
+ 0.0,
1498
+ 0.0,
1499
+ 0.0,
1500
+ 0.0
1501
+ ],
1502
+ "q99": [
1503
+ 0.0,
1504
+ 0.0,
1505
+ 0.0,
1506
+ 0.0,
1507
+ 0.0,
1508
+ 0.0,
1509
+ 0.0
1510
+ ],
1511
+ "std": [
1512
+ 0.0,
1513
+ 0.0,
1514
+ 0.0,
1515
+ 0.0,
1516
+ 0.0,
1517
+ 0.0,
1518
+ 0.0
1519
+ ]
1520
+ }
1521
+ },
1522
+ "fractal20220817_data": {
1523
+ "action": {
1524
+ "mask": [
1525
+ true,
1526
+ true,
1527
+ true,
1528
+ true,
1529
+ true,
1530
+ true,
1531
+ false
1532
+ ],
1533
+ "max": [
1534
+ 2.9984593391418457,
1535
+ 22.09052848815918,
1536
+ 2.7507524490356445,
1537
+ 1.570636510848999,
1538
+ 1.5321086645126343,
1539
+ 1.5691522359848022,
1540
+ 1.0
1541
+ ],
1542
+ "mean": [
1543
+ 0.006987582892179489,
1544
+ 0.006265917327255011,
1545
+ -0.01262515690177679,
1546
+ 0.04333311319351196,
1547
+ -0.005756212864071131,
1548
+ 0.0009130256366916001,
1549
+ 0.5354204773902893
1550
+ ],
1551
+ "min": [
1552
+ -2.0204520225524902,
1553
+ -5.497899532318115,
1554
+ -2.031663417816162,
1555
+ -1.569917917251587,
1556
+ -1.569892168045044,
1557
+ -1.570419430732727,
1558
+ 0.0
1559
+ ],
1560
+ "q01": [
1561
+ -0.22453527510166169,
1562
+ -0.14820013284683228,
1563
+ -0.231589707583189,
1564
+ -0.3517994859814644,
1565
+ -0.4193011274933815,
1566
+ -0.43643461108207704,
1567
+ 0.0
1568
+ ],
1569
+ "q99": [
1570
+ 0.17824687153100965,
1571
+ 0.14938379630446405,
1572
+ 0.21842354819178575,
1573
+ 0.5892666035890578,
1574
+ 0.35272657424211445,
1575
+ 0.44796681255102094,
1576
+ 1.0
1577
+ ],
1578
+ "std": [
1579
+ 0.0692116990685463,
1580
+ 0.05970962345600128,
1581
+ 0.07353084534406662,
1582
+ 0.15610496699810028,
1583
+ 0.13164450228214264,
1584
+ 0.14593800902366638,
1585
+ 0.497110515832901
1586
+ ]
1587
+ },
1588
+ "num_trajectories": 87212,
1589
+ "num_transitions": 3786400,
1590
+ "proprio": {
1591
+ "max": [
1592
+ 0.0,
1593
+ 0.0,
1594
+ 0.0,
1595
+ 0.0,
1596
+ 0.0,
1597
+ 0.0,
1598
+ 0.0
1599
+ ],
1600
+ "mean": [
1601
+ 0.0,
1602
+ 0.0,
1603
+ 0.0,
1604
+ 0.0,
1605
+ 0.0,
1606
+ 0.0,
1607
+ 0.0
1608
+ ],
1609
+ "min": [
1610
+ 0.0,
1611
+ 0.0,
1612
+ 0.0,
1613
+ 0.0,
1614
+ 0.0,
1615
+ 0.0,
1616
+ 0.0
1617
+ ],
1618
+ "q01": [
1619
+ 0.0,
1620
+ 0.0,
1621
+ 0.0,
1622
+ 0.0,
1623
+ 0.0,
1624
+ 0.0,
1625
+ 0.0
1626
+ ],
1627
+ "q99": [
1628
+ 0.0,
1629
+ 0.0,
1630
+ 0.0,
1631
+ 0.0,
1632
+ 0.0,
1633
+ 0.0,
1634
+ 0.0
1635
+ ],
1636
+ "std": [
1637
+ 0.0,
1638
+ 0.0,
1639
+ 0.0,
1640
+ 0.0,
1641
+ 0.0,
1642
+ 0.0,
1643
+ 0.0
1644
+ ]
1645
+ }
1646
+ },
1647
+ "furniture_bench_dataset_converted_externally_to_rlds": {
1648
+ "action": {
1649
+ "mask": [
1650
+ true,
1651
+ true,
1652
+ true,
1653
+ true,
1654
+ true,
1655
+ true,
1656
+ false
1657
+ ],
1658
+ "max": [
1659
+ 0.10000000149011612,
1660
+ 0.10000000149011612,
1661
+ 0.10000000149011612,
1662
+ 0.8651833534240723,
1663
+ 1.0909736156463623,
1664
+ 2.863185405731201,
1665
+ 1.0
1666
+ ],
1667
+ "mean": [
1668
+ 0.00014610752987209707,
1669
+ 0.0010830952087417245,
1670
+ 0.0006224989192560315,
1671
+ -0.003303206292912364,
1672
+ -0.0026880695950239897,
1673
+ 0.018242603167891502,
1674
+ 0.48854944109916687
1675
+ ],
1676
+ "min": [
1677
+ -0.10495579987764359,
1678
+ -0.10939455777406693,
1679
+ -0.10000000149011612,
1680
+ -0.971906840801239,
1681
+ -1.0475432872772217,
1682
+ -3.06000018119812,
1683
+ 0.0
1684
+ ],
1685
+ "q01": [
1686
+ -0.053988199681043625,
1687
+ -0.05049169331789017,
1688
+ -0.032499241530895236,
1689
+ -0.1953887003660202,
1690
+ -0.41674559473991396,
1691
+ -0.8886768388748169,
1692
+ 0.0
1693
+ ],
1694
+ "q99": [
1695
+ 0.05414841488003723,
1696
+ 0.04965164884924884,
1697
+ 0.060055799782276154,
1698
+ 0.18231668293476103,
1699
+ 0.39867786407470646,
1700
+ 0.8772023963928218,
1701
+ 1.0
1702
+ ],
1703
+ "std": [
1704
+ 0.01610708422958851,
1705
+ 0.014891477301716805,
1706
+ 0.014014219865202904,
1707
+ 0.058274295181035995,
1708
+ 0.11417088657617569,
1709
+ 0.33479776978492737,
1710
+ 0.49991825222969055
1711
+ ]
1712
+ },
1713
+ "num_trajectories": 5100,
1714
+ "num_transitions": 3948057,
1715
+ "proprio": {
1716
+ "max": [
1717
+ 0.0,
1718
+ 0.0,
1719
+ 0.0,
1720
+ 0.0,
1721
+ 0.0,
1722
+ 0.0,
1723
+ 0.0
1724
+ ],
1725
+ "mean": [
1726
+ 0.0,
1727
+ 0.0,
1728
+ 0.0,
1729
+ 0.0,
1730
+ 0.0,
1731
+ 0.0,
1732
+ 0.0
1733
+ ],
1734
+ "min": [
1735
+ 0.0,
1736
+ 0.0,
1737
+ 0.0,
1738
+ 0.0,
1739
+ 0.0,
1740
+ 0.0,
1741
+ 0.0
1742
+ ],
1743
+ "q01": [
1744
+ 0.0,
1745
+ 0.0,
1746
+ 0.0,
1747
+ 0.0,
1748
+ 0.0,
1749
+ 0.0,
1750
+ 0.0
1751
+ ],
1752
+ "q99": [
1753
+ 0.0,
1754
+ 0.0,
1755
+ 0.0,
1756
+ 0.0,
1757
+ 0.0,
1758
+ 0.0,
1759
+ 0.0
1760
+ ],
1761
+ "std": [
1762
+ 0.0,
1763
+ 0.0,
1764
+ 0.0,
1765
+ 0.0,
1766
+ 0.0,
1767
+ 0.0,
1768
+ 0.0
1769
+ ]
1770
+ }
1771
+ },
1772
+ "iamlab_cmu_pickup_insert_converted_externally_to_rlds": {
1773
+ "action": {
1774
+ "mask": [
1775
+ true,
1776
+ true,
1777
+ true,
1778
+ true,
1779
+ true,
1780
+ true,
1781
+ false
1782
+ ],
1783
+ "max": [
1784
+ 0.6634981632232666,
1785
+ 0.23428471386432648,
1786
+ 0.4308285415172577,
1787
+ 3.1415927410125732,
1788
+ 0.13647015392780304,
1789
+ 3.141592502593994,
1790
+ 1.0
1791
+ ],
1792
+ "mean": [
1793
+ 0.5274372696876526,
1794
+ 0.02858201041817665,
1795
+ 0.18712575733661652,
1796
+ 1.2339589595794678,
1797
+ 0.03226623684167862,
1798
+ -1.4199490547180176,
1799
+ 0.5550631880760193
1800
+ ],
1801
+ "min": [
1802
+ 0.3071657121181488,
1803
+ -0.29754969477653503,
1804
+ 0.06578229367733002,
1805
+ -3.1415927410125732,
1806
+ -0.04584203287959099,
1807
+ -3.141592502593994,
1808
+ 0.0
1809
+ ],
1810
+ "q01": [
1811
+ 0.3148897051811218,
1812
+ -0.20317550599575043,
1813
+ 0.06785467118024827,
1814
+ -3.140952730178833,
1815
+ -0.029743434861302376,
1816
+ -3.141091251373291,
1817
+ 0.0
1818
+ ],
1819
+ "q99": [
1820
+ 0.6472805738449097,
1821
+ 0.20846802592277527,
1822
+ 0.36855655312538155,
1823
+ 3.1409926891326903,
1824
+ 0.11424950212240226,
1825
+ 3.1410969257354737,
1826
+ 1.0
1827
+ ],
1828
+ "std": [
1829
+ 0.08108345419168472,
1830
+ 0.1116757020354271,
1831
+ 0.07747554779052734,
1832
+ 2.8737246990203857,
1833
+ 0.02774704433977604,
1834
+ 2.7678682804107666,
1835
+ 0.49695101380348206
1836
+ ]
1837
+ },
1838
+ "num_trajectories": 631,
1839
+ "num_transitions": 146241,
1840
+ "proprio": {
1841
+ "max": [
1842
+ 0.0,
1843
+ 0.0,
1844
+ 0.0,
1845
+ 0.0,
1846
+ 0.0,
1847
+ 0.0,
1848
+ 0.0
1849
+ ],
1850
+ "mean": [
1851
+ 0.0,
1852
+ 0.0,
1853
+ 0.0,
1854
+ 0.0,
1855
+ 0.0,
1856
+ 0.0,
1857
+ 0.0
1858
+ ],
1859
+ "min": [
1860
+ 0.0,
1861
+ 0.0,
1862
+ 0.0,
1863
+ 0.0,
1864
+ 0.0,
1865
+ 0.0,
1866
+ 0.0
1867
+ ],
1868
+ "q01": [
1869
+ 0.0,
1870
+ 0.0,
1871
+ 0.0,
1872
+ 0.0,
1873
+ 0.0,
1874
+ 0.0,
1875
+ 0.0
1876
+ ],
1877
+ "q99": [
1878
+ 0.0,
1879
+ 0.0,
1880
+ 0.0,
1881
+ 0.0,
1882
+ 0.0,
1883
+ 0.0,
1884
+ 0.0
1885
+ ],
1886
+ "std": [
1887
+ 0.0,
1888
+ 0.0,
1889
+ 0.0,
1890
+ 0.0,
1891
+ 0.0,
1892
+ 0.0,
1893
+ 0.0
1894
+ ]
1895
+ }
1896
+ },
1897
+ "jaco_play": {
1898
+ "action": {
1899
+ "mask": [
1900
+ true,
1901
+ true,
1902
+ true,
1903
+ true,
1904
+ true,
1905
+ true,
1906
+ false
1907
+ ],
1908
+ "max": [
1909
+ 0.20000000298023224,
1910
+ 0.20000000298023224,
1911
+ 0.20000000298023224,
1912
+ 0.0,
1913
+ 0.0,
1914
+ 0.0,
1915
+ 1.0
1916
+ ],
1917
+ "mean": [
1918
+ 0.0009658430935814977,
1919
+ -0.00580078037455678,
1920
+ -0.00395062193274498,
1921
+ 0.0,
1922
+ 0.0,
1923
+ 0.0,
1924
+ 0.34934908151626587
1925
+ ],
1926
+ "min": [
1927
+ -0.20000000298023224,
1928
+ -0.20000000298023224,
1929
+ -0.20000000298023224,
1930
+ 0.0,
1931
+ 0.0,
1932
+ 0.0,
1933
+ 0.0
1934
+ ],
1935
+ "q01": [
1936
+ -0.20000000298023224,
1937
+ -0.20000000298023224,
1938
+ -0.20000000298023224,
1939
+ 0.0,
1940
+ 0.0,
1941
+ 0.0,
1942
+ 0.0
1943
+ ],
1944
+ "q99": [
1945
+ 0.20000000298023224,
1946
+ 0.20000000298023224,
1947
+ 0.20000000298023224,
1948
+ 0.0,
1949
+ 0.0,
1950
+ 0.0,
1951
+ 1.0
1952
+ ],
1953
+ "std": [
1954
+ 0.12235074490308762,
1955
+ 0.09678777307271957,
1956
+ 0.11155334860086441,
1957
+ 0.0,
1958
+ 0.0,
1959
+ 0.0,
1960
+ 0.4768252968788147
1961
+ ]
1962
+ },
1963
+ "num_trajectories": 1085,
1964
+ "num_transitions": 77965,
1965
+ "proprio": {
1966
+ "max": [
1967
+ 0.0,
1968
+ 0.0,
1969
+ 0.0,
1970
+ 0.0,
1971
+ 0.0,
1972
+ 0.0,
1973
+ 0.0
1974
+ ],
1975
+ "mean": [
1976
+ 0.0,
1977
+ 0.0,
1978
+ 0.0,
1979
+ 0.0,
1980
+ 0.0,
1981
+ 0.0,
1982
+ 0.0
1983
+ ],
1984
+ "min": [
1985
+ 0.0,
1986
+ 0.0,
1987
+ 0.0,
1988
+ 0.0,
1989
+ 0.0,
1990
+ 0.0,
1991
+ 0.0
1992
+ ],
1993
+ "q01": [
1994
+ 0.0,
1995
+ 0.0,
1996
+ 0.0,
1997
+ 0.0,
1998
+ 0.0,
1999
+ 0.0,
2000
+ 0.0
2001
+ ],
2002
+ "q99": [
2003
+ 0.0,
2004
+ 0.0,
2005
+ 0.0,
2006
+ 0.0,
2007
+ 0.0,
2008
+ 0.0,
2009
+ 0.0
2010
+ ],
2011
+ "std": [
2012
+ 0.0,
2013
+ 0.0,
2014
+ 0.0,
2015
+ 0.0,
2016
+ 0.0,
2017
+ 0.0,
2018
+ 0.0
2019
+ ]
2020
+ }
2021
+ },
2022
+ "kuka": {
2023
+ "action": {
2024
+ "mask": [
2025
+ true,
2026
+ true,
2027
+ true,
2028
+ true,
2029
+ true,
2030
+ true,
2031
+ false
2032
+ ],
2033
+ "max": [
2034
+ 0.1697135865688324,
2035
+ 0.2777623236179352,
2036
+ 0.43710532784461975,
2037
+ 0.0,
2038
+ 0.0,
2039
+ 1.9684287309646606,
2040
+ 1.0
2041
+ ],
2042
+ "mean": [
2043
+ -0.0004668905457947403,
2044
+ 0.00040138536132872105,
2045
+ -0.001280792523175478,
2046
+ 0.0,
2047
+ 0.0,
2048
+ -0.03722453489899635,
2049
+ 0.4131543040275574
2050
+ ],
2051
+ "min": [
2052
+ -0.159867063164711,
2053
+ -0.2892282009124756,
2054
+ -0.2795473635196686,
2055
+ 0.0,
2056
+ 0.0,
2057
+ -1.9875637292861938,
2058
+ 0.0
2059
+ ],
2060
+ "q01": [
2061
+ -0.06619441494345665,
2062
+ -0.08713878810405731,
2063
+ -0.15083016991615295,
2064
+ 0.0,
2065
+ 0.0,
2066
+ -0.5415697038173676,
2067
+ 0.0
2068
+ ],
2069
+ "q99": [
2070
+ 0.06601839080452929,
2071
+ 0.08732476785779003,
2072
+ 0.18168179214000715,
2073
+ 0.0,
2074
+ 0.0,
2075
+ 0.2923380345106127,
2076
+ 1.0
2077
+ ],
2078
+ "std": [
2079
+ 0.02083250693976879,
2080
+ 0.02915887162089348,
2081
+ 0.06422865390777588,
2082
+ 0.0,
2083
+ 0.0,
2084
+ 0.14224295318126678,
2085
+ 0.49086448550224304
2086
+ ]
2087
+ },
2088
+ "num_trajectories": 209880,
2089
+ "num_transitions": 2455879,
2090
+ "proprio": {
2091
+ "max": [
2092
+ 0.0,
2093
+ 0.0,
2094
+ 0.0,
2095
+ 0.0,
2096
+ 0.0,
2097
+ 0.0,
2098
+ 0.0
2099
+ ],
2100
+ "mean": [
2101
+ 0.0,
2102
+ 0.0,
2103
+ 0.0,
2104
+ 0.0,
2105
+ 0.0,
2106
+ 0.0,
2107
+ 0.0
2108
+ ],
2109
+ "min": [
2110
+ 0.0,
2111
+ 0.0,
2112
+ 0.0,
2113
+ 0.0,
2114
+ 0.0,
2115
+ 0.0,
2116
+ 0.0
2117
+ ],
2118
+ "q01": [
2119
+ 0.0,
2120
+ 0.0,
2121
+ 0.0,
2122
+ 0.0,
2123
+ 0.0,
2124
+ 0.0,
2125
+ 0.0
2126
+ ],
2127
+ "q99": [
2128
+ 0.0,
2129
+ 0.0,
2130
+ 0.0,
2131
+ 0.0,
2132
+ 0.0,
2133
+ 0.0,
2134
+ 0.0
2135
+ ],
2136
+ "std": [
2137
+ 0.0,
2138
+ 0.0,
2139
+ 0.0,
2140
+ 0.0,
2141
+ 0.0,
2142
+ 0.0,
2143
+ 0.0
2144
+ ]
2145
+ }
2146
+ },
2147
+ "libero_10_no_noops": {
2148
+ "action": {
2149
+ "mask": [
2150
+ true,
2151
+ true,
2152
+ true,
2153
+ true,
2154
+ true,
2155
+ true,
2156
+ false
2157
+ ],
2158
+ "max": [
2159
+ 0.9375,
2160
+ 0.9375,
2161
+ 0.9375,
2162
+ 0.30000001192092896,
2163
+ 0.29357144236564636,
2164
+ 0.375,
2165
+ 1.0
2166
+ ],
2167
+ "mean": [
2168
+ 0.01820324920117855,
2169
+ 0.05858374014496803,
2170
+ -0.05592384561896324,
2171
+ 0.004626928828656673,
2172
+ 0.00289608770981431,
2173
+ -0.007673131301999092,
2174
+ 0.5457824468612671
2175
+ ],
2176
+ "min": [
2177
+ -0.9375,
2178
+ -0.9375,
2179
+ -0.9375,
2180
+ -0.23642857372760773,
2181
+ -0.3053571283817291,
2182
+ -0.3675000071525574,
2183
+ 0.0
2184
+ ],
2185
+ "q01": [
2186
+ -0.6348214149475098,
2187
+ -0.7741071581840515,
2188
+ -0.7633928656578064,
2189
+ -0.09749999642372131,
2190
+ -0.14819999992847435,
2191
+ -0.2742857038974762,
2192
+ 0.0
2193
+ ],
2194
+ "q99": [
2195
+ 0.7714285850524902,
2196
+ 0.8464285731315613,
2197
+ 0.9375,
2198
+ 0.13928571343421936,
2199
+ 0.15964286029338837,
2200
+ 0.3246428668498993,
2201
+ 1.0
2202
+ ],
2203
+ "std": [
2204
+ 0.2825464606285095,
2205
+ 0.35904666781425476,
2206
+ 0.3673802614212036,
2207
+ 0.03770702704787254,
2208
+ 0.05429719388484955,
2209
+ 0.08725254982709885,
2210
+ 0.49815231561660767
2211
+ ]
2212
+ },
2213
+ "num_trajectories": 379,
2214
+ "num_transitions": 101469,
2215
+ "proprio": {
2216
+ "max": [
2217
+ 0.21031762659549713,
2218
+ 0.39128610491752625,
2219
+ 1.3332009315490723,
2220
+ 3.6714255809783936,
2221
+ 3.560650587081909,
2222
+ 1.386339545249939,
2223
+ 0.04160946607589722,
2224
+ 0.0013633022317662835
2225
+ ],
2226
+ "mean": [
2227
+ -0.04190658777952194,
2228
+ 0.03539430722594261,
2229
+ 0.8257141709327698,
2230
+ 2.908308267593384,
2231
+ -0.5562185049057007,
2232
+ -0.16649018228054047,
2233
+ 0.028316624462604523,
2234
+ -0.028561657294631004
2235
+ ],
2236
+ "min": [
2237
+ -0.4828203022480011,
2238
+ -0.3255046010017395,
2239
+ 0.445506751537323,
2240
+ 1.1321442127227783,
2241
+ -3.641430377960205,
2242
+ -1.842738389968872,
2243
+ -0.0010040868073701859,
2244
+ -0.04111652821302414
2245
+ ],
2246
+ "q01": [
2247
+ -0.3899900782108307,
2248
+ -0.2838300323486328,
2249
+ 0.44795057058334353,
2250
+ 1.8810229921340942,
2251
+ -2.886677579879761,
2252
+ -1.1599004411697387,
2253
+ 0.002066459748893976,
2254
+ -0.04001387819647789
2255
+ ],
2256
+ "q99": [
2257
+ 0.1530261474847791,
2258
+ 0.32915401458740223,
2259
+ 1.2546923208236693,
2260
+ 3.303542451858519,
2261
+ 2.7496529006957933,
2262
+ 0.6893712210655194,
2263
+ 0.040048558115959164,
2264
+ -0.0017598449345678235
2265
+ ],
2266
+ "std": [
2267
+ 0.10743364691734314,
2268
+ 0.14424669742584229,
2269
+ 0.2572328448295593,
2270
+ 0.3441362977027893,
2271
+ 1.234421730041504,
2272
+ 0.3579835891723633,
2273
+ 0.013308707624673843,
2274
+ 0.013174631632864475
2275
+ ]
2276
+ }
2277
+ },
2278
+ "libero_goal_no_noops": {
2279
+ "action": {
2280
+ "mask": [
2281
+ true,
2282
+ true,
2283
+ true,
2284
+ true,
2285
+ true,
2286
+ true,
2287
+ false
2288
+ ],
2289
+ "max": [
2290
+ 0.9375,
2291
+ 0.9375,
2292
+ 0.9375,
2293
+ 0.3557142913341522,
2294
+ 0.375,
2295
+ 0.375,
2296
+ 1.0
2297
+ ],
2298
+ "mean": [
2299
+ 0.04721052572131157,
2300
+ 0.028835246339440346,
2301
+ -0.1485840231180191,
2302
+ -0.0025010062381625175,
2303
+ 0.026408178731799126,
2304
+ 0.027379808947443962,
2305
+ 0.6299911737442017
2306
+ ],
2307
+ "min": [
2308
+ -0.9375,
2309
+ -0.9375,
2310
+ -0.9375,
2311
+ -0.2582142949104309,
2312
+ -0.375,
2313
+ -0.2871428430080414,
2314
+ 0.0
2315
+ ],
2316
+ "q01": [
2317
+ -0.8785714507102966,
2318
+ -0.7553571462631226,
2319
+ -0.9375,
2320
+ -0.1510714292526245,
2321
+ -0.1639285683631897,
2322
+ -0.13777500048279764,
2323
+ 0.0
2324
+ ],
2325
+ "q99": [
2326
+ 0.9375,
2327
+ 0.9107142686843872,
2328
+ 0.9375,
2329
+ 0.20357142388820648,
2330
+ 0.26357144117355347,
2331
+ 0.375,
2332
+ 1.0
2333
+ ],
2334
+ "std": [
2335
+ 0.3968801498413086,
2336
+ 0.3473387360572815,
2337
+ 0.49239858984947205,
2338
+ 0.055331431329250336,
2339
+ 0.07844757288694382,
2340
+ 0.10008802264928818,
2341
+ 0.48270025849342346
2342
+ ]
2343
+ },
2344
+ "num_trajectories": 428,
2345
+ "num_transitions": 52042,
2346
+ "proprio": {
2347
+ "max": [
2348
+ 0.13579000532627106,
2349
+ 0.33316105604171753,
2350
+ 1.3660105466842651,
2351
+ 3.473310708999634,
2352
+ 2.6688623428344727,
2353
+ 0.8255361318588257,
2354
+ 0.04233968257904053,
2355
+ 0.0010111660230904818
2356
+ ],
2357
+ "mean": [
2358
+ -0.09923473745584488,
2359
+ 0.013597904704511166,
2360
+ 1.0694637298583984,
2361
+ 2.82898211479187,
2362
+ 0.30799180269241333,
2363
+ -0.274286687374115,
2364
+ 0.028092455118894577,
2365
+ -0.027339335530996323
2366
+ ],
2367
+ "min": [
2368
+ -0.46141114830970764,
2369
+ -0.30129560828208923,
2370
+ 0.9083037972450256,
2371
+ 0.35277295112609863,
2372
+ -1.4858465194702148,
2373
+ -1.5227035284042358,
2374
+ -0.0013586411951109767,
2375
+ -0.042040832340717316
2376
+ ],
2377
+ "q01": [
2378
+ -0.42401049643754957,
2379
+ -0.27338370531797407,
2380
+ 0.911226047873497,
2381
+ 1.3085840785503386,
2382
+ -0.691297555565834,
2383
+ -1.130668159723282,
2384
+ 0.0016738151130266487,
2385
+ -0.040336399003863335
2386
+ ],
2387
+ "q99": [
2388
+ 0.08990443304181095,
2389
+ 0.26473945528268716,
2390
+ 1.2910678112506866,
2391
+ 3.2425890421867365,
2392
+ 2.3376442337036116,
2393
+ 0.4659483411908149,
2394
+ 0.040610933862626555,
2395
+ -0.0015016929572448147
2396
+ ],
2397
+ "std": [
2398
+ 0.11653962731361389,
2399
+ 0.11478105187416077,
2400
+ 0.10487838834524155,
2401
+ 0.5570293664932251,
2402
+ 0.7221656441688538,
2403
+ 0.36479514837265015,
2404
+ 0.01507475133985281,
2405
+ 0.014990941621363163
2406
+ ]
2407
+ }
2408
+ },
2409
+ "libero_object_no_noops": {
2410
+ "action": {
2411
+ "mask": [
2412
+ true,
2413
+ true,
2414
+ true,
2415
+ true,
2416
+ true,
2417
+ true,
2418
+ false
2419
+ ],
2420
+ "max": [
2421
+ 0.9375,
2422
+ 0.8919642567634583,
2423
+ 0.9375,
2424
+ 0.17678570747375488,
2425
+ 0.35035714507102966,
2426
+ 0.1810714304447174,
2427
+ 1.0
2428
+ ],
2429
+ "mean": [
2430
+ 0.07096529006958008,
2431
+ 0.13498851656913757,
2432
+ -0.04601382836699486,
2433
+ 0.00123520044144243,
2434
+ 0.006998839322477579,
2435
+ -0.015027612447738647,
2436
+ 0.46428999304771423
2437
+ ],
2438
+ "min": [
2439
+ -0.8839285969734192,
2440
+ -0.9375,
2441
+ -0.9375,
2442
+ -0.15000000596046448,
2443
+ -0.29035714268684387,
2444
+ -0.32892856001853943,
2445
+ 0.0
2446
+ ],
2447
+ "q01": [
2448
+ -0.5383928418159485,
2449
+ -0.8758928775787354,
2450
+ -0.9375,
2451
+ -0.06964285671710968,
2452
+ -0.11678571254014969,
2453
+ -0.15964286029338837,
2454
+ 0.0
2455
+ ],
2456
+ "q99": [
2457
+ 0.8464285731315613,
2458
+ 0.84375,
2459
+ 0.9375,
2460
+ 0.08142857253551483,
2461
+ 0.14892856776714325,
2462
+ 0.0867857113480568,
2463
+ 1.0
2464
+ ],
2465
+ "std": [
2466
+ 0.2681235373020172,
2467
+ 0.43846824765205383,
2468
+ 0.4474974274635315,
2469
+ 0.024446550756692886,
2470
+ 0.049355510622262955,
2471
+ 0.042107198387384415,
2472
+ 0.49879148602485657
2473
+ ]
2474
+ },
2475
+ "num_trajectories": 454,
2476
+ "num_transitions": 66984,
2477
+ "proprio": {
2478
+ "max": [
2479
+ 0.14580604434013367,
2480
+ 0.33216384053230286,
2481
+ 0.3857804834842682,
2482
+ 3.4003844261169434,
2483
+ 0.7954911589622498,
2484
+ 0.6642207503318787,
2485
+ 0.04104341194033623,
2486
+ -0.00018117300351150334
2487
+ ],
2488
+ "mean": [
2489
+ -0.02999030612409115,
2490
+ -0.007947085425257683,
2491
+ 0.20293472707271576,
2492
+ 3.1086409091949463,
2493
+ -0.21404768526554108,
2494
+ -0.11307074874639511,
2495
+ 0.029380427673459053,
2496
+ -0.030556727200746536
2497
+ ],
2498
+ "min": [
2499
+ -0.1765444278717041,
2500
+ -0.29457300901412964,
2501
+ 0.008128180168569088,
2502
+ 2.2890501022338867,
2503
+ -1.883241891860962,
2504
+ -1.0600427389144897,
2505
+ 0.0006495157140307128,
2506
+ -0.041782498359680176
2507
+ ],
2508
+ "q01": [
2509
+ -0.14911890715360643,
2510
+ -0.25978428691625594,
2511
+ 0.009925739830359817,
2512
+ 2.7545341420173646,
2513
+ -1.3996034812927245,
2514
+ -0.6867720144987106,
2515
+ 0.008197814421728254,
2516
+ -0.04015838988125324
2517
+ ],
2518
+ "q99": [
2519
+ 0.09063626825809479,
2520
+ 0.29066365867853167,
2521
+ 0.3370887073874472,
2522
+ 3.2611824750900267,
2523
+ 0.32092821151018125,
2524
+ 0.4037663781642913,
2525
+ 0.039891827926039694,
2526
+ -0.009106044843792932
2527
+ ],
2528
+ "std": [
2529
+ 0.06694897264242172,
2530
+ 0.17608462274074554,
2531
+ 0.07807064801454544,
2532
+ 0.0868484303355217,
2533
+ 0.33540457487106323,
2534
+ 0.20728276669979095,
2535
+ 0.00956575945019722,
2536
+ 0.009197483770549297
2537
+ ]
2538
+ }
2539
+ },
2540
+ "libero_spatial_no_noops": {
2541
+ "action": {
2542
+ "mask": [
2543
+ true,
2544
+ true,
2545
+ true,
2546
+ true,
2547
+ true,
2548
+ true,
2549
+ false
2550
+ ],
2551
+ "max": [
2552
+ 0.9375,
2553
+ 0.9375,
2554
+ 0.9375,
2555
+ 0.1971428543329239,
2556
+ 0.33642858266830444,
2557
+ 0.375,
2558
+ 1.0
2559
+ ],
2560
+ "mean": [
2561
+ 0.15312479436397552,
2562
+ 0.13707277178764343,
2563
+ -0.15526802837848663,
2564
+ -0.005176450591534376,
2565
+ -0.01120874285697937,
2566
+ -0.020194264128804207,
2567
+ 0.4578818082809448
2568
+ ],
2569
+ "min": [
2570
+ -0.9375,
2571
+ -0.9375,
2572
+ -0.9375,
2573
+ -0.1875,
2574
+ -0.3675000071525574,
2575
+ -0.36000001430511475,
2576
+ 0.0
2577
+ ],
2578
+ "q01": [
2579
+ -0.7454732114076613,
2580
+ -0.6616071462631226,
2581
+ -0.9375,
2582
+ -0.1071428582072258,
2583
+ -0.20678570866584778,
2584
+ -0.1842857152223587,
2585
+ 0.0
2586
+ ],
2587
+ "q99": [
2588
+ 0.9375,
2589
+ 0.8758928775787354,
2590
+ 0.9321428537368774,
2591
+ 0.1039285734295845,
2592
+ 0.17678570747375488,
2593
+ 0.14571428298950195,
2594
+ 1.0
2595
+ ],
2596
+ "std": [
2597
+ 0.41272708773612976,
2598
+ 0.34724321961402893,
2599
+ 0.50869220495224,
2600
+ 0.037266165018081665,
2601
+ 0.07244449853897095,
2602
+ 0.05762382969260216,
2603
+ 0.49827873706817627
2604
+ ]
2605
+ },
2606
+ "num_trajectories": 432,
2607
+ "num_transitions": 52970,
2608
+ "proprio": {
2609
+ "max": [
2610
+ 0.1759040206670761,
2611
+ 0.3904820382595062,
2612
+ 1.3290715217590332,
2613
+ 3.4566118717193604,
2614
+ 1.2268599271774292,
2615
+ 1.0429412126541138,
2616
+ 0.041053611785173416,
2617
+ 0.000775813648942858
2618
+ ],
2619
+ "mean": [
2620
+ -0.024462558329105377,
2621
+ 0.106529600918293,
2622
+ 1.0580483675003052,
2623
+ 3.0628468990325928,
2624
+ -0.10464039444923401,
2625
+ 0.08307311683893204,
2626
+ 0.01995457336306572,
2627
+ -0.020162804052233696
2628
+ ],
2629
+ "min": [
2630
+ -0.3095473051071167,
2631
+ -0.29250794649124146,
2632
+ 0.9095591306686401,
2633
+ 2.497488260269165,
2634
+ -1.8006486892700195,
2635
+ -0.7207611203193665,
2636
+ -0.0004703797458205372,
2637
+ -0.041536275297403336
2638
+ ],
2639
+ "q01": [
2640
+ -0.2727657300233841,
2641
+ -0.23721413239836692,
2642
+ 0.9160063165426254,
2643
+ 2.77949666261673,
2644
+ -1.3187511622905732,
2645
+ -0.41989982962608335,
2646
+ 0.001503719249740243,
2647
+ -0.03989770736545324
2648
+ ],
2649
+ "q99": [
2650
+ 0.13529365032911292,
2651
+ 0.3629165390133857,
2652
+ 1.2862326657772063,
2653
+ 3.2829698753356933,
2654
+ 0.9332760351896285,
2655
+ 0.6325724506378171,
2656
+ 0.039933966137468815,
2657
+ -0.001671919699292631
2658
+ ],
2659
+ "std": [
2660
+ 0.1101478561758995,
2661
+ 0.13784688711166382,
2662
+ 0.1044282391667366,
2663
+ 0.10451053828001022,
2664
+ 0.4112098217010498,
2665
+ 0.2176690548658371,
2666
+ 0.017260896041989326,
2667
+ 0.0171116404235363
2668
+ ]
2669
+ }
2670
+ },
2671
+ "nyu_franka_play_dataset_converted_externally_to_rlds": {
2672
+ "action": {
2673
+ "mask": [
2674
+ true,
2675
+ true,
2676
+ true,
2677
+ true,
2678
+ true,
2679
+ true,
2680
+ false
2681
+ ],
2682
+ "max": [
2683
+ 0.06424188613891602,
2684
+ 0.07027634978294373,
2685
+ 0.06129661202430725,
2686
+ 6.281067848205566,
2687
+ 0.1967729926109314,
2688
+ 0.26377415657043457,
2689
+ 1.0
2690
+ ],
2691
+ "mean": [
2692
+ 0.001021989737637341,
2693
+ -0.00012002651783404872,
2694
+ 0.00032894269679673016,
2695
+ 0.0015034361276775599,
2696
+ -0.002198522910475731,
2697
+ -0.001663230243138969,
2698
+ 0.7230083346366882
2699
+ ],
2700
+ "min": [
2701
+ -0.05952230095863342,
2702
+ -0.07232445478439331,
2703
+ -0.06730806827545166,
2704
+ -6.278434753417969,
2705
+ -0.21479034423828125,
2706
+ -0.3627619743347168,
2707
+ 0.0
2708
+ ],
2709
+ "q01": [
2710
+ -0.03199600875377655,
2711
+ -0.032861671447753905,
2712
+ -0.03368805110454559,
2713
+ -0.12080862045288086,
2714
+ -0.12175218224525451,
2715
+ -0.11370223641395569,
2716
+ 0.0
2717
+ ],
2718
+ "q99": [
2719
+ 0.03101520001888276,
2720
+ 0.0373908892273903,
2721
+ 0.03646374464035038,
2722
+ 0.11764093399047852,
2723
+ 0.1258920183777809,
2724
+ 0.09366151213645942,
2725
+ 1.0
2726
+ ],
2727
+ "std": [
2728
+ 0.01327415369451046,
2729
+ 0.013215910643339157,
2730
+ 0.012822109274566174,
2731
+ 0.2732451558113098,
2732
+ 0.057022541761398315,
2733
+ 0.039172880351543427,
2734
+ 0.44752755761146545
2735
+ ]
2736
+ },
2737
+ "num_trajectories": 456,
2738
+ "num_transitions": 44875,
2739
+ "proprio": {
2740
+ "max": [
2741
+ 0.0,
2742
+ 0.0,
2743
+ 0.0,
2744
+ 0.0,
2745
+ 0.0,
2746
+ 0.0,
2747
+ 0.0
2748
+ ],
2749
+ "mean": [
2750
+ 0.0,
2751
+ 0.0,
2752
+ 0.0,
2753
+ 0.0,
2754
+ 0.0,
2755
+ 0.0,
2756
+ 0.0
2757
+ ],
2758
+ "min": [
2759
+ 0.0,
2760
+ 0.0,
2761
+ 0.0,
2762
+ 0.0,
2763
+ 0.0,
2764
+ 0.0,
2765
+ 0.0
2766
+ ],
2767
+ "q01": [
2768
+ 0.0,
2769
+ 0.0,
2770
+ 0.0,
2771
+ 0.0,
2772
+ 0.0,
2773
+ 0.0,
2774
+ 0.0
2775
+ ],
2776
+ "q99": [
2777
+ 0.0,
2778
+ 0.0,
2779
+ 0.0,
2780
+ 0.0,
2781
+ 0.0,
2782
+ 0.0,
2783
+ 0.0
2784
+ ],
2785
+ "std": [
2786
+ 0.0,
2787
+ 0.0,
2788
+ 0.0,
2789
+ 0.0,
2790
+ 0.0,
2791
+ 0.0,
2792
+ 0.0
2793
+ ]
2794
+ }
2795
+ },
2796
+ "roboturk": {
2797
+ "action": {
2798
+ "mask": [
2799
+ true,
2800
+ true,
2801
+ true,
2802
+ true,
2803
+ true,
2804
+ true,
2805
+ false
2806
+ ],
2807
+ "max": [
2808
+ 0.39124172925949097,
2809
+ 0.4601028263568878,
2810
+ 0.4870833456516266,
2811
+ 1.816888689994812,
2812
+ 1.8240282535552979,
2813
+ 1.4824820756912231,
2814
+ 1.0
2815
+ ],
2816
+ "mean": [
2817
+ 0.0014448732836171985,
2818
+ -0.0015945249469950795,
2819
+ -0.0011753785656765103,
2820
+ 0.0023012510500848293,
2821
+ -0.0009382463176734746,
2822
+ -0.00011485807772260159,
2823
+ 0.5746025443077087
2824
+ ],
2825
+ "min": [
2826
+ -0.6546999216079712,
2827
+ -0.6365841031074524,
2828
+ -0.4217723608016968,
2829
+ -1.6695482730865479,
2830
+ -1.8023357391357422,
2831
+ -1.4630827903747559,
2832
+ 0.0
2833
+ ],
2834
+ "q01": [
2835
+ -0.1342635464668274,
2836
+ -0.19996687173843383,
2837
+ -0.1482972100377083,
2838
+ -0.20720748245716095,
2839
+ -0.09676413893699647,
2840
+ -0.18075634717941286,
2841
+ 0.0
2842
+ ],
2843
+ "q99": [
2844
+ 0.14956976801157001,
2845
+ 0.1805950567126275,
2846
+ 0.18841815620660796,
2847
+ 0.21615413755178453,
2848
+ 0.09457383215427405,
2849
+ 0.18543301910162005,
2850
+ 1.0
2851
+ ],
2852
+ "std": [
2853
+ 0.04935386776924133,
2854
+ 0.0635455846786499,
2855
+ 0.061164740473032,
2856
+ 0.09553450345993042,
2857
+ 0.08420111238956451,
2858
+ 0.06517903506755829,
2859
+ 0.49452081322669983
2860
+ ]
2861
+ },
2862
+ "num_trajectories": 1995,
2863
+ "num_transitions": 187507,
2864
+ "proprio": {
2865
+ "max": [
2866
+ 0.0,
2867
+ 0.0,
2868
+ 0.0,
2869
+ 0.0,
2870
+ 0.0,
2871
+ 0.0,
2872
+ 0.0
2873
+ ],
2874
+ "mean": [
2875
+ 0.0,
2876
+ 0.0,
2877
+ 0.0,
2878
+ 0.0,
2879
+ 0.0,
2880
+ 0.0,
2881
+ 0.0
2882
+ ],
2883
+ "min": [
2884
+ 0.0,
2885
+ 0.0,
2886
+ 0.0,
2887
+ 0.0,
2888
+ 0.0,
2889
+ 0.0,
2890
+ 0.0
2891
+ ],
2892
+ "q01": [
2893
+ 0.0,
2894
+ 0.0,
2895
+ 0.0,
2896
+ 0.0,
2897
+ 0.0,
2898
+ 0.0,
2899
+ 0.0
2900
+ ],
2901
+ "q99": [
2902
+ 0.0,
2903
+ 0.0,
2904
+ 0.0,
2905
+ 0.0,
2906
+ 0.0,
2907
+ 0.0,
2908
+ 0.0
2909
+ ],
2910
+ "std": [
2911
+ 0.0,
2912
+ 0.0,
2913
+ 0.0,
2914
+ 0.0,
2915
+ 0.0,
2916
+ 0.0,
2917
+ 0.0
2918
+ ]
2919
+ }
2920
+ },
2921
+ "stanford_hydra_dataset_converted_externally_to_rlds": {
2922
+ "action": {
2923
+ "mask": [
2924
+ true,
2925
+ true,
2926
+ true,
2927
+ true,
2928
+ true,
2929
+ true,
2930
+ false
2931
+ ],
2932
+ "max": [
2933
+ 0.02499854564666748,
2934
+ 0.02499903365969658,
2935
+ 0.024999922141432762,
2936
+ 0.24974457919597626,
2937
+ 0.24997030198574066,
2938
+ 0.24999946355819702,
2939
+ 1.0
2940
+ ],
2941
+ "mean": [
2942
+ 0.0007790001109242439,
2943
+ 0.00013707754260394722,
2944
+ -0.0002548607881180942,
2945
+ 0.0012903271708637476,
2946
+ -0.004751681815832853,
2947
+ 0.002692886395379901,
2948
+ 0.48855218291282654
2949
+ ],
2950
+ "min": [
2951
+ -0.024999044835567474,
2952
+ -0.024999700486660004,
2953
+ -0.02499929815530777,
2954
+ -0.24993225932121277,
2955
+ -0.2499666064977646,
2956
+ -0.2499932497739792,
2957
+ 0.0
2958
+ ],
2959
+ "q01": [
2960
+ -0.019992006458342076,
2961
+ -0.02415412735193968,
2962
+ -0.022941758055239916,
2963
+ -0.11085530579090118,
2964
+ -0.12024572037160397,
2965
+ -0.13314770206809043,
2966
+ 0.0
2967
+ ],
2968
+ "q99": [
2969
+ 0.022886231057345868,
2970
+ 0.022358838934451335,
2971
+ 0.02410089675337076,
2972
+ 0.12370114490389822,
2973
+ 0.11323311634361738,
2974
+ 0.18474749639630164,
2975
+ 1.0
2976
+ ],
2977
+ "std": [
2978
+ 0.008022161200642586,
2979
+ 0.009131459519267082,
2980
+ 0.009574338793754578,
2981
+ 0.04122216999530792,
2982
+ 0.0384303517639637,
2983
+ 0.04606688767671585,
2984
+ 0.49976691603660583
2985
+ ]
2986
+ },
2987
+ "num_trajectories": 570,
2988
+ "num_transitions": 358234,
2989
+ "proprio": {
2990
+ "max": [
2991
+ 0.0,
2992
+ 0.0,
2993
+ 0.0,
2994
+ 0.0,
2995
+ 0.0,
2996
+ 0.0,
2997
+ 0.0
2998
+ ],
2999
+ "mean": [
3000
+ 0.0,
3001
+ 0.0,
3002
+ 0.0,
3003
+ 0.0,
3004
+ 0.0,
3005
+ 0.0,
3006
+ 0.0
3007
+ ],
3008
+ "min": [
3009
+ 0.0,
3010
+ 0.0,
3011
+ 0.0,
3012
+ 0.0,
3013
+ 0.0,
3014
+ 0.0,
3015
+ 0.0
3016
+ ],
3017
+ "q01": [
3018
+ 0.0,
3019
+ 0.0,
3020
+ 0.0,
3021
+ 0.0,
3022
+ 0.0,
3023
+ 0.0,
3024
+ 0.0
3025
+ ],
3026
+ "q99": [
3027
+ 0.0,
3028
+ 0.0,
3029
+ 0.0,
3030
+ 0.0,
3031
+ 0.0,
3032
+ 0.0,
3033
+ 0.0
3034
+ ],
3035
+ "std": [
3036
+ 0.0,
3037
+ 0.0,
3038
+ 0.0,
3039
+ 0.0,
3040
+ 0.0,
3041
+ 0.0,
3042
+ 0.0
3043
+ ]
3044
+ }
3045
+ },
3046
+ "taco_play": {
3047
+ "action": {
3048
+ "mask": [
3049
+ true,
3050
+ true,
3051
+ true,
3052
+ true,
3053
+ true,
3054
+ true,
3055
+ false
3056
+ ],
3057
+ "max": [
3058
+ 1.4915844202041626,
3059
+ 2.1842432022094727,
3060
+ 2.6836395263671875,
3061
+ 5.035226821899414,
3062
+ 2.665864944458008,
3063
+ 4.250768661499023,
3064
+ 1.0
3065
+ ],
3066
+ "mean": [
3067
+ -0.003845922416076064,
3068
+ 0.009671456180512905,
3069
+ 0.012780580669641495,
3070
+ -0.005403771996498108,
3071
+ -0.009606587700545788,
3072
+ -0.002480733208358288,
3073
+ 0.4263913035392761
3074
+ ],
3075
+ "min": [
3076
+ -4.242457866668701,
3077
+ -3.192805051803589,
3078
+ -1.3371467590332031,
3079
+ -4.202683448791504,
3080
+ -2.6722638607025146,
3081
+ -3.3467135429382324,
3082
+ 0.0
3083
+ ],
3084
+ "q01": [
3085
+ -0.7106140398979186,
3086
+ -1.056944659948349,
3087
+ -0.5878450274467468,
3088
+ -0.7682853937149048,
3089
+ -0.7180147767066956,
3090
+ -1.5527938604354858,
3091
+ 0.0
3092
+ ],
3093
+ "q99": [
3094
+ 0.6482916426658629,
3095
+ 1.0051310062408447,
3096
+ 0.9480248689651489,
3097
+ 0.6926478147506714,
3098
+ 0.6351067513227462,
3099
+ 1.628010264635086,
3100
+ 1.0
3101
+ ],
3102
+ "std": [
3103
+ 0.23254038393497467,
3104
+ 0.36298269033432007,
3105
+ 0.28692901134490967,
3106
+ 0.2617705166339874,
3107
+ 0.2438892275094986,
3108
+ 0.5216503143310547,
3109
+ 0.4946896731853485
3110
+ ]
3111
+ },
3112
+ "num_trajectories": 3603,
3113
+ "num_transitions": 237798,
3114
+ "proprio": {
3115
+ "max": [
3116
+ 0.0,
3117
+ 0.0,
3118
+ 0.0,
3119
+ 0.0,
3120
+ 0.0,
3121
+ 0.0,
3122
+ 0.0
3123
+ ],
3124
+ "mean": [
3125
+ 0.0,
3126
+ 0.0,
3127
+ 0.0,
3128
+ 0.0,
3129
+ 0.0,
3130
+ 0.0,
3131
+ 0.0
3132
+ ],
3133
+ "min": [
3134
+ 0.0,
3135
+ 0.0,
3136
+ 0.0,
3137
+ 0.0,
3138
+ 0.0,
3139
+ 0.0,
3140
+ 0.0
3141
+ ],
3142
+ "q01": [
3143
+ 0.0,
3144
+ 0.0,
3145
+ 0.0,
3146
+ 0.0,
3147
+ 0.0,
3148
+ 0.0,
3149
+ 0.0
3150
+ ],
3151
+ "q99": [
3152
+ 0.0,
3153
+ 0.0,
3154
+ 0.0,
3155
+ 0.0,
3156
+ 0.0,
3157
+ 0.0,
3158
+ 0.0
3159
+ ],
3160
+ "std": [
3161
+ 0.0,
3162
+ 0.0,
3163
+ 0.0,
3164
+ 0.0,
3165
+ 0.0,
3166
+ 0.0,
3167
+ 0.0
3168
+ ]
3169
+ }
3170
+ },
3171
+ "toto": {
3172
+ "action": {
3173
+ "mask": [
3174
+ true,
3175
+ true,
3176
+ true,
3177
+ true,
3178
+ true,
3179
+ true,
3180
+ false
3181
+ ],
3182
+ "max": [
3183
+ 0.6839867234230042,
3184
+ 0.4454185664653778,
3185
+ 0.7984078526496887,
3186
+ 2.120781660079956,
3187
+ 1.371164321899414,
3188
+ 1.4118704795837402,
3189
+ 0.0
3190
+ ],
3191
+ "mean": [
3192
+ 0.38542115688323975,
3193
+ 0.007769413758069277,
3194
+ 0.3632740378379822,
3195
+ -0.6652036905288696,
3196
+ 0.1890396922826767,
3197
+ 0.03298724442720413,
3198
+ 0.0
3199
+ ],
3200
+ "min": [
3201
+ 0.09922284632921219,
3202
+ -0.5180193781852722,
3203
+ 0.13791072368621826,
3204
+ -2.635117530822754,
3205
+ -1.0734480619430542,
3206
+ -1.9282547235488892,
3207
+ 0.0
3208
+ ],
3209
+ "q01": [
3210
+ 0.1756722891330719,
3211
+ -0.3077590811252594,
3212
+ 0.235383919775486,
3213
+ -2.0908505964279174,
3214
+ -0.6191593289375306,
3215
+ -0.7488683319091797,
3216
+ 0.0
3217
+ ],
3218
+ "q99": [
3219
+ 0.6136963081359863,
3220
+ 0.33704194784164443,
3221
+ 0.6681221985816956,
3222
+ 0.7422861719131538,
3223
+ 0.7955395007133507,
3224
+ 0.740464625358582,
3225
+ 0.0
3226
+ ],
3227
+ "std": [
3228
+ 0.12211652100086212,
3229
+ 0.19378550350666046,
3230
+ 0.10178236663341522,
3231
+ 0.5725259184837341,
3232
+ 0.29884573817253113,
3233
+ 0.3259911835193634,
3234
+ 0.0
3235
+ ]
3236
+ },
3237
+ "num_trajectories": 1003,
3238
+ "num_transitions": 325699,
3239
+ "proprio": {
3240
+ "max": [
3241
+ 0.0,
3242
+ 0.0,
3243
+ 0.0,
3244
+ 0.0,
3245
+ 0.0,
3246
+ 0.0,
3247
+ 0.0
3248
+ ],
3249
+ "mean": [
3250
+ 0.0,
3251
+ 0.0,
3252
+ 0.0,
3253
+ 0.0,
3254
+ 0.0,
3255
+ 0.0,
3256
+ 0.0
3257
+ ],
3258
+ "min": [
3259
+ 0.0,
3260
+ 0.0,
3261
+ 0.0,
3262
+ 0.0,
3263
+ 0.0,
3264
+ 0.0,
3265
+ 0.0
3266
+ ],
3267
+ "q01": [
3268
+ 0.0,
3269
+ 0.0,
3270
+ 0.0,
3271
+ 0.0,
3272
+ 0.0,
3273
+ 0.0,
3274
+ 0.0
3275
+ ],
3276
+ "q99": [
3277
+ 0.0,
3278
+ 0.0,
3279
+ 0.0,
3280
+ 0.0,
3281
+ 0.0,
3282
+ 0.0,
3283
+ 0.0
3284
+ ],
3285
+ "std": [
3286
+ 0.0,
3287
+ 0.0,
3288
+ 0.0,
3289
+ 0.0,
3290
+ 0.0,
3291
+ 0.0,
3292
+ 0.0
3293
+ ]
3294
+ }
3295
+ },
3296
+ "ucsd_kitchen_dataset_converted_externally_to_rlds": {
3297
+ "action": {
3298
+ "mask": [
3299
+ true,
3300
+ true,
3301
+ true,
3302
+ true,
3303
+ true,
3304
+ true,
3305
+ false
3306
+ ],
3307
+ "max": [
3308
+ 678.0,
3309
+ 400.0,
3310
+ 507.0,
3311
+ 180.00001525878906,
3312
+ 6.000013828277588,
3313
+ 116.99998474121094,
3314
+ 1.0
3315
+ ],
3316
+ "mean": [
3317
+ 410.37567138671875,
3318
+ 116.9518814086914,
3319
+ 192.35032653808594,
3320
+ -121.22441864013672,
3321
+ -33.84893035888672,
3322
+ 50.016136169433594,
3323
+ 0.741813600063324
3324
+ ],
3325
+ "min": [
3326
+ 172.0,
3327
+ -166.0,
3328
+ -99.99999237060547,
3329
+ -180.00001525878906,
3330
+ -89.0,
3331
+ -96.00010681152344,
3332
+ 0.0
3333
+ ],
3334
+ "q01": [
3335
+ 200.00001052856445,
3336
+ -102.31004211425781,
3337
+ -94.99993370056153,
3338
+ -180.00001525878906,
3339
+ -88.00001525878906,
3340
+ -38.999977111816406,
3341
+ 0.0
3342
+ ],
3343
+ "q99": [
3344
+ 637.0,
3345
+ 368.30999999999995,
3346
+ 493.0,
3347
+ 180.00001525878906,
3348
+ 0.999983012676239,
3349
+ 105.00001525878906,
3350
+ 1.0
3351
+ ],
3352
+ "std": [
3353
+ 122.81494903564453,
3354
+ 108.8009033203125,
3355
+ 130.303466796875,
3356
+ 116.28205108642578,
3357
+ 27.621843338012695,
3358
+ 41.02094650268555,
3359
+ 0.43763357400894165
3360
+ ]
3361
+ },
3362
+ "num_trajectories": 150,
3363
+ "num_transitions": 3970,
3364
+ "proprio": {
3365
+ "max": [
3366
+ 0.0,
3367
+ 0.0,
3368
+ 0.0,
3369
+ 0.0,
3370
+ 0.0,
3371
+ 0.0,
3372
+ 0.0
3373
+ ],
3374
+ "mean": [
3375
+ 0.0,
3376
+ 0.0,
3377
+ 0.0,
3378
+ 0.0,
3379
+ 0.0,
3380
+ 0.0,
3381
+ 0.0
3382
+ ],
3383
+ "min": [
3384
+ 0.0,
3385
+ 0.0,
3386
+ 0.0,
3387
+ 0.0,
3388
+ 0.0,
3389
+ 0.0,
3390
+ 0.0
3391
+ ],
3392
+ "q01": [
3393
+ 0.0,
3394
+ 0.0,
3395
+ 0.0,
3396
+ 0.0,
3397
+ 0.0,
3398
+ 0.0,
3399
+ 0.0
3400
+ ],
3401
+ "q99": [
3402
+ 0.0,
3403
+ 0.0,
3404
+ 0.0,
3405
+ 0.0,
3406
+ 0.0,
3407
+ 0.0,
3408
+ 0.0
3409
+ ],
3410
+ "std": [
3411
+ 0.0,
3412
+ 0.0,
3413
+ 0.0,
3414
+ 0.0,
3415
+ 0.0,
3416
+ 0.0,
3417
+ 0.0
3418
+ ]
3419
+ }
3420
+ },
3421
+ "utaustin_mutex": {
3422
+ "action": {
3423
+ "mask": [
3424
+ true,
3425
+ true,
3426
+ true,
3427
+ true,
3428
+ true,
3429
+ true,
3430
+ false
3431
+ ],
3432
+ "max": [
3433
+ 1.0,
3434
+ 1.0,
3435
+ 1.0,
3436
+ 0.375,
3437
+ 0.375,
3438
+ 0.375,
3439
+ 1.0
3440
+ ],
3441
+ "mean": [
3442
+ 0.06176406890153885,
3443
+ -0.005005486309528351,
3444
+ 0.10216785222291946,
3445
+ -0.03314131125807762,
3446
+ 0.013895004987716675,
3447
+ -0.011317633092403412,
3448
+ 0.5038976669311523
3449
+ ],
3450
+ "min": [
3451
+ -1.0,
3452
+ -1.0,
3453
+ -1.0,
3454
+ -0.375,
3455
+ -0.375,
3456
+ -0.375,
3457
+ 0.0
3458
+ ],
3459
+ "q01": [
3460
+ -0.4285714328289032,
3461
+ -0.9800000190734863,
3462
+ -0.5571428537368774,
3463
+ -0.375,
3464
+ -0.15642857551574707,
3465
+ -0.335357129573822,
3466
+ 0.0
3467
+ ],
3468
+ "q99": [
3469
+ 0.5914285778999329,
3470
+ 0.9714285731315613,
3471
+ 1.0,
3472
+ 0.3278571367263794,
3473
+ 0.207857146859169,
3474
+ 0.25607141852378845,
3475
+ 1.0
3476
+ ],
3477
+ "std": [
3478
+ 0.1875014752149582,
3479
+ 0.4468473494052887,
3480
+ 0.3792876601219177,
3481
+ 0.14097853004932404,
3482
+ 0.06453701853752136,
3483
+ 0.11765272170305252,
3484
+ 0.501045286655426
3485
+ ]
3486
+ },
3487
+ "num_trajectories": 1500,
3488
+ "num_transitions": 361883,
3489
+ "proprio": {
3490
+ "max": [
3491
+ 0.0,
3492
+ 0.0,
3493
+ 0.0,
3494
+ 0.0,
3495
+ 0.0,
3496
+ 0.0,
3497
+ 0.0
3498
+ ],
3499
+ "mean": [
3500
+ 0.0,
3501
+ 0.0,
3502
+ 0.0,
3503
+ 0.0,
3504
+ 0.0,
3505
+ 0.0,
3506
+ 0.0
3507
+ ],
3508
+ "min": [
3509
+ 0.0,
3510
+ 0.0,
3511
+ 0.0,
3512
+ 0.0,
3513
+ 0.0,
3514
+ 0.0,
3515
+ 0.0
3516
+ ],
3517
+ "q01": [
3518
+ 0.0,
3519
+ 0.0,
3520
+ 0.0,
3521
+ 0.0,
3522
+ 0.0,
3523
+ 0.0,
3524
+ 0.0
3525
+ ],
3526
+ "q99": [
3527
+ 0.0,
3528
+ 0.0,
3529
+ 0.0,
3530
+ 0.0,
3531
+ 0.0,
3532
+ 0.0,
3533
+ 0.0
3534
+ ],
3535
+ "std": [
3536
+ 0.0,
3537
+ 0.0,
3538
+ 0.0,
3539
+ 0.0,
3540
+ 0.0,
3541
+ 0.0,
3542
+ 0.0
3543
+ ]
3544
+ }
3545
+ },
3546
+ "viola": {
3547
+ "action": {
3548
+ "mask": [
3549
+ true,
3550
+ true,
3551
+ true,
3552
+ true,
3553
+ true,
3554
+ true,
3555
+ false
3556
+ ],
3557
+ "max": [
3558
+ 1.0,
3559
+ 1.0,
3560
+ 1.0,
3561
+ 0.375,
3562
+ 0.36321428418159485,
3563
+ 0.375,
3564
+ 1.0
3565
+ ],
3566
+ "mean": [
3567
+ 0.04761844128370285,
3568
+ -0.029204415157437325,
3569
+ 0.05586736649274826,
3570
+ -0.002618510741740465,
3571
+ 0.006867344491183758,
3572
+ -0.01682133786380291,
3573
+ 0.7323777675628662
3574
+ ],
3575
+ "min": [
3576
+ -1.0,
3577
+ -1.0,
3578
+ -1.0,
3579
+ -0.375,
3580
+ -0.375,
3581
+ -0.375,
3582
+ 0.0
3583
+ ],
3584
+ "q01": [
3585
+ -0.9628571271896362,
3586
+ -1.0,
3587
+ -1.0,
3588
+ -0.26249998807907104,
3589
+ -0.21321429312229156,
3590
+ -0.3385714292526245,
3591
+ 0.0
3592
+ ],
3593
+ "q99": [
3594
+ 0.9114285707473755,
3595
+ 0.868571400642395,
3596
+ 1.0,
3597
+ 0.2817857265472412,
3598
+ 0.2239285707473755,
3599
+ 0.3557142913341522,
3600
+ 1.0
3601
+ ],
3602
+ "std": [
3603
+ 0.39157867431640625,
3604
+ 0.4076525568962097,
3605
+ 0.40077948570251465,
3606
+ 0.10023996233940125,
3607
+ 0.0844319611787796,
3608
+ 0.10375042259693146,
3609
+ 0.44260647892951965
3610
+ ]
3611
+ },
3612
+ "num_trajectories": 150,
3613
+ "num_transitions": 76324,
3614
+ "proprio": {
3615
+ "max": [
3616
+ 0.0,
3617
+ 0.0,
3618
+ 0.0,
3619
+ 0.0,
3620
+ 0.0,
3621
+ 0.0,
3622
+ 0.0
3623
+ ],
3624
+ "mean": [
3625
+ 0.0,
3626
+ 0.0,
3627
+ 0.0,
3628
+ 0.0,
3629
+ 0.0,
3630
+ 0.0,
3631
+ 0.0
3632
+ ],
3633
+ "min": [
3634
+ 0.0,
3635
+ 0.0,
3636
+ 0.0,
3637
+ 0.0,
3638
+ 0.0,
3639
+ 0.0,
3640
+ 0.0
3641
+ ],
3642
+ "q01": [
3643
+ 0.0,
3644
+ 0.0,
3645
+ 0.0,
3646
+ 0.0,
3647
+ 0.0,
3648
+ 0.0,
3649
+ 0.0
3650
+ ],
3651
+ "q99": [
3652
+ 0.0,
3653
+ 0.0,
3654
+ 0.0,
3655
+ 0.0,
3656
+ 0.0,
3657
+ 0.0,
3658
+ 0.0
3659
+ ],
3660
+ "std": [
3661
+ 0.0,
3662
+ 0.0,
3663
+ 0.0,
3664
+ 0.0,
3665
+ 0.0,
3666
+ 0.0,
3667
+ 0.0
3668
+ ]
3669
+ }
3670
+ }
3671
+ },
3672
+ "output_projector_states": false,
3673
+ "pad_to_multiple_of": 64,
3674
+ "pad_token_id": 32000,
3675
+ "text_config": {
3676
+ "is_pruned": true,
3677
+ "model_type": "llama",
3678
+ "pad_token_id": 32000,
3679
+ "pruned_intermediate_sizes": [
3680
+ 11008,
3681
+ 550,
3682
+ 550,
3683
+ 550,
3684
+ 550,
3685
+ 550,
3686
+ 550,
3687
+ 550,
3688
+ 550,
3689
+ 550,
3690
+ 550,
3691
+ 550,
3692
+ 550,
3693
+ 550,
3694
+ 550,
3695
+ 550,
3696
+ 550,
3697
+ 550,
3698
+ 550,
3699
+ 550,
3700
+ 550,
3701
+ 550,
3702
+ 550,
3703
+ 550,
3704
+ 550,
3705
+ 550,
3706
+ 550,
3707
+ 550,
3708
+ 550,
3709
+ 550,
3710
+ 550,
3711
+ 11008
3712
+ ],
3713
+ "pruned_num_heads": [
3714
+ 32,
3715
+ 2,
3716
+ 2,
3717
+ 2,
3718
+ 2,
3719
+ 2,
3720
+ 2,
3721
+ 2,
3722
+ 2,
3723
+ 2,
3724
+ 2,
3725
+ 2,
3726
+ 2,
3727
+ 2,
3728
+ 2,
3729
+ 2,
3730
+ 2,
3731
+ 2,
3732
+ 2,
3733
+ 2,
3734
+ 2,
3735
+ 2,
3736
+ 2,
3737
+ 2,
3738
+ 2,
3739
+ 2,
3740
+ 2,
3741
+ 2,
3742
+ 2,
3743
+ 2,
3744
+ 2,
3745
+ 32
3746
+ ],
3747
+ "pruner_type": "taylor",
3748
+ "pruning_method": "llm-pruner",
3749
+ "taylor_variant": "param_first",
3750
+ "torch_dtype": "bfloat16",
3751
+ "vocab_size": 32064
3752
+ },
3753
+ "timm_model_ids": [
3754
+ "vit_large_patch14_reg4_dinov2.lvd142m",
3755
+ "vit_so400m_patch14_siglip_224"
3756
+ ],
3757
+ "timm_override_act_layers": [
3758
+ null,
3759
+ null
3760
+ ],
3761
+ "torch_dtype": "bfloat16",
3762
+ "transformers_version": "4.40.1",
3763
+ "use_fused_vision_backbone": true,
3764
+ "vision_backbone_id": "dinosiglip-vit-so-224px"
3765
+ }
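The pruning metadata (`pruned_intermediate_sizes`, `pruned_num_heads`) and the per-dataset statistics above all live in this `config.json`. A minimal sketch of inspecting them after download, assuming the statistics block is the `norm_stats` field consumed by `OpenVLAConfig` and using a placeholder checkpoint path:

from transformers import AutoConfig

# Placeholder path -- substitute the actual repo id or a local checkout of this model.
config = AutoConfig.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)

# Per-layer pruning plan recorded by LLM-Pruner (see `text_config` above).
print(config.text_config.pruned_intermediate_sizes[:4])  # [11008, 550, 550, 550]
print(config.text_config.pruned_num_heads[:4])           # [32, 2, 2, 2]

# Per-dataset action/proprio statistics used for (de-)normalization.
print(sorted(config.norm_stats.keys())[:3])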
configuration_prismatic.py ADDED
@@ -0,0 +1,140 @@
1
+ """
2
+ configuration_prismatic.py
3
+
4
+ HuggingFace-style configuration definition for Prismatic VLMs, inheriting from `transformers.PretrainedConfig`.
5
+ Default configuration specifies `siglip-224px+7b`.
6
+ """
7
+
8
+ from typing import Any, Dict, List, Optional
9
+
10
+ from transformers import PretrainedConfig
11
+ from transformers.models.auto import CONFIG_MAPPING
12
+
13
+ # === Utilities for Mapping Prismatic names to HF names ===
14
+ # fmt: off
15
+ VISION_BACKBONE_TO_RESOLUTION: Dict[str, List[int]] = {
16
+ "clip-vit-l": [224], "siglip-vit-so400m": [224], "dinov2-vit-l": [224], "in1k-vit-l": [224],
17
+
18
+ "clip-vit-l-336px": [336],
19
+ "siglip-vit-so400m-384px": [384],
20
+
21
+ "dinoclip-vit-l-336px": [336, 336],
22
+ "dinosiglip-vit-so-224px": [224, 224],
23
+ "dinosiglip-vit-so-384px": [384, 384],
24
+ }
25
+ VISION_BACKBONE_TO_TIMM_ID: Dict[str, List[str]] = {
26
+ "clip-vit-l": ["vit_large_patch14_clip_224.openai"],
27
+ "clip-vit-l-336px": ["vit_large_patch14_clip_336.openai"],
28
+
29
+ "dinov2-vit-l": ["vit_large_patch14_reg4_dinov2.lvd142m"],
30
+ "in1k-vit-l": ["vit_large_patch16_224.augreg_in21k_ft_in1k"],
31
+
32
+ "siglip-vit-so400m": ["vit_so400m_patch14_siglip_224"],
33
+ "siglip-vit-so400m-384px": ["vit_so400m_patch14_siglip_384"],
34
+
35
+ "dinoclip-vit-l-336px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_large_patch14_clip_336.openai"],
36
+ "dinosiglip-vit-so-224px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_224"],
37
+ "dinosiglip-vit-so-384px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_384"],
38
+ }
39
+ TIMM_OVERRIDE_ACT_LAYER: Dict[str, List[Optional[str]]] = {
40
+ "clip-vit-l": ["quick_gelu"], "clip-vit-l-336px": ["quick_gelu"],
41
+ "dinov2-vit-l": [None], "in1k-vit-l": [None],
42
+ "siglip-vit-so400m": [None], "siglip-vit-so400m-384px": [None],
43
+ "dinoclip-vit-l-336px": [None, "quick_gelu"],
44
+ "dinosiglip-vit-so-224px": [None, None], "dinosiglip-vit-so-384px": [None, None]
45
+ }
46
+
47
+ LLM_BACKBONE_TO_HF_PATH = {
48
+ "llama2-7b-pure": "meta-llama/Llama-2-7b-hf", "llama2-13b-pure": "meta-llama/Llama-2-13b-hf",
49
+ "llama2-7b-chat": "meta-llama/Llama-2-7b-chat-hf", "llama2-13b-chat": "meta-llama/Llama-2-13b-chat-hf",
50
+
51
+ "vicuna-v15-7b": "lmsys/vicuna-7b-v1.5", "vicuna-v15-13b": "lmsys/vicuna-13b-v1.5",
52
+
53
+ "mistral-v0.1-7b-pure": "mistralai/Mistral-7B-v0.1",
54
+ "mistral-v0.1-7b-instruct": "mistralai/Mistral-7B-Instruct-v0.1",
55
+
56
+ "phi-2-3b": "microsoft/phi-2",
57
+ }
58
+ LLM_BACKBONE_TO_HF_METACLASS = {
59
+ "llama2-7b-pure": "llama", "llama2-13b-pure": "llama", "llama2-7b-chat": "llama", "llama2-13b-chat": "llama",
60
+ "vicuna-v15-7b": "llama", "vicuna-v15-13b": "llama",
61
+
62
+ "mistral-v0.1-7b-pure": "mistral", "mistral-v0.1-7b-instruct": "mistral",
63
+
64
+ "phi-2-3b": "phi",
65
+ }
66
+
67
+ VALID_VISION_BACKBONES = set(VISION_BACKBONE_TO_RESOLUTION.keys())
68
+ VALID_LLM_BACKBONES = set(LLM_BACKBONE_TO_HF_PATH)
69
+ # fmt: on
70
+
71
+
72
+ class PrismaticConfig(PretrainedConfig):
73
+ model_type: str = "prismatic"
74
+ is_composition: bool = False
75
+
76
+ def __init__(
77
+ self,
78
+ vision_backbone_id: str = "siglip-vit-so400m",
79
+ llm_backbone_id: str = "vicuna-v15-7b",
80
+ arch_specifier: str = "no-align+gelu-mlp",
81
+ use_fused_vision_backbone: Optional[bool] = None,
82
+ image_resize_strategy: str = "letterbox",
83
+ text_config: Optional[Dict[str, Any]] = None,
84
+ llm_max_length: int = 2048,
85
+ pad_token_id: int = 32000,
86
+ pad_to_multiple_of: int = 64,
87
+ output_projector_states: bool = False,
88
+ **kwargs: str,
89
+ ) -> None:
90
+ if vision_backbone_id not in VALID_VISION_BACKBONES:
91
+ raise ValueError(f"Vision backbone `{vision_backbone_id}` not in {VALID_VISION_BACKBONES = }")
92
+
93
+ if llm_backbone_id not in VALID_LLM_BACKBONES:
94
+ raise ValueError(f"LLM backbone `{llm_backbone_id}` not in {VALID_LLM_BACKBONES = }")
95
+
96
+ # Set Prismatic Configuration Fields
97
+ self.vision_backbone_id = vision_backbone_id
98
+ self.llm_backbone_id = llm_backbone_id
99
+ self.arch_specifier = arch_specifier
100
+ self.output_projector_states = output_projector_states
101
+
102
+ # [Contract] All vision backbone parameters are lists =>> supports fused backbones with different preprocessing
103
+ self.use_fused_vision_backbone = (
104
+ use_fused_vision_backbone
105
+ if use_fused_vision_backbone is not None
106
+ else any(self.vision_backbone_id.startswith(v) for v in ["dinoclip", "dinosiglip"])
107
+ )
108
+
109
+ self.timm_model_ids = VISION_BACKBONE_TO_TIMM_ID[self.vision_backbone_id]
110
+ self.timm_override_act_layers = TIMM_OVERRIDE_ACT_LAYER[self.vision_backbone_id]
111
+ self.image_sizes = VISION_BACKBONE_TO_RESOLUTION[self.vision_backbone_id]
112
+ self.image_resize_strategy = image_resize_strategy
113
+
114
+ self.hf_llm_id = LLM_BACKBONE_TO_HF_PATH[self.llm_backbone_id]
115
+ self.llm_max_length = llm_max_length
116
+ self.pad_token_id, self.pad_to_multiple_of = pad_token_id, pad_to_multiple_of
117
+
118
+ # [IMPORTANT] HF Utilities actually look for a `text_config` field... we need to use that specific naming!
119
+ self.text_config = (
120
+ CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]](**text_config)
121
+ if text_config is not None
122
+ else CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]]()
123
+ )
124
+
125
+ # Dispatch **kwargs to super() =>> note that `pad_token_id` collides, so we pass it in here as well...
126
+ super().__init__(pad_token_id=pad_token_id, **kwargs)
127
+
128
+
129
+ class OpenVLAConfig(PrismaticConfig):
130
+ model_type: str = "openvla"
131
+
132
+ def __init__(
133
+ self,
134
+ norm_stats: Optional[Dict[str, Dict[str, Dict[str, Dict[str, List[float]]]]]] = None,
135
+ n_action_bins: int = 256,
136
+ **kwargs: str,
137
+ ) -> None:
138
+ self.norm_stats, self.n_action_bins = norm_stats, n_action_bins
139
+
140
+ super().__init__(**kwargs)
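As a quick usage sketch for the two classes above (the values in the comments come from the lookup tables defined in this file, assuming `configuration_prismatic.py` is importable from the working directory):

from configuration_prismatic import OpenVLAConfig

# Fused DINOv2 + SigLIP backbone at 224px with a Llama-2 7B LLM; both ids are validated
# against VALID_VISION_BACKBONES / VALID_LLM_BACKBONES in __init__.
cfg = OpenVLAConfig(
    vision_backbone_id="dinosiglip-vit-so-224px",
    llm_backbone_id="llama2-7b-pure",
    n_action_bins=256,
)
print(cfg.timm_model_ids)             # ['vit_large_patch14_reg4_dinov2.lvd142m', 'vit_so400m_patch14_siglip_224']
print(cfg.use_fused_vision_backbone)  # True -- inferred from the "dinosiglip" prefix
print(cfg.hf_llm_id)                  # 'meta-llama/Llama-2-7b-hf'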
dataset_statistics.json ADDED
@@ -0,0 +1,133 @@
1
+ {
2
+ "libero_object_no_noops": {
3
+ "action": {
4
+ "mean": [
5
+ 0.07096529006958008,
6
+ 0.13498851656913757,
7
+ -0.04601382836699486,
8
+ 0.00123520044144243,
9
+ 0.006998839322477579,
10
+ -0.015027612447738647,
11
+ 0.46428999304771423
12
+ ],
13
+ "std": [
14
+ 0.2681235373020172,
15
+ 0.43846824765205383,
16
+ 0.4474974274635315,
17
+ 0.024446550756692886,
18
+ 0.049355510622262955,
19
+ 0.042107198387384415,
20
+ 0.49879148602485657
21
+ ],
22
+ "max": [
23
+ 0.9375,
24
+ 0.8919642567634583,
25
+ 0.9375,
26
+ 0.17678570747375488,
27
+ 0.35035714507102966,
28
+ 0.1810714304447174,
29
+ 1.0
30
+ ],
31
+ "min": [
32
+ -0.8839285969734192,
33
+ -0.9375,
34
+ -0.9375,
35
+ -0.15000000596046448,
36
+ -0.29035714268684387,
37
+ -0.32892856001853943,
38
+ 0.0
39
+ ],
40
+ "q01": [
41
+ -0.5383928418159485,
42
+ -0.8758928775787354,
43
+ -0.9375,
44
+ -0.06964285671710968,
45
+ -0.11678571254014969,
46
+ -0.15964286029338837,
47
+ 0.0
48
+ ],
49
+ "q99": [
50
+ 0.8464285731315613,
51
+ 0.84375,
52
+ 0.9375,
53
+ 0.08142857253551483,
54
+ 0.14892856776714325,
55
+ 0.0867857113480568,
56
+ 1.0
57
+ ],
58
+ "mask": [
59
+ true,
60
+ true,
61
+ true,
62
+ true,
63
+ true,
64
+ true,
65
+ false
66
+ ]
67
+ },
68
+ "proprio": {
69
+ "mean": [
70
+ -0.02999030612409115,
71
+ -0.007947085425257683,
72
+ 0.20293472707271576,
73
+ 3.1086409091949463,
74
+ -0.21404768526554108,
75
+ -0.11307074874639511,
76
+ 0.029380427673459053,
77
+ -0.030556727200746536
78
+ ],
79
+ "std": [
80
+ 0.06694897264242172,
81
+ 0.17608462274074554,
82
+ 0.07807064801454544,
83
+ 0.0868484303355217,
84
+ 0.33540457487106323,
85
+ 0.20728276669979095,
86
+ 0.00956575945019722,
87
+ 0.009197483770549297
88
+ ],
89
+ "max": [
90
+ 0.14580604434013367,
91
+ 0.33216384053230286,
92
+ 0.3857804834842682,
93
+ 3.4003844261169434,
94
+ 0.7954911589622498,
95
+ 0.6642207503318787,
96
+ 0.04104341194033623,
97
+ -0.00018117300351150334
98
+ ],
99
+ "min": [
100
+ -0.1765444278717041,
101
+ -0.29457300901412964,
102
+ 0.008128180168569088,
103
+ 2.2890501022338867,
104
+ -1.883241891860962,
105
+ -1.0600427389144897,
106
+ 0.0006495157140307128,
107
+ -0.041782498359680176
108
+ ],
109
+ "q01": [
110
+ -0.14911890715360643,
111
+ -0.25978428691625594,
112
+ 0.009925739830359817,
113
+ 2.7545341420173646,
114
+ -1.3996034812927245,
115
+ -0.6867720144987106,
116
+ 0.008197814421728254,
117
+ -0.04015838988125324
118
+ ],
119
+ "q99": [
120
+ 0.09063626825809479,
121
+ 0.29066365867853167,
122
+ 0.3370887073874472,
123
+ 3.2611824750900267,
124
+ 0.32092821151018125,
125
+ 0.4037663781642913,
126
+ 0.039891827926039694,
127
+ -0.009106044843792932
128
+ ]
129
+ },
130
+ "num_transitions": 66984,
131
+ "num_trajectories": 454
132
+ }
133
+ }
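The `q01`/`q99` bounds together with the `mask` are the pieces typically used for bounds-based action (un)normalization in OpenVLA-style pipelines. The snippet below is only an illustration of that convention, not code shipped in this repository: masked dimensions are mapped into [-1, 1] using the 1st/99th-percentile bounds, while the unmasked gripper dimension is passed through unchanged.

import json
import numpy as np

stats = json.load(open("dataset_statistics.json"))["libero_object_no_noops"]["action"]
q01, q99 = np.array(stats["q01"]), np.array(stats["q99"])
mask = np.array(stats["mask"])

def normalize(action: np.ndarray) -> np.ndarray:
    # Scale masked dims into [-1, 1]; leave unmasked dims (the gripper) untouched.
    scaled = 2.0 * (action - q01) / (q99 - q01 + 1e-8) - 1.0
    return np.where(mask, np.clip(scaled, -1.0, 1.0), action)

def unnormalize(action: np.ndarray) -> np.ndarray:
    # Inverse mapping back to the raw action range.
    raw = 0.5 * (action + 1.0) * (q99 - q01 + 1e-8) + q01
    return np.where(mask, raw, action)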
generation_config.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": 2,
5
+ "pad_token_id": 32000,
6
+ "transformers_version": "4.40.1"
7
+ }
lora_adapter/README.md ADDED
@@ -0,0 +1,202 @@
1
+ ---
2
+ base_model: pruned_models/oft_llmpruner_object_0.9
3
+ library_name: peft
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
200
+ ### Framework versions
201
+
202
+ - PEFT 0.11.1
lora_adapter/adapter_config.json ADDED
@@ -0,0 +1,45 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": {
4
+ "base_model_class": "OpenVLAForActionPrediction",
5
+ "parent_library": "transformers_modules.oft_llmpruner_object_0.9.modeling_prismatic"
6
+ },
7
+ "base_model_name_or_path": "pruned_models/oft_llmpruner_object_0.9",
8
+ "bias": "none",
9
+ "fan_in_fan_out": false,
10
+ "inference_mode": true,
11
+ "init_lora_weights": "gaussian",
12
+ "layer_replication": null,
13
+ "layers_pattern": null,
14
+ "layers_to_transform": null,
15
+ "loftq_config": {},
16
+ "lora_alpha": 16,
17
+ "lora_dropout": 0.0,
18
+ "megatron_config": null,
19
+ "megatron_core": "megatron.core",
20
+ "modules_to_save": null,
21
+ "peft_type": "LORA",
22
+ "r": 32,
23
+ "rank_pattern": {},
24
+ "revision": null,
25
+ "target_modules": [
26
+ "proj",
27
+ "up_proj",
28
+ "k_proj",
29
+ "lm_head",
30
+ "fc1",
31
+ "kv",
32
+ "qkv",
33
+ "fc2",
34
+ "v_proj",
35
+ "gate_proj",
36
+ "q_proj",
37
+ "fc3",
38
+ "o_proj",
39
+ "down_proj",
40
+ "q"
41
+ ],
42
+ "task_type": null,
43
+ "use_dora": false,
44
+ "use_rslora": false
45
+ }
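A hedged sketch of attaching this adapter to its base model with PEFT. The paths below are placeholders, and `AutoModelForVision2Seq` is an assumption about how the base OpenVLA-style model registers itself; the adapter targets the projection/MLP modules listed in `target_modules` with r=32 and lora_alpha=16.

import torch
from peft import PeftModel
from transformers import AutoModelForVision2Seq

# Placeholder paths -- point these at the pruned base checkpoint and this repo's lora_adapter/ folder.
base = AutoModelForVision2Seq.from_pretrained(
    "pruned_models/oft_llmpruner_object_0.9", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "lora_adapter")

# Optionally fold the LoRA deltas back into the base weights for faster inference.
model = model.merge_and_unload()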
lora_adapter/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ab800773de47d6ede75937d36110dce39eb2d521b5b6dff104bdd8bcf79a6b23
3
+ size 394729016
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d2d0fb5c19496711e210d18db54c74311a9d5d36eff15b92b95fa5bdd08383f6
3
+ size 3597252664
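Both `adapter_model.safetensors` (~395 MB) and `model.safetensors` (~3.6 GB) are committed as Git LFS pointer files: only the sha256 oid and size live in the repo, and the payload is fetched by LFS on checkout. When pulling through `huggingface_hub` instead, a sketch like the following (the repo id is a placeholder) downloads the real weights:

from huggingface_hub import snapshot_download

# Resolves the LFS pointers above to the actual .safetensors payloads in a local cache dir.
local_dir = snapshot_download("user/repo-id", allow_patterns=["*.safetensors", "*.json", "*.py"])
print(local_dir)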
modeling_prismatic.py ADDED
@@ -0,0 +1,1143 @@
1
+ """
2
+ modeling_prismatic.py
3
+
4
+ Core HuggingFace-style PrismaticPreTrainedModel and PrismaticForConditionalGeneration class definitions.
5
+ Inherits from the default `transformers.PretrainedModel`. Meant to be standalone and self-contained,
6
+ but exactly replicate the logic in `prismatic.models.vlms.prismatic.py`.
7
+ """
8
+
9
+ import logging
10
+ from dataclasses import dataclass
11
+ from functools import partial
12
+ from typing import Any, Callable, ClassVar, Dict, List, Optional, Tuple, Union
13
+
14
+ import numpy as np
15
+ import timm
16
+ import tokenizers
17
+ import torch
18
+ import torch.nn as nn
19
+ import transformers
20
+ from timm.models.vision_transformer import LayerScale
21
+ from transformers import AutoModelForCausalLM, PretrainedConfig, PreTrainedModel
22
+ from transformers.modeling_outputs import ModelOutput
23
+
24
+ from prismatic.training.train_utils import (
25
+ get_current_action_mask,
26
+ get_next_actions_mask,
27
+ )
28
+ from prismatic.vla.constants import (
29
+ ACTION_DIM,
30
+ ACTION_PROPRIO_NORMALIZATION_TYPE,
31
+ ACTION_TOKEN_BEGIN_IDX,
32
+ IGNORE_INDEX,
33
+ NUM_ACTIONS_CHUNK,
34
+ STOP_INDEX,
35
+ NormalizationType,
36
+ )
37
+
38
+ from .configuration_prismatic import OpenVLAConfig, PrismaticConfig
39
+
40
+ # Set up logger
41
+ logger = logging.getLogger(__name__)
42
+
43
+
44
+ # === Utility Functions for Monkey-Patching ===
45
+ def unpack_tuple(fn: Callable[[Any], Tuple[Any]]) -> Callable[[Any], Any]:
46
+ def wrapper(*args: Any, **kwargs: Any) -> Any:
47
+ result = fn(*args, **kwargs)
48
+ return result[0] if isinstance(result, tuple) else result
49
+
50
+ return wrapper
51
+
52
+
53
+ # HF Transformers overwrites parameters with names containing `gamma`; we're going to patch VisionBackbone.LayerScale.
54
+ # =>> TIMM :: https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py#L109
55
+ # =>> Transformers :: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3960
56
+ def _ls_new_forward(self, x: torch.Tensor) -> torch.Tensor:
57
+ return x.mul_(self.scale_factor) if self.inplace else x * self.scale_factor
58
+
59
+
60
+ def ls_apply_patch(ls_module: LayerScale):
61
+ ls_module.scale_factor = nn.Parameter(ls_module.gamma.clone())
62
+ ls_module.forward = _ls_new_forward.__get__(ls_module, LayerScale)
63
+ del ls_module.gamma
64
+
65
+
66
+ # === Prismatic Vision Backbone (nn.Module) Definitions (w/ Fused Backbone Support) ===
67
+ class PrismaticVisionBackbone(nn.Module):
68
+ """
69
+ Vision backbone for Prismatic models that handles image feature extraction.
70
+
71
+ Supports both single backbone (e.g., SigLIP) and fused backbone (e.g., SigLIP + DINOv2) configurations.
72
+ For fused backbones, features from both models are concatenated along the feature dimension.
73
+ """
74
+
75
+ def __init__(
76
+ self,
77
+ use_fused_vision_backbone: bool,
78
+ image_sizes: List[int],
79
+ timm_model_ids: List[str],
80
+ timm_override_act_layers: List[Optional[str]],
81
+ ) -> None:
82
+ """
83
+ Initialize the vision backbone.
84
+
85
+ Args:
86
+ use_fused_vision_backbone: Whether to use two backbones and fuse their features
87
+ image_sizes: List of image sizes for each backbone
88
+ timm_model_ids: List of TIMM model IDs to use for each backbone
89
+ timm_override_act_layers: List of activation layer overrides for each backbone
90
+ """
91
+ super().__init__()
92
+ self.use_fused_vision_backbone = use_fused_vision_backbone
93
+ self.num_images_in_input = 1 # Default value, can be overridden later
94
+
95
+ # Validate number of (fused) vision backbones
96
+ if len(timm_model_ids) > 2:
97
+ raise ValueError("Prismatic models only support up to 2 (fused) vision backbones!")
98
+
99
+ # Create primary featurizer
100
+ self.featurizer = self._create_featurizer(
101
+ model_id=timm_model_ids[0], img_size=image_sizes[0], act_layer=timm_override_act_layers[0]
102
+ )
103
+ self.embed_dim = self.featurizer.embed_dim
104
+
105
+ # Create secondary featurizer if using fused backbone
106
+ if self.use_fused_vision_backbone:
107
+ self.fused_featurizer = self._create_featurizer(
108
+ model_id=timm_model_ids[1], img_size=image_sizes[1], act_layer=timm_override_act_layers[1]
109
+ )
110
+ self.embed_dim += self.fused_featurizer.embed_dim
111
+
112
+ # Patch LayerScale modules for HF compatibility
113
+ self._patch_layer_scales()
114
+
115
+ def _create_featurizer(self, model_id: str, img_size: int, act_layer: Optional[str]) -> nn.Module:
116
+ """
117
+ Create a TIMM-based featurizer model with appropriate configurations.
118
+
119
+ Args:
120
+ model_id: The TIMM model ID to load
121
+ img_size: Input image size for the model
122
+ act_layer: Override for the activation layer type
123
+
124
+ Returns:
125
+ A configured featurizer model
126
+ """
127
+ featurizer = timm.create_model(
128
+ model_id,
129
+ pretrained=False,
130
+ num_classes=0,
131
+ img_size=img_size,
132
+ act_layer=act_layer,
133
+ )
134
+
135
+ # Monkey-patch the forward function to extract the second-to-last layer features
136
+ num_blocks = len(featurizer.blocks)
137
+ featurizer.forward = unpack_tuple(partial(featurizer.get_intermediate_layers, n={num_blocks - 2}))
138
+
139
+ return featurizer
140
+
141
+ def _patch_layer_scales(self) -> None:
142
+ """
143
+ Patch all LayerScale modules to be compatible with HF's parameter naming.
144
+
145
+ HF Transformers overwrites parameters with names containing 'gamma',
146
+ so we need to rename and modify the forward method.
147
+ """
148
+ # Patch primary featurizer
149
+ for module in self.featurizer.modules():
150
+ if isinstance(module, LayerScale):
151
+ ls_apply_patch(module)
152
+
153
+ # Patch secondary featurizer if it exists
154
+ if self.use_fused_vision_backbone:
155
+ for module in self.fused_featurizer.modules():
156
+ if isinstance(module, LayerScale):
157
+ ls_apply_patch(module)
158
+
159
+ def get_num_patches(self) -> int:
160
+ """
161
+ Returns the number of vision patches output by the vision backbone.
162
+
163
+ Returns:
164
+ Number of patches per image
165
+ """
166
+ return self.featurizer.patch_embed.num_patches
167
+
168
+ def get_num_images_in_input(self) -> int:
169
+ """
170
+ Returns the number of input images for the vision backbone.
171
+
172
+ Returns:
173
+ Number of images expected in the input
174
+ """
175
+ return self.num_images_in_input
176
+
177
+ def set_num_images_in_input(self, num_images_in_input: int) -> None:
178
+ """
179
+ Sets the number of input images for the vision backbone.
180
+
181
+ Args:
182
+ num_images_in_input: Number of images to expect in the input
183
+ """
184
+ self.num_images_in_input = num_images_in_input
185
+
186
+ def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
187
+ """
188
+ Implements the forward pass for the vision backbone.
189
+
190
+ If `self.use_fused_vision_backbone == True`, uses both SigLIP and DINOv2 transformers to extract visual features
191
+ (otherwise uses SigLIP only). Allows multi-image inputs (but only for fused vision backbone).
192
+
193
+ Args:
194
+ pixel_values (torch.Tensor): Pixels for input image(s), (B, C, H, W).
195
+ """
196
+ if self.num_images_in_input == 1:
197
+ if not self.use_fused_vision_backbone:
198
+ return self.featurizer(pixel_values)
199
+
200
+ # Split `pixel_values :: [bsz, 2 * 3, resolution, resolution]` =>> featurize =>> channel stack
201
+ img, img_fused = torch.split(pixel_values, [3, 3], dim=1)
202
+ patches, patches_fused = self.featurizer(img), self.fused_featurizer(img_fused)
203
+
204
+ return torch.cat([patches, patches_fused], dim=2)
205
+
206
+ else:
207
+ assert self.use_fused_vision_backbone, "Multi-image inputs require using fused backbone!"
208
+
209
+ # Split `pixel_values` into individual images (each with 6 channels: 3 for SigLIP + 3 for DINOv2)
210
+ images = torch.split(pixel_values, [6] * self.num_images_in_input, dim=1)
211
+
212
+ # Process each image and collect patches
213
+ all_patches = []
214
+ for img in images:
215
+ # Split each image further into two stacks of channels (each with 3 channels)
216
+ img_regular, img_fused = torch.split(img, [3, 3], dim=1)
217
+
218
+ # Get patches from both SigLIP and DINOv2 vision transformers
219
+ patches = self.featurizer(img_regular)
220
+ patches_fused = self.fused_featurizer(img_fused)
221
+
222
+ # Concatenate SigLIP and DINOv2 patches along the hidden dimension
223
+ combined_patches = torch.cat([patches, patches_fused], dim=2)
224
+ all_patches.append(combined_patches)
225
+
226
+ # Concatenate all patches along the patch dimension
227
+ return torch.cat(all_patches, dim=1)
228
+
229
+
230
+ # === Prismatic Projector (nn.Module) Definitions ===
231
+ class PrismaticProjector(nn.Module):
232
+ def __init__(self, use_fused_vision_backbone: bool, vision_dim: int, llm_dim: int) -> None:
233
+ super().__init__()
234
+ self.use_fused_vision_backbone = use_fused_vision_backbone
235
+ self.vision_dim, self.llm_dim = vision_dim, llm_dim
236
+
237
+ # Switch on `use_fused_vision_backbone` =>> use slightly different MLPs and projection factors!
238
+ if not self.use_fused_vision_backbone:
239
+ self.fc1 = nn.Linear(self.vision_dim, self.llm_dim, bias=True)
240
+ self.fc2 = nn.Linear(self.llm_dim, self.llm_dim, bias=True)
241
+ self.act_fn1 = nn.GELU()
242
+ else:
243
+ initial_projection_dim = 4 * vision_dim
244
+ self.fc1 = nn.Linear(self.vision_dim, initial_projection_dim, bias=True)
245
+ self.fc2 = nn.Linear(initial_projection_dim, self.llm_dim, bias=True)
246
+ self.fc3 = nn.Linear(self.llm_dim, self.llm_dim, bias=True)
247
+ self.act_fn1 = nn.GELU()
248
+ self.act_fn2 = nn.GELU()
249
+
250
+ def forward(self, img_patches: torch.Tensor) -> torch.Tensor:
251
+ if not self.use_fused_vision_backbone:
252
+ projected_features = self.fc1(img_patches)
253
+ projected_features = self.act_fn1(projected_features)
254
+ projected_features = self.fc2(projected_features)
255
+ else:
256
+ projected_features = self.fc1(img_patches)
257
+ projected_features = self.act_fn1(projected_features)
258
+ projected_features = self.fc2(projected_features)
259
+ projected_features = self.act_fn2(projected_features)
260
+ projected_features = self.fc3(projected_features)
261
+
262
+ return projected_features
263
+
264
+
265
+ # === Main HF Class Definitions ===
266
+ @dataclass
267
+ class PrismaticCausalLMOutputWithPast(ModelOutput):
268
+ """Base class for Prismatic casual (visually-conditioned) language model outputs; also exposes visual features."""
269
+
270
+ loss: Optional[torch.FloatTensor] = None
271
+ logits: torch.FloatTensor = None
272
+ past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
273
+ hidden_states: Optional[Tuple[torch.FloatTensor, ...]] = None
274
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
275
+
276
+ # Additions for VLMs
277
+ projector_features: Optional[torch.FloatTensor] = None
278
+
279
+
280
+ class PrismaticPreTrainedModel(PreTrainedModel):
281
+ config_class: PretrainedConfig = PrismaticConfig
282
+ base_model_prefix: str = "model"
283
+ supports_gradient_checkpointing: bool = True
284
+
285
+ _no_split_modules: ClassVar[List[str]] = ["PrismaticProjector"]
286
+ _skip_keys_device_placement: str = "past_key_values"
287
+ _supports_flash_attn_2: bool = True
288
+
289
+ def _init_weights(self, module: nn.Module) -> None:
290
+ # Important :: this HF ported version is *not* meant for training from scratch; only inference and fine-tuning!
291
+ # => As such, this init_weights code is not correct; if training VLMs from scratch, use the main codebase at
292
+ # https://github.com/TRI-ML/prismatic-vlms
293
+ std = (
294
+ self.config.initializer_range
295
+ if hasattr(self.config, "initializer_range")
296
+ else self.config.text_config.initializer_range
297
+ )
298
+
299
+ if hasattr(module, "class_embedding"):
300
+ module.class_embedding.data.normal_(mean=0.0, std=std)
301
+
302
+ if isinstance(module, (nn.Linear, nn.Conv2d)):
303
+ module.weight.data.normal_(mean=0.0, std=std)
304
+ if module.bias is not None:
305
+ module.bias.data.zero_()
306
+ elif isinstance(module, nn.Embedding):
307
+ module.weight.data.normal_(mean=0.0, std=std)
308
+ if module.padding_idx is not None:
309
+ module.weight.data[module.padding_idx].zero_()
310
+
311
+ @property
312
+ def _supports_sdpa(self) -> bool:
313
+ """Check LLM supports SDPA Attention"""
314
+ return self.language_model._supports_sdpa
315
+
316
+
317
+ class PrismaticForConditionalGeneration(PrismaticPreTrainedModel):
318
+ def __init__(self, config: PrismaticConfig) -> None:
319
+ super().__init__(config)
320
+
321
+ # [Validation] Lightweight Validate on `config` Fields + Dependency Versions
322
+ if config.use_fused_vision_backbone is None:
323
+ raise ValueError("Missing config field `use_fused_vision_backbone`")
324
+
325
+ if timm.__version__ not in {"0.9.10", "0.9.11", "0.9.12", "0.9.16"}:
326
+ raise NotImplementedError(
327
+ "TIMM Version must be >= 0.9.10 and < 1.0.0 (breaking); please raise a GitHub Issue "
328
+ "if you urgently need support for latest TIMM versions."
329
+ )
330
+
331
+ if (transformers.__version__ != "4.40.1") or (tokenizers.__version__ != "0.19.1"):
332
+ logger.warning(
333
+ f"Expected `transformers==4.40.1` and `tokenizers==0.19.1` but got "
334
+ f"`transformers=={transformers.__version__}` and `tokenizers=={tokenizers.__version__}`; "
335
+ f"there might be inference-time regressions due to dependency changes. If in doubt, please"
336
+ f"use the above versions."
337
+ )
338
+
339
+ # Instantiate PrismaticVisionBackbone (w/ Potential Fused Backbone)
340
+ self.vision_backbone = PrismaticVisionBackbone(
341
+ config.use_fused_vision_backbone, config.image_sizes, config.timm_model_ids, config.timm_override_act_layers
342
+ )
343
+
344
+ # Create Multimodal Projector
345
+ self.projector = PrismaticProjector(
346
+ config.use_fused_vision_backbone,
347
+ vision_dim=self.vision_backbone.embed_dim,
348
+ llm_dim=config.text_config.hidden_size,
349
+ )
350
+
351
+ # Instantiate LLM Backbone
352
+ # Detect whether this is a pruned model
353
+ if hasattr(config.text_config, 'is_pruned') and config.text_config.is_pruned:
354
+ logger.info("检测到剪枝模型,使用 PrunedLlamaForCausalLM")
355
+
356
+ # Dynamically import pruned_llama (avoids issues with special characters in the module name)
357
+ import importlib.util
358
+ import os
359
+
360
+ # Locate pruned_llama.py
361
+ # Priority 1: look under the original model path (config._name_or_path)
362
+ # Priority 2: look in the directory containing this file (the HF cache)
363
+ pruned_llama_path = None
364
+
365
+ # Try the original model path first
366
+ if hasattr(config, '_name_or_path') and config._name_or_path:
367
+ original_model_path = config._name_or_path
368
+ candidate_path = os.path.join(original_model_path, "pruned_llama.py")
369
+ if os.path.exists(candidate_path):
370
+ pruned_llama_path = candidate_path
371
+ logger.info(f"从原始模型路径加载 pruned_llama.py: {pruned_llama_path}")
372
+
373
+ # If not found there, fall back to the current directory (HF cache)
374
+ if pruned_llama_path is None:
375
+ current_dir = os.path.dirname(os.path.abspath(__file__))
376
+ candidate_path = os.path.join(current_dir, "pruned_llama.py")
377
+ if os.path.exists(candidate_path):
378
+ pruned_llama_path = candidate_path
379
+ logger.info(f"从 cache 目录加载 pruned_llama.py: {pruned_llama_path}")
380
+
381
+ # If it still cannot be found, raise an error
382
+ if pruned_llama_path is None:
383
+ raise FileNotFoundError(
384
+ f"未找到 pruned_llama.py\n"
385
+ f" 已尝试位置:\n"
386
+ f" 1. 原始模型路径: {config._name_or_path if hasattr(config, '_name_or_path') else 'N/A'}\n"
387
+ f" 2. Cache 目录: {os.path.dirname(os.path.abspath(__file__))}\n"
388
+ f" 请确保 pruned_llama.py 存在于原始模型目录中。"
389
+ )
390
+
391
+ # 动态加载模块
392
+ spec = importlib.util.spec_from_file_location("pruned_llama_module", pruned_llama_path)
393
+ pruned_llama_module = importlib.util.module_from_spec(spec)
394
+ spec.loader.exec_module(pruned_llama_module)
395
+
396
+ PrunedLlamaForCausalLM = pruned_llama_module.PrunedLlamaForCausalLM
397
+ PrunedLlamaConfig = pruned_llama_module.PrunedLlamaConfig
398
+
399
+ # 创建 PrunedLlamaConfig
400
+ pruned_config = PrunedLlamaConfig(**config.text_config.to_dict())
401
+ pruned_config.pruned_num_heads = config.text_config.pruned_num_heads
402
+ pruned_config.pruned_intermediate_sizes = config.text_config.pruned_intermediate_sizes
403
+ pruned_config.is_pruned = True
404
+ pruned_config._attn_implementation = config._attn_implementation
405
+
406
+ # 使用 PrunedLlamaForCausalLM
407
+ self.language_model = PrunedLlamaForCausalLM(pruned_config)
408
+ else:
409
+ # 标准模型
410
+ self.language_model = AutoModelForCausalLM.from_config(
411
+ config.text_config, attn_implementation=config._attn_implementation
412
+ )
413
+ self.vocab_size = config.text_config.vocab_size
414
+ self.pad_token_id = config.pad_token_id
415
+ self.llm_dim = config.text_config.hidden_size
416
+
417
+ # HF Boilerplate =>> initializes weights via `_init_weights()` and sets gradient checkpointing
418
+ self.post_init()
419
+
420
+ # === `PreTrainedModel` Boilerplate ===
421
+ def get_input_embeddings(self) -> nn.Module:
422
+ return self.language_model.get_input_embeddings()
423
+
424
+ def set_input_embeddings(self, value: nn.Module) -> None:
425
+ self.language_model.set_input_embeddings(value)
426
+
427
+ def get_output_embeddings(self) -> nn.Module:
428
+ return self.language_model.get_output_embeddings()
429
+
430
+ def set_output_embeddings(self, new_embeddings: nn.Module) -> None:
431
+ self.language_model.set_output_embeddings(new_embeddings)
432
+
433
+ def get_decoder(self) -> nn.Module:
434
+ return self.language_model.get_decoder()
435
+
436
+ def set_decoder(self, decoder: nn.Module) -> None:
437
+ self.language_model.set_decoder(decoder)
438
+
439
+ def tie_weights(self) -> None:
440
+ self.language_model.tie_weights() # Note: `Llama-2` and `Mistral` don't tie weights (no-op)
441
+
442
+ def resize_token_embeddings(
443
+ self, new_num_tokens: Optional[int] = None, pad_to_multiple_of: Optional[int] = None
444
+ ) -> nn.Embedding:
445
+ updated_embeddings = self.language_model.resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
446
+
447
+ # Update config/instance variables
448
+ self.config.text_config.vocab_size = updated_embeddings.num_embeddings
449
+ self.vocab_size = updated_embeddings.num_embeddings
450
+
451
+ return updated_embeddings
452
+
453
+ def _replace_input_embeddings(self, input_embeddings, all_actions_mask, noisy_action_features):
454
+ """
455
+ Replace embeddings in input_embeddings at positions where all_actions_mask is True
456
+ with embeddings from noisy_action_features, using vectorized operations.
457
+
458
+ Args:
459
+ input_embeddings: Tensor of shape (B, S, D)
460
+ all_actions_mask: Boolean tensor of shape (B, S)
461
+ noisy_action_features: Tensor of shape (B, K, D) where K is the number of True values in mask per sample
462
+
463
+ Returns:
464
+ Modified input_embeddings tensor
465
+ """
466
+ # Clone input to avoid modifying the original tensor
467
+ new_input_embeddings = input_embeddings.clone()
468
+
469
+ # Create a tensor with the same shape of input_embeddings to hold the noisy action features
470
+ repositioned_noisy_action_features = torch.zeros_like(input_embeddings)
471
+
472
+ # Create batch indices for splicing
473
+ batch_indices = torch.arange(input_embeddings.shape[0], device=input_embeddings.device)
474
+ batch_indices = batch_indices.unsqueeze(1).expand(-1, noisy_action_features.shape[1])
475
+
476
+ # Get indices where mask is True for each sample
477
+ masked_indices = torch.stack([torch.where(mask)[0] for mask in all_actions_mask])
478
+
479
+ # Move the noisy action features into their correct positions
480
+ repositioned_noisy_action_features[batch_indices, masked_indices] = noisy_action_features
481
+
482
+ # Combine original input embeddings and noisy action embeddings using the mask
483
+ new_input_embeddings = torch.where(
484
+ all_actions_mask.unsqueeze(-1), repositioned_noisy_action_features, new_input_embeddings
485
+ )
486
+
487
+ return new_input_embeddings
488
+
489
+ def _process_action_masks(self, labels):
490
+ """Helper to get action masks from labels"""
491
+ current_action_mask = get_current_action_mask(labels)
492
+ next_actions_mask = get_next_actions_mask(labels)
493
+ all_actions_mask = current_action_mask | next_actions_mask # (B, seq_len)
494
+ return all_actions_mask
495
+
496
+ def _process_vision_features(self, pixel_values, language_embeddings=None, use_film=False):
497
+ """Process vision features with optional FiLM conditioning"""
498
+ if use_film:
499
+ # FiLM: Infuse language inputs into visual features
500
+ patch_features = self.vision_backbone(pixel_values, language_embeddings) # (bsz, 256 * num_images, D)
501
+ else:
502
+ patch_features = self.vision_backbone(pixel_values) # (bsz, 256 * num_images, D)
503
+
504
+ # Project patch embeddings into language embedding space
505
+ return self.projector(patch_features)
506
+
507
+ def _process_proprio_features(self, projected_patch_embeddings, proprio, proprio_projector):
508
+ """Process proprioceptive features and append to vision features"""
509
+ if proprio_projector is not None and proprio is not None:
510
+ # projected_patch_embeddings: (bsz, num_patches * num_images, llm_dim)
511
+ # proprio: (bsz, proprio_dim) or (proprio_dim,)
512
+ proprio = proprio.reshape(projected_patch_embeddings.shape[0], -1) # (bsz, proprio_dim)
513
+ proprio_features = proprio_projector(proprio) # (bsz, llm_dim)
514
+ proprio_features = proprio_features.unsqueeze(dim=1) # (bsz, 1, llm_dim)
515
+ # For simplicity, just append proprio token to the end of projected vision patch tokens
516
+ return torch.cat((projected_patch_embeddings, proprio_features), dim=1)
517
+ return projected_patch_embeddings
518
+
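A shape-level sketch of the proprio path above (dimensions are hypothetical; the Linear layer is a stand-in for the repo's trained proprio projector):

import torch

patches = torch.randn(2, 512, 4096)                            # (bsz, num_patches * num_images, llm_dim)
proprio = torch.randn(2, 8)                                    # (bsz, proprio_dim)
proprio_projector = torch.nn.Linear(8, 4096)                   # stand-in projector
proprio_token = proprio_projector(proprio).unsqueeze(dim=1)    # (bsz, 1, llm_dim)
fused = torch.cat((patches, proprio_token), dim=1)
print(fused.shape)                                             # torch.Size([2, 513, 4096])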
519
+ def _build_multimodal_attention(self, input_embeddings, projected_patch_embeddings, attention_mask):
520
+ """Build multimodal embeddings and attention mask"""
521
+ # Update attention mask
522
+ projected_patch_attention_mask = None
523
+ if attention_mask is not None:
524
+ projected_patch_attention_mask = torch.full(
525
+ (projected_patch_embeddings.shape[0], projected_patch_embeddings.shape[1]),
526
+ fill_value=True,
527
+ dtype=attention_mask.dtype,
528
+ device=attention_mask.device,
529
+ )
530
+
531
+ # Build multimodal embeddings & attention mask; insert embeddings after <BOS> token (1:)
532
+ multimodal_embeddings = torch.cat(
533
+ [input_embeddings[:, :1, :], projected_patch_embeddings, input_embeddings[:, 1:, :]], dim=1
534
+ )
535
+
536
+ multimodal_attention_mask = None
537
+ if attention_mask is not None:
538
+ multimodal_attention_mask = torch.cat(
539
+ [attention_mask[:, :1], projected_patch_attention_mask, attention_mask[:, 1:]], dim=1
540
+ )
541
+
542
+ return multimodal_embeddings, multimodal_attention_mask
543
+
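A toy check of the splice above: patch embeddings are inserted immediately after the first (<BOS>) position, and the attention mask grows by the same amount (shapes are illustrative only):

import torch

text_emb = torch.randn(1, 5, 8)                      # <BOS> + 4 text tokens
patch_emb = torch.randn(1, 3, 8)                     # 3 projected patch "tokens"
attn = torch.ones(1, 5, dtype=torch.long)

mm_emb = torch.cat([text_emb[:, :1, :], patch_emb, text_emb[:, 1:, :]], dim=1)
mm_attn = torch.cat([attn[:, :1], torch.ones(1, 3, dtype=attn.dtype), attn[:, 1:]], dim=1)
print(mm_emb.shape, mm_attn.shape)                   # torch.Size([1, 8, 8]) torch.Size([1, 8])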
544
+ def _build_multimodal_labels(self, labels, projected_patch_embeddings):
545
+ """Build multimodal labels with IGNORE_INDEX for patch embeddings"""
546
+ if labels is not None:
547
+ projected_patch_labels = torch.full(
548
+ (projected_patch_embeddings.shape[0], projected_patch_embeddings.shape[1]),
549
+ fill_value=IGNORE_INDEX,
550
+ dtype=labels.dtype,
551
+ device=labels.device,
552
+ )
553
+ return torch.cat([labels[:, :1], projected_patch_labels, labels[:, 1:]], dim=1)
554
+ return None
555
+
556
+ # === Core Prismatic VLM `forward()` Logic ===
557
+ def forward(
558
+ self,
559
+ input_ids: Optional[torch.LongTensor] = None,
560
+ attention_mask: Optional[torch.Tensor] = None,
561
+ pixel_values: Optional[torch.FloatTensor] = None,
562
+ labels: Optional[torch.LongTensor] = None,
563
+ inputs_embeds: Optional[torch.FloatTensor] = None,
564
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
565
+ use_cache: Optional[bool] = None,
566
+ output_attentions: Optional[bool] = None,
567
+ output_hidden_states: Optional[bool] = None,
568
+ output_projector_features: Optional[bool] = None,
569
+ return_dict: Optional[bool] = None,
570
+ proprio=None,
571
+ proprio_projector=None,
572
+ noisy_actions=None,
573
+ noisy_action_projector=None,
574
+ diffusion_timestep_embeddings=None,
575
+ use_film: bool = False,
576
+ ) -> Union[Tuple, PrismaticCausalLMOutputWithPast]:
577
+ """Run a forward pass through the VLM, returning a PrismaticCausalLMOutputWithPast instance."""
578
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
579
+ output_hidden_states = (
580
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
581
+ )
582
+ output_projector_features = output_projector_features if output_projector_features is not None else False
583
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
584
+
585
+ # Respect `use_cache` only if not training (even if `gradient_checkpointing` is off)
586
+ use_cache = use_cache and not self.training
587
+
588
+ # Instantiate Placeholder for Projector Features
589
+ projected_patch_embeddings = None
590
+
591
+ # === Handle Generation with Cache (`input_ids.shape[1] == 1`) =>> requires `past_key_values` ===
592
+ if input_ids.shape[1] == 1:
593
+ assert input_ids.shape[0] == 1, "Generation is only currently supported for batch size of 1!"
594
+ assert past_key_values is not None, "You must provide `past_key_values` during cached generation!"
595
+ assert labels is None, "Unexpected key `labels` provided during cached generation!"
596
+
597
+ language_model_output = self.language_model(
598
+ input_ids=input_ids,
599
+ attention_mask=None,
600
+ position_ids=None,
601
+ past_key_values=past_key_values,
602
+ inputs_embeds=None,
603
+ labels=None,
604
+ use_cache=use_cache,
605
+ output_attentions=output_attentions,
606
+ output_hidden_states=output_hidden_states,
607
+ return_dict=return_dict,
608
+ )
609
+
610
+ # === Handle Unimodal Forward ===
611
+ elif pixel_values is None:
612
+ assert (input_ids is not None) and (inputs_embeds is None), "Missing `input_ids` in language-only forward!"
613
+ assert past_key_values is None, "Unexpected key `past_key_values` provided during language-only forward!"
614
+
615
+ language_model_output = self.language_model(
616
+ input_ids=input_ids,
617
+ attention_mask=attention_mask,
618
+ position_ids=None,
619
+ past_key_values=None,
620
+ inputs_embeds=None,
621
+ labels=labels,
622
+ use_cache=use_cache,
623
+ output_attentions=output_attentions,
624
+ output_hidden_states=output_hidden_states,
625
+ return_dict=return_dict,
626
+ )
627
+
628
+ # === Handle Multimodal Forward ===
629
+ elif (input_ids.shape[0] == pixel_values.shape[0]) or (inputs_embeds.shape[0] == pixel_values.shape[0]):
630
+ assert past_key_values is None, "Unexpected key `past_key_values` provided during multimodal forward!"
631
+
632
+ # Get input embeddings (from language model embeddings)
633
+ input_embeddings = self.get_input_embeddings()(input_ids) # (B, seq_len, D)
634
+
635
+ # Extract action masks
636
+ all_actions_mask = self._process_action_masks(labels)
637
+
638
+ # Extract the language portion of the input embeddings (i.e. remove the action tokens portion)
639
+ language_embeddings = input_embeddings[~all_actions_mask].reshape(
640
+ input_embeddings.shape[0], -1, input_embeddings.shape[2]
641
+ ) # (B, lang_seq_len, llm_dim)
642
+
643
+ # Get visual features
644
+ projected_patch_embeddings = self._process_vision_features(pixel_values, language_embeddings, use_film)
645
+
646
+ # Add proprioceptive state if provided
647
+ projected_patch_embeddings = self._process_proprio_features(
648
+ projected_patch_embeddings, proprio, proprio_projector
649
+ )
650
+
651
+ # [Diffusion] Add diffusion timestep embedding if provided
652
+ if diffusion_timestep_embeddings is not None:
653
+ # For simplicity, just append diffusion timestep embedding to the end of projected vision patch tokens
654
+ projected_patch_embeddings = torch.cat(
655
+ (projected_patch_embeddings, diffusion_timestep_embeddings), dim=1
656
+ )
657
+
658
+ # Process action embeddings
659
+ if noisy_actions is not None:
660
+ # Get mask corresponding to all action tokens
661
+ all_actions_mask = self._process_action_masks(labels)
662
+
663
+ # Reshape noisy actions into individual action tokens
664
+ # noisy_actions: (B, chunk_len, action_dim) -> (B, chunk_len * action_dim, 1)
665
+ B = noisy_actions.shape[0]
666
+ noisy_actions = noisy_actions.reshape(B, -1).unsqueeze(-1)
667
+
668
+ # Project noisy action tokens into language model embedding space
669
+ noisy_action_features = noisy_action_projector(noisy_actions) # (B, chunk_len * action_dim, llm_dim)
670
+
671
+ # Replace embeddings of the action tokens with noisy action embeddings
672
+ input_embeddings = self._replace_input_embeddings(
673
+ input_embeddings, all_actions_mask, noisy_action_features
674
+ )
675
+ else:
676
+ # Replace the embeddings of the action tokens with zeros
677
+ # (Later on, the positional embeddings will be added to them)
678
+ all_actions_mask = all_actions_mask.unsqueeze(-1) # (B, seq_len, 1)
679
+ input_embeddings = input_embeddings * ~all_actions_mask
680
+
681
+ # Build multimodal embeddings & attention mask
682
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
683
+ input_embeddings, projected_patch_embeddings, attention_mask
684
+ )
685
+
686
+ # Build labels for multimodal sequence if needed
687
+ multimodal_labels = self._build_multimodal_labels(labels, projected_patch_embeddings)
688
+
689
+ # Dispatch to language model
690
+ language_model_output = self.language_model(
691
+ input_ids=None,
692
+ attention_mask=multimodal_attention_mask,
693
+ position_ids=None,
694
+ past_key_values=None,
695
+ inputs_embeds=multimodal_embeddings,
696
+ labels=multimodal_labels,
697
+ use_cache=use_cache,
698
+ output_attentions=output_attentions,
699
+ output_hidden_states=output_hidden_states,
700
+ return_dict=return_dict,
701
+ )
702
+
703
+ # === Otherwise =>> Assume Invalid! ===
704
+ elif (input_ids.shape[0] != pixel_values.shape[0]) or (inputs_embeds.shape[0] != pixel_values.shape[0]):
705
+ raise ValueError("Non-homogenous batch of (text, image) input -- forward() does not support mixed batches!")
706
+
707
+ else:
708
+ raise ValueError(
709
+ "Invalid PrismaticForConditionalGeneration `forward()` call with provided arguments:\n"
710
+ f"=> `input_ids` = {input_ids is not None}\n"
711
+ f"=> `attention_mask` = {attention_mask is not None}\n"
712
+ f"=> `pixel_values` = {pixel_values is not None}\n"
713
+ f"=> `labels` = {labels is not None}\n"
714
+ f"=> `input_embeds` = {inputs_embeds is not None}\n"
715
+ f"=> `past_key_values` = {past_key_values is not None}\n"
716
+ f"=> `use_cache` = {use_cache}"
717
+ )
718
+
719
+ # Unpack `language_model_output` and return PrismaticCausalLMOutputWithPast (or tuple if not `return_dict`)
720
+ if not return_dict:
721
+ if output_projector_features and (projected_patch_embeddings is not None):
722
+ return *language_model_output, projected_patch_embeddings
723
+
724
+ return language_model_output
725
+
726
+ return PrismaticCausalLMOutputWithPast(
727
+ loss=language_model_output.loss,
728
+ logits=language_model_output.logits,
729
+ past_key_values=language_model_output.past_key_values,
730
+ hidden_states=language_model_output.hidden_states,
731
+ attentions=language_model_output.attentions,
732
+ projector_features=projected_patch_embeddings,
733
+ )
734
+
735
+ # === GenerationMixin Methods ===
736
+ def prepare_inputs_for_generation(
737
+ self,
738
+ input_ids: Optional[torch.Tensor] = None,
739
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
740
+ inputs_embeds: Optional[torch.FloatTensor] = None,
741
+ pixel_values: Optional[torch.FloatTensor] = None,
742
+ attention_mask: Optional[torch.Tensor] = None,
743
+ **kwargs: str,
744
+ ) -> Dict[str, torch.Tensor]:
745
+ """Borrowed from `LlamaForCausalLM` and simplified for batch size = 1; mirrors original PrismaticVLM logic."""
746
+ if ((input_ids is not None) and (input_ids.shape[0] > 1)) or (
747
+ (inputs_embeds is not None) and (inputs_embeds.shape[0] > 1)
748
+ ):
749
+ raise ValueError("Generation with batch size > 1 is not currently supported!")
750
+
751
+ # Handle `past_key_values` (cache) =>> assume `input_ids` just has unprocessed tokens
752
+ if past_key_values is not None:
753
+ input_ids = input_ids[:, -1:]
754
+
755
+ # If `inputs_embeds` are passed, we only want to use them in the 1st generation step
756
+ if inputs_embeds is not None and past_key_values is None:
757
+ model_inputs = {"input_embeds": inputs_embeds}
758
+ else:
759
+ model_inputs = {"input_ids": input_ids}
760
+
761
+ # Make sure `pixel_values` are preserved in `model_inputs`
762
+ model_inputs.update(
763
+ {
764
+ "attention_mask": attention_mask,
765
+ "pixel_values": pixel_values,
766
+ "past_key_values": past_key_values,
767
+ "use_cache": kwargs.get("use_cache"),
768
+ }
769
+ )
770
+
771
+ return model_inputs
772
+
773
+ # Defer to Language Model (all handle this differently, with different return types)
774
+ def _reorder_cache(self, *args, **kwargs) -> Any:
775
+ return self.language_model._reorder_cache(*args, **kwargs)
776
+
777
+
778
+ class OpenVLAForActionPrediction(PrismaticForConditionalGeneration):
779
+ config_class: PretrainedConfig = OpenVLAConfig
780
+
781
+ def __init__(self, config: OpenVLAConfig) -> None:
782
+ super().__init__(config)
783
+ self.norm_stats = config.norm_stats
784
+
785
+ # Compute action bins
786
+ self.bins = np.linspace(-1, 1, config.n_action_bins)
787
+ self.bin_centers = (self.bins[:-1] + self.bins[1:]) / 2.0
788
+
789
+ # Compute the original vocab size for de-tokenization -- subtract the extra tokens added when padding the vocab up to a multiple
790
+ self.vocab_size = self.config.text_config.vocab_size - self.config.pad_to_multiple_of
791
+
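For reference, the bin bookkeeping above with a hypothetical `n_action_bins = 256`: 256 edges yield 255 centers, all inside [-1, 1].

import numpy as np

bins = np.linspace(-1, 1, 256)
bin_centers = (bins[:-1] + bins[1:]) / 2.0
print(bins.shape, bin_centers.shape)     # (256,) (255,)
print(bin_centers[0], bin_centers[-1])   # ~-0.996, ~0.996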
792
+ def _prepare_input_for_action_prediction(self, input_ids, attention_mask):
793
+ """Prepares input for action prediction by adding necessary tokens"""
794
+ # Add (ACTION_DIM * NUM_ACTIONS_CHUNK) placeholder tokens to input_ids to simulate action tokens
795
+ placeholder_action_token_ids = (
796
+ torch.ones((input_ids.shape[0], ACTION_DIM * NUM_ACTIONS_CHUNK)).to(input_ids.device).to(input_ids.dtype)
797
+ )
798
+ input_ids = torch.cat([input_ids, placeholder_action_token_ids], dim=-1)
799
+
800
+ # Add stop token to sequence (needed in non-causal bi-directional self-attention, as it appears at train time)
801
+ stop_token_id = torch.ones((input_ids.shape[0], 1)).to(input_ids.device).to(input_ids.dtype) * STOP_INDEX
802
+ input_ids = torch.cat([input_ids, stop_token_id], dim=-1)
803
+
804
+ # Extend the attention mask to fit the new shape of input
805
+ # Note: Only batch size == 1 supported right now
806
+ mask_extension = (
807
+ torch.ones((attention_mask.shape[0], input_ids.shape[-1] - attention_mask.shape[-1]))
808
+ .to(attention_mask.device)
809
+ .to(attention_mask.dtype)
810
+ )
811
+ attention_mask = torch.cat([attention_mask, mask_extension], dim=-1)
812
+
813
+ return input_ids, attention_mask
814
+
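A quick count of what the helper above appends, using hypothetical ACTION_DIM = 7 and NUM_ACTIONS_CHUNK = 8 (the real values come from the training-time constants):

ACTION_DIM, NUM_ACTIONS_CHUNK = 7, 8                  # assumed values, for illustration only
extra_tokens = ACTION_DIM * NUM_ACTIONS_CHUNK + 1     # 56 action placeholders + 1 stop token
print(extra_tokens)                                   # 57 extra input ids (and mask entries) per sample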
815
+ def _prepare_labels_for_action_prediction(self, labels, input_ids):
816
+ """Creates labels tensor for action prediction if not provided"""
817
+ # Extend labels tensor with fake action labels
818
+ ARBITRARY_ACTION_TOKEN_IDX = ACTION_TOKEN_BEGIN_IDX + 1
819
+ labels_extension = (
820
+ torch.ones((labels.shape[0], input_ids.shape[-1] - labels.shape[-1])).to(labels.device).to(labels.dtype)
821
+ * ARBITRARY_ACTION_TOKEN_IDX
822
+ )
823
+ labels = torch.cat([labels, labels_extension], dim=-1)
824
+
825
+ # Replace last label token with stop token
826
+ labels[:, -1] = STOP_INDEX
827
+
828
+ return labels
829
+
830
+ def _unnormalize_actions(self, normalized_actions, unnorm_key=None):
831
+ """Unnormalize actions using dataset statistics"""
832
+ action_norm_stats = self.get_action_stats(unnorm_key)
833
+
834
+ if ACTION_PROPRIO_NORMALIZATION_TYPE == NormalizationType.BOUNDS:
835
+ mask = action_norm_stats.get("mask", np.ones_like(action_norm_stats["min"], dtype=bool))
836
+ action_high, action_low = np.array(action_norm_stats["max"]), np.array(action_norm_stats["min"])
837
+ elif ACTION_PROPRIO_NORMALIZATION_TYPE == NormalizationType.BOUNDS_Q99:
838
+ mask = action_norm_stats.get("mask", np.ones_like(action_norm_stats["q01"], dtype=bool))
839
+ action_high, action_low = np.array(action_norm_stats["q99"]), np.array(action_norm_stats["q01"])
840
+ else:
841
+ raise ValueError("Unsupported action/proprio normalization type detected!")
842
+
843
+ actions = np.where(
844
+ mask,
845
+ 0.5 * (normalized_actions + 1) * (action_high - action_low + 1e-8) + action_low,
846
+ normalized_actions,
847
+ )
848
+
849
+ return actions
850
+
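A worked instance of the affine map above (the q01/q99 statistics are invented): a normalized action of -1 maps to the lower bound, +1 to the upper bound, and 0 to their midpoint.

import numpy as np

low, high = np.array([-0.05, -0.05]), np.array([0.05, 0.10])
a_norm = np.array([0.0, 1.0])
a = 0.5 * (a_norm + 1) * (high - low + 1e-8) + low
print(a)   # ~[0.0, 0.10]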
851
+ def _run_diffusion_prediction(
852
+ self,
853
+ input_embeddings,
854
+ all_actions_mask,
855
+ noise,
856
+ action_head,
857
+ projected_patch_embeddings,
858
+ labels,
859
+ attention_mask,
860
+ NUM_PATCHES,
861
+ NUM_PROMPT_TOKENS,
862
+ noisy_action_projector,
863
+ ):
864
+ """Run diffusion-based action prediction"""
865
+ # Clone embedding for reuse in each timestep
866
+ orig_projected_patch_embeddings = projected_patch_embeddings.clone()
867
+ curr_noisy_actions = noise
868
+
869
+ # Reverse diffusion: Iteratively denoise to generate action prediction
870
+ for t in action_head.noise_scheduler.timesteps:
871
+ # Get diffusion model's noise prediction (conditioned on VLA latent embedding, current noisy action
872
+ # embedding, and diffusion timestep embedding)
873
+ timesteps = torch.Tensor([t]).to(labels.device)
874
+ diffusion_timestep_embeddings = (
875
+ action_head.time_encoder(timesteps).to(curr_noisy_actions.dtype).to(curr_noisy_actions.device)
876
+ ) # (B, llm_dim)
877
+ diffusion_timestep_embeddings = diffusion_timestep_embeddings.unsqueeze(1) # (B, 1, llm_dim)
878
+
879
+ # [Diffusion] Replace the embeddings of the action tokens with noisy actions
880
+ # (Later on, the positional embeddings will be added to them)
881
+
882
+ # For simplicity, append diffusion timestep embedding to the end of projected vision tokens
883
+ projected_patch_embeddings = torch.cat(
884
+ (orig_projected_patch_embeddings, diffusion_timestep_embeddings), dim=1
885
+ )
886
+
887
+ # Reshape and project noisy actions into language embedding space
888
+ B = curr_noisy_actions.shape[0]
889
+ orig_curr_noisy_actions_shape = curr_noisy_actions.shape
890
+ curr_noisy_actions = curr_noisy_actions.reshape(B, -1).unsqueeze(-1)
891
+ noisy_action_features = noisy_action_projector(curr_noisy_actions)
892
+ curr_noisy_actions = curr_noisy_actions.reshape(orig_curr_noisy_actions_shape)
893
+
894
+ # Replace action token embeddings with noisy action embeddings
895
+ input_embeddings = self._replace_input_embeddings(
896
+ input_embeddings.clone(), all_actions_mask, noisy_action_features
897
+ )
898
+
899
+ # Build multimodal embeddings and attention mask
900
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
901
+ input_embeddings, projected_patch_embeddings, attention_mask
902
+ )
903
+
904
+ # Forward pass through language model
905
+ language_model_output = self.language_model(
906
+ input_ids=None,
907
+ attention_mask=multimodal_attention_mask,
908
+ position_ids=None,
909
+ past_key_values=None,
910
+ inputs_embeds=multimodal_embeddings,
911
+ labels=None,
912
+ use_cache=None,
913
+ output_attentions=False,
914
+ output_hidden_states=True,
915
+ return_dict=True,
916
+ )
917
+
918
+ # Extract hidden states for action portion of response
919
+ last_hidden_states = language_model_output.hidden_states[-1] # (B, seq_len, D)
920
+ actions_hidden_states = last_hidden_states[
921
+ :,
922
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
923
+ :,
924
+ ] # (B, act_chunk_len, D)
925
+
926
+ # Predict noise and update noisy actions: x_t -> x_{t-1}
927
+ noise_pred = action_head.predict_noise(actions_hidden_states)
928
+ curr_noisy_actions = action_head.noise_scheduler.step(noise_pred, t, curr_noisy_actions).prev_sample
929
+
930
+ curr_noisy_actions = curr_noisy_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
931
+
932
+ # Return final actions
933
+ return curr_noisy_actions.float().cpu().detach().numpy(), actions_hidden_states
934
+
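A shape-only sketch of the reverse-diffusion loop above, using a diffusers-style scheduler as a stand-in for the action head's noise scheduler (the zero "noise prediction" is a placeholder for the VLA-conditioned `predict_noise` call; sizes are hypothetical):

import torch
from diffusers import DDIMScheduler   # assumption: any scheduler exposing .timesteps / .step works the same way

scheduler = DDIMScheduler(num_train_timesteps=100)
scheduler.set_timesteps(10)
sample = torch.randn(1, 8, 7)                        # (B, NUM_ACTIONS_CHUNK, ACTION_DIM)
for t in scheduler.timesteps:
    noise_pred = torch.zeros_like(sample)            # placeholder for action_head.predict_noise(...)
    sample = scheduler.step(noise_pred, t, sample).prev_sample
print(sample.shape)                                  # torch.Size([1, 8, 7])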
935
+ def _regression_or_discrete_prediction(
936
+ self,
937
+ input_embeddings,
938
+ all_actions_mask,
939
+ projected_patch_embeddings,
940
+ attention_mask,
941
+ labels,
942
+ NUM_PATCHES,
943
+ NUM_PROMPT_TOKENS,
944
+ action_head=None,
945
+ ):
946
+ """Run L1 regression-based continuous action prediction or discrete action tokens prediction."""
947
+ # Zero out action token embeddings
948
+ all_actions_mask = all_actions_mask.unsqueeze(-1) # (B, seq_len, 1)
949
+ input_embeddings = input_embeddings * ~all_actions_mask
950
+
951
+ # Build multimodal embeddings and attention mask
952
+ multimodal_embeddings, multimodal_attention_mask = self._build_multimodal_attention(
953
+ input_embeddings, projected_patch_embeddings, attention_mask
954
+ )
955
+
956
+ # Forward pass through language model
957
+ language_model_output = self.language_model(
958
+ input_ids=None,
959
+ attention_mask=multimodal_attention_mask,
960
+ position_ids=None,
961
+ past_key_values=None,
962
+ inputs_embeds=multimodal_embeddings,
963
+ labels=None,
964
+ use_cache=None,
965
+ output_attentions=False,
966
+ output_hidden_states=True,
967
+ return_dict=True,
968
+ )
969
+
970
+ # Extract hidden states for action tokens
971
+ last_hidden_states = language_model_output.hidden_states[-1] # (B, seq_len, D)
972
+ actions_hidden_states = last_hidden_states[
973
+ :,
974
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
975
+ :,
976
+ ] # (B, act_chunk_len, D)
977
+
978
+ # Handle different prediction methods
979
+ if action_head is not None:
980
+ # L1 regression prediction
981
+ normalized_actions = action_head.predict_action(actions_hidden_states)
982
+ normalized_actions = normalized_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
983
+ normalized_actions = normalized_actions.float().cpu().detach().numpy()
984
+ else:
985
+ # Discrete token-based prediction
986
+ predicted_action_token_ids = (
987
+ language_model_output.logits[
988
+ :,
989
+ NUM_PATCHES + NUM_PROMPT_TOKENS : NUM_PATCHES + NUM_PROMPT_TOKENS + ACTION_DIM * NUM_ACTIONS_CHUNK,
990
+ ]
991
+ .argmax(dim=2)
992
+ .cpu()
993
+ .numpy()
994
+ )
995
+ discretized_actions = self.vocab_size - predicted_action_token_ids
996
+ discretized_actions = np.clip(discretized_actions - 1, a_min=0, a_max=self.bin_centers.shape[0] - 1)
997
+ normalized_actions = self.bin_centers[discretized_actions]
998
+ normalized_actions = normalized_actions.reshape(NUM_ACTIONS_CHUNK, ACTION_DIM)
999
+
1000
+ return normalized_actions, actions_hidden_states
1001
+
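A worked example of the discrete branch's arithmetic (the vocab size and token id are made up): the predicted token id is mapped back to a bin index, then to a bin center in [-1, 1].

import numpy as np

vocab_size, n_bins = 32000, 256                                        # assumed values
bins = np.linspace(-1, 1, n_bins)
bin_centers = (bins[:-1] + bins[1:]) / 2.0

predicted_token_id = np.array([31801])
discretized = vocab_size - predicted_token_id                          # 199
discretized = np.clip(discretized - 1, 0, bin_centers.shape[0] - 1)    # 198
print(bin_centers[discretized])                                        # normalized action for bin 198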
1002
+ def predict_action(
1003
+ self,
1004
+ input_ids: Optional[torch.LongTensor] = None,
1005
+ unnorm_key: Optional[str] = None,
1006
+ proprio=None,
1007
+ proprio_projector=None,
1008
+ action_head=None,
1009
+ noisy_action_projector=None,
1010
+ use_film: bool = False,
1011
+ **kwargs: str,
1012
+ ) -> np.ndarray:
1013
+ """Predict actions from input sequence, with options for different prediction methods.
1014
+
1015
+ Args:
1016
+ input_ids: Input token ids
1017
+ unnorm_key: Key for unnormalization statistics
1018
+ proprio: Proprioceptive features
1019
+ proprio_projector: Projector for proprioceptive features
1020
+ action_head: Optional head for L1 regression or diffusion-based prediction
1021
+ noisy_action_projector: Projector for noisy actions in diffusion-based prediction
1022
+ use_film: Whether to use FiLM conditioning
1023
+ **kwargs: Additional arguments including pixel_values and attention_mask
1024
+
1025
+ Returns:
1026
+ Tuple of (unnormalized_actions, action_hidden_states)
1027
+ """
1028
+ # If the special empty token ('') does not already appear after the colon (':') token in the prompt
1029
+ # (after "OUT:" or "ASSISTANT:"), insert it to match the inputs seen at training time
1030
+ if not torch.all(input_ids[:, -1] == 29871):
1031
+ input_ids = torch.cat(
1032
+ (input_ids, torch.unsqueeze(torch.Tensor([29871]).long(), dim=0).to(input_ids.device)), dim=1
1033
+ )
1034
+
1035
+ pixel_values = kwargs["pixel_values"]
1036
+ attention_mask = kwargs["attention_mask"]
1037
+
1038
+ # Create fake labels tensor (needed for action mask)
1039
+ labels = input_ids.clone()
1040
+ labels[:] = IGNORE_INDEX
1041
+
1042
+ # Get number of tokens in prompt (excluding the start token)
1043
+ NUM_PROMPT_TOKENS = input_ids.shape[-1] - 1 # Exclude the <BOS> token (action and stop tokens are appended below)
1044
+
1045
+ # Prepare inputs by adding necessary tokens
1046
+ input_ids, attention_mask = self._prepare_input_for_action_prediction(input_ids, attention_mask)
1047
+
1048
+ # Update labels tensor for action mask computation later
1049
+ labels = self._prepare_labels_for_action_prediction(labels, input_ids)
1050
+
1051
+ # Get input embeddings and action masks
1052
+ input_embeddings = self.get_input_embeddings()(input_ids)
1053
+ all_actions_mask = self._process_action_masks(labels)
1054
+
1055
+ # Extract language embeddings
1056
+ language_embeddings = input_embeddings[~all_actions_mask].reshape(
1057
+ input_embeddings.shape[0], -1, input_embeddings.shape[2]
1058
+ )
1059
+
1060
+ # Process vision features
1061
+ projected_patch_embeddings = self._process_vision_features(pixel_values, language_embeddings, use_film)
1062
+
1063
+ # Add proprioceptive features if provided
1064
+ use_proprio = proprio_projector is not None and proprio is not None
1065
+ if use_proprio:
1066
+ proprio = torch.Tensor(proprio).to(projected_patch_embeddings.device, dtype=projected_patch_embeddings.dtype)
1067
+ projected_patch_embeddings = self._process_proprio_features(
1068
+ projected_patch_embeddings, proprio, proprio_projector
1069
+ )
1070
+
1071
+ # Use diffusion if provided, otherwise use regression or discrete prediction
1072
+ use_diffusion = noisy_action_projector is not None and hasattr(action_head, "noise_scheduler")
1073
+
1074
+ # Calculate number of patches (including proprio token and/or diffusion timestep embedding if present)
1075
+ NUM_PATCHES = self.vision_backbone.get_num_patches() * self.vision_backbone.get_num_images_in_input()
1076
+ if use_proprio:
1077
+ NUM_PATCHES += 1
1078
+ if use_diffusion:
1079
+ NUM_PATCHES += 1
1080
+
1081
+ if use_diffusion:
1082
+ # Sample random noise with shape equal to output action, used as the starting state for reverse diffusion
1083
+ noise = torch.randn(
1084
+ size=(1, NUM_ACTIONS_CHUNK, ACTION_DIM), device=input_embeddings.device, dtype=input_embeddings.dtype
1085
+ )
1086
+
1087
+ # Run diffusion-based prediction
1088
+ normalized_actions, actions_hidden_states = self._run_diffusion_prediction(
1089
+ input_embeddings,
1090
+ all_actions_mask,
1091
+ noise,
1092
+ action_head,
1093
+ projected_patch_embeddings,
1094
+ labels,
1095
+ attention_mask,
1096
+ NUM_PATCHES,
1097
+ NUM_PROMPT_TOKENS,
1098
+ noisy_action_projector,
1099
+ )
1100
+ else:
1101
+ # Run regression or discrete token-based prediction
1102
+ normalized_actions, actions_hidden_states = self._regression_or_discrete_prediction(
1103
+ input_embeddings,
1104
+ all_actions_mask,
1105
+ projected_patch_embeddings,
1106
+ attention_mask,
1107
+ labels,
1108
+ NUM_PATCHES,
1109
+ NUM_PROMPT_TOKENS,
1110
+ action_head,
1111
+ )
1112
+
1113
+ # Unnormalize predicted actions
1114
+ actions = self._unnormalize_actions(normalized_actions, unnorm_key)
1115
+
1116
+ return actions, actions_hidden_states
1117
+
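Hedged end-to-end sketch of calling `predict_action` (the checkpoint path, prompt format, `unnorm_key`, and the AutoModelForVision2Seq registration follow upstream OpenVLA conventions and are assumptions, not something this diff verifies):

import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

ckpt = "path/to/this/checkpoint"   # placeholder
processor = AutoProcessor.from_pretrained(ckpt, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(ckpt, torch_dtype=torch.bfloat16, trust_remote_code=True).to("cuda")

image = Image.open("frame.png")    # placeholder observation
prompt = "In: What action should the robot take to pick up the cup?\nOut:"
inputs = processor(prompt, image).to("cuda", dtype=torch.bfloat16)

actions, _ = vla.predict_action(**inputs, unnorm_key="bridge_orig")   # (NUM_ACTIONS_CHUNK, ACTION_DIM)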
1118
+ @staticmethod
1119
+ def _check_unnorm_key(norm_stats: Dict[str, Dict[str, Any]], unnorm_key: Optional[str]) -> str:
1120
+ """Validate and resolve the unnormalization key for action statistics"""
1121
+ if unnorm_key is None:
1122
+ assert len(norm_stats) == 1, (
1123
+ f"Your model was trained on more than one dataset, "
1124
+ f"please pass a `unnorm_key` from the following options to choose the statistics "
1125
+ f"used for un-normalizing actions: {norm_stats.keys()}"
1126
+ )
1127
+ unnorm_key = next(iter(norm_stats.keys()))
1128
+
1129
+ assert unnorm_key in norm_stats, (
1130
+ f"The `unnorm_key` you chose is not in the set of available dataset statistics, "
1131
+ f"please choose from: {norm_stats.keys()}"
1132
+ )
1133
+ return unnorm_key
1134
+
1135
+ def get_action_dim(self, unnorm_key: Optional[str] = None) -> int:
1136
+ """Get the dimensionality of the policy's action space."""
1137
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1138
+ return len(self.norm_stats[unnorm_key]["action"]["min"])
1139
+
1140
+ def get_action_stats(self, unnorm_key: Optional[str] = None) -> Dict[str, Any]:
1141
+ """Get all the logged statistics for the given dataset."""
1142
+ unnorm_key = self._check_unnorm_key(self.norm_stats, unnorm_key)
1143
+ return self.norm_stats[unnorm_key]["action"]
preprocessor_config.json ADDED
@@ -0,0 +1,114 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "processing_prismatic.PrismaticImageProcessor",
4
+ "AutoProcessor": "processing_prismatic.PrismaticProcessor"
5
+ },
6
+ "image_processor_type": "PrismaticImageProcessor",
7
+ "image_resize_strategy": "resize-naive",
8
+ "input_sizes": [
9
+ [
10
+ 3,
11
+ 224,
12
+ 224
13
+ ],
14
+ [
15
+ 3,
16
+ 224,
17
+ 224
18
+ ]
19
+ ],
20
+ "interpolations": [
21
+ "bicubic",
22
+ "bicubic"
23
+ ],
24
+ "means": [
25
+ [
26
+ 0.485,
27
+ 0.456,
28
+ 0.406
29
+ ],
30
+ [
31
+ 0.5,
32
+ 0.5,
33
+ 0.5
34
+ ]
35
+ ],
36
+ "processor_class": "PrismaticProcessor",
37
+ "stds": [
38
+ [
39
+ 0.229,
40
+ 0.224,
41
+ 0.225
42
+ ],
43
+ [
44
+ 0.5,
45
+ 0.5,
46
+ 0.5
47
+ ]
48
+ ],
49
+ "tvf_crop_params": [
50
+ {
51
+ "output_size": [
52
+ 224,
53
+ 224
54
+ ]
55
+ },
56
+ {
57
+ "output_size": [
58
+ 224,
59
+ 224
60
+ ]
61
+ }
62
+ ],
63
+ "tvf_do_letterbox": false,
64
+ "tvf_letterbox_fill": null,
65
+ "tvf_normalize_params": [
66
+ {
67
+ "inplace": false,
68
+ "mean": [
69
+ 0.484375,
70
+ 0.455078125,
71
+ 0.40625
72
+ ],
73
+ "std": [
74
+ 0.228515625,
75
+ 0.2236328125,
76
+ 0.224609375
77
+ ]
78
+ },
79
+ {
80
+ "inplace": false,
81
+ "mean": [
82
+ 0.5,
83
+ 0.5,
84
+ 0.5
85
+ ],
86
+ "std": [
87
+ 0.5,
88
+ 0.5,
89
+ 0.5
90
+ ]
91
+ }
92
+ ],
93
+ "tvf_resize_params": [
94
+ {
95
+ "antialias": true,
96
+ "interpolation": 3,
97
+ "max_size": null,
98
+ "size": [
99
+ 224,
100
+ 224
101
+ ]
102
+ },
103
+ {
104
+ "antialias": true,
105
+ "interpolation": 3,
106
+ "max_size": null,
107
+ "size": [
108
+ 224,
109
+ 224
110
+ ]
111
+ }
112
+ ],
113
+ "use_fused_vision_backbone": true
114
+ }
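Hedged sketch of exercising this config via the `auto_map` entries above (the local checkpoint path is a placeholder): with the fused dual backbone, the two 3-channel views are channel-stacked.

from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)
batch = image_processor(Image.new("RGB", (640, 480)))
print(batch["pixel_values"].shape)   # (1, 6, 224, 224) -- two 224x224 views stacked along channels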
processing_prismatic.py ADDED
@@ -0,0 +1,257 @@
1
+ """
2
+ processing_prismatic.py
3
+
4
+ HuggingFace-style preprocessor definitions for Prismatic VLMs, inheriting from `ProcessorMixin`. Default configuration
5
+ specifies `siglip-224px+7b`.
6
+ """
7
+
8
+ from typing import Any, ClassVar, List, Optional, Tuple, Union
9
+
10
+ import timm.data
11
+ import torch
12
+ import torchvision.transforms.functional as TVF
13
+ from PIL import Image
14
+ from torchvision.transforms import CenterCrop, Compose, Normalize, Resize, ToTensor
15
+ from transformers import PreTrainedTokenizerBase
16
+ from transformers.image_processing_utils import BatchFeature, ImageProcessingMixin
17
+ from transformers.processing_utils import ProcessorMixin
18
+ from transformers.tokenization_utils import PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
19
+ from transformers.utils import TensorType
20
+
21
+
22
+ # === Image Processing ===
23
+ def letterbox_pad_transform(image: Image.Image, padding_fill_value: Tuple[int, int, int]) -> Image.Image:
24
+ """Given a PIL.Image, pad to square by adding a symmetric border around the height/width."""
25
+ (w, h), max_wh = image.size, max(image.size)
26
+ horizontal_pad, vertical_pad = int((max_wh - w) / 2), int((max_wh - h) / 2)
27
+ padding = (horizontal_pad, vertical_pad, horizontal_pad, vertical_pad)
28
+
29
+ return TVF.pad(image, padding, fill=padding_fill_value, padding_mode="constant")
30
+
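A quick check of the padding arithmetic above (the fill color is chosen arbitrarily): a 640x480 image gets an 80-pixel border on top and bottom and comes out square.

from PIL import Image

img = Image.new("RGB", (640, 480))
padded = letterbox_pad_transform(img, (127, 127, 127))
print(padded.size)   # (640, 640)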
31
+
32
+ class PrismaticImageProcessor(ImageProcessingMixin):
33
+ model_input_names: ClassVar[List[str]] = ["pixel_values"]
34
+
35
+ def __init__(
36
+ self,
37
+ use_fused_vision_backbone: bool = False,
38
+ image_resize_strategy: str = "letterbox",
39
+ input_sizes: Optional[List[Tuple[int, int, int]]] = None,
40
+ interpolations: Optional[List[str]] = None,
41
+ means: Optional[List[Tuple[float, float, float]]] = None,
42
+ stds: Optional[List[Tuple[float, float, float]]] = None,
43
+ **kwargs: str,
44
+ ) -> None:
45
+ """
46
+ Initialize a PrismaticImageProcessor as a wrapper around a torchvision transform; this transform will be
47
+ created by TIMM, and edited to follow our custom `image_resize_strategy` logic.
48
+
49
+ @param use_fused_vision_backbone: Boolean indicating single or fused (dual) vision backbone
50
+ @param image_resize_strategy: Prismatic image resize strategy in < resize-naive | resize-crop | letterbox >
51
+ @param input_sizes: [TIMM :: `data_cfg`] Input image sizes as (channels, height, width) tuples (one per backbone)
52
+ @param interpolations: [TIMM :: `data_cfg`] Interpolation modes as strings (default: "bicubic"), one per backbone
53
+ @param means: [TIMM :: `data_cfg`] Normalization means as float tuples (two entries if `use_fused_vision_backbone`)
54
+ @param stds: [TIMM :: `data_cfg`] Normalization stds as float tuples (two entries if `use_fused_vision_backbone`)
55
+ """
56
+ self.use_fused_vision_backbone = use_fused_vision_backbone
57
+ self.image_resize_strategy = image_resize_strategy
58
+
59
+ # Handle `None` default values
60
+ input_sizes = [(3, 224, 224)] if input_sizes is None else input_sizes
61
+ means = [(0.5, 0.5, 0.5)] if means is None else means
62
+ stds = [(0.5, 0.5, 0.5)] if stds is None else stds
63
+
64
+ # TIMM `data_cfg` Parameters
65
+ self.input_sizes, self.interpolations, self.means, self.stds = input_sizes, interpolations, means, stds
66
+
67
+ # Grab torchvision transforms via TIMM =>> need to parse for specific "functional" transform values!
68
+ self.tvf_resize_params, self.tvf_crop_params, self.tvf_normalize_params = [], [], []
69
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = False, None
70
+
71
+ for idx in range(len(input_sizes)):
72
+ transform = timm.data.create_transform(
73
+ input_size=self.input_sizes[idx],
74
+ interpolation=self.interpolations[idx],
75
+ mean=self.means[idx],
76
+ std=self.stds[idx],
77
+ crop_pct=1.0, # Set to 1.0 to ignore cropping (initial Resize sets `input_size`)
78
+ crop_mode="center", # Default crop mode -- no-op when `crop_pct == 1.0`
79
+ is_training=False, # No image augmentations when loading the transform!
80
+ )
81
+
82
+ # [Validation] Ensure appropriate transform structure, expected sizes
83
+ if not (
84
+ isinstance(transform, Compose)
85
+ and (len(transform.transforms) == 4)
86
+ and isinstance(transform.transforms[0], Resize)
87
+ and isinstance(transform.transforms[1], CenterCrop)
88
+ and isinstance(transform.transforms[2], ToTensor)
89
+ and isinstance(transform.transforms[3], Normalize)
90
+ and (transform.transforms[0].size == self.input_sizes[idx][-1])
91
+ and (transform.transforms[1].size == self.input_sizes[idx][-2:])
92
+ ):
93
+ raise ValueError(f"Unexpected TIMM image transformation structure/sizes: `{transform}`")
94
+
95
+ # HF Image Processors *must* be JSON-serializable; as such, cannot have torchvision. as an attribute.
96
+ # => Instead, we're going to parse the transform and call "torchvision.transforms.functional" (`tvf`)
97
+ resize_t, crop_t, norm_t = transform.transforms[0], transform.transforms[1], transform.transforms[3]
98
+ self.tvf_resize_params.append(
99
+ {
100
+ "size": resize_t.size,
101
+ "interpolation": TVF.pil_modes_mapping[resize_t.interpolation],
102
+ "max_size": None,
103
+ "antialias": True,
104
+ }
105
+ )
106
+ self.tvf_crop_params.append({"output_size": crop_t.size})
107
+ self.tvf_normalize_params.append(
108
+ {
109
+ "mean": norm_t.mean.float().numpy().tolist(),
110
+ "std": norm_t.std.float().numpy().tolist(),
111
+ "inplace": False,
112
+ }
113
+ )
114
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = False, None
115
+
116
+ # Handle Prismatic `image_resize_strategy`
117
+ if self.image_resize_strategy == "resize-naive":
118
+ self.tvf_resize_params[idx]["size"] = (resize_t.size, resize_t.size)
119
+ elif self.image_resize_strategy == "letterbox":
120
+ self.tvf_do_letterbox, self.tvf_letterbox_fill = True, tuple([int(x * 255) for x in self.means[idx]])
121
+ elif self.image_resize_strategy == "resize-crop":
122
+ pass
123
+ else:
124
+ raise ValueError(f"Image resize strategy `{self.image_resize_strategy}` is not supported!")
125
+
126
+ # Dispatch **kwargs to super()
127
+ super().__init__(**kwargs)
128
+
129
+ def apply_transform(self, img: Image.Image) -> torch.Tensor:
130
+ """Apply `functional` variant of TIMM's Transform = Compose([Resize -> CenterCrop -> ToTensor -> Normalize])"""
131
+ if self.tvf_do_letterbox:
132
+ img = letterbox_pad_transform(img, self.tvf_letterbox_fill)
133
+
134
+ # [Contract] Fused Backbones expect "channel-stacked" inputs; we'll unpack on the model side!
135
+ imgs_t = []
136
+ for idx in range(len(self.input_sizes)):
137
+ img_idx = TVF.resize(img, **self.tvf_resize_params[idx])
138
+ img_idx = TVF.center_crop(img_idx, **self.tvf_crop_params[idx])
139
+ img_idx_t = TVF.to_tensor(img_idx)
140
+ img_idx_t = TVF.normalize(img_idx_t, **self.tvf_normalize_params[idx])
141
+ imgs_t.append(img_idx_t)
142
+
143
+ # [Contract] `imgs_t` is a list of Tensors of shape [3, input_size, input_size]; stack along dim = 0
144
+ img_t = torch.vstack(imgs_t)
145
+
146
+ return img_t
147
+
148
+ def preprocess(
149
+ self,
150
+ images: Union[Image.Image, List[Image.Image]],
151
+ return_tensors: Optional[Union[str, TensorType]] = None,
152
+ **_: str,
153
+ ) -> BatchFeature:
154
+ """
155
+ Preprocess an image (or batch of images); note that unlike the `transformers :: BaseImageProcessor` we
156
+ explicitly only handle PIL.Image.Image instances for simplicity.
157
+
158
+ @param images: A (batch of) PIL.Image.Image instance(s) to preprocess.
159
+ @param return_tensors: BatchFeature default Tensor format (e.g., "pt" for torch); if None, returns np.ndarray
160
+
161
+ @return: Instance of `transformers :: BatchFeature` with a single key "pixel_values"
162
+ """
163
+ if not isinstance(images, list):
164
+ images = [images]
165
+
166
+ # Apply `self.img_transform` to each image (will return list of torch.Tensors); stack into "batched" Tensor
167
+ pixel_values = torch.stack([self.apply_transform(img.convert("RGB")) for img in images])
168
+
169
+ # Return BatchFeature =>> note that for compatibility, constructor expects Dict[str, np.ndarray], so we convert
170
+ return BatchFeature(data={"pixel_values": pixel_values.float().numpy()}, tensor_type=return_tensors)
171
+
172
+ def __call__(self, images: Union[Image.Image, List[Image.Image]], **kwargs) -> BatchFeature:
173
+ return self.preprocess(images, **kwargs)
174
+
175
+
176
+ # === PrismaticProcessor =>> Wraps both ImageProcessor and Tokenizer ===
177
+ # =>> https://github.com/huggingface/transformers/blob/main/src/transformers/models/llava/processing_llava.py
178
+ class PrismaticProcessor(ProcessorMixin):
179
+ attributes: ClassVar[List[str]] = ["image_processor", "tokenizer"]
180
+ image_processor_class: str = "AutoImageProcessor"
181
+ tokenizer_class: str = "AutoTokenizer"
182
+
183
+ def __init__(
184
+ self,
185
+ image_processor: Optional[ImageProcessingMixin] = None,
186
+ tokenizer: Optional[PreTrainedTokenizerBase] = None,
187
+ ) -> None:
188
+ super().__init__(image_processor, tokenizer)
189
+
190
+ def __call__(
191
+ self,
192
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
193
+ images: Union[Image.Image, List[Image.Image]],
194
+ padding: Union[bool, str, PaddingStrategy] = False,
195
+ truncation: Optional[Union[bool, str, TruncationStrategy]] = None,
196
+ max_length: Optional[int] = None,
197
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
198
+ ) -> BatchFeature:
199
+ """
200
+ Preprocess a given (batch) of text/images for a Prismatic VLM; forwards text to the underlying LLM's tokenizer,
201
+ forwards images to PrismaticImageProcessor.
202
+
203
+ @param text: The (batch) of text to encode; must be a string or list of strings.
204
+ @param images: A (batch of) PIL.Image.Image instance(s) to preprocess.
205
+ @param padding: Sequence padding strategy (if multiple specified) in < True = "longest" | "max_length" | False >
206
+ @param truncation: Truncation strategy for the output sequences; requires `max_length` to be specified
207
+ @param max_length: Maximum length (in tokens) to truncate
208
+ @param return_tensors: Type of return tensors (usually "pt" or TensorType.PYTORCH)
209
+
210
+ @return: BatchFeature with keys for `input_ids`, `attention_mask` and `pixel_values`.
211
+ """
212
+ pixel_values = self.image_processor(images, return_tensors=return_tensors)["pixel_values"]
213
+ text_inputs = self.tokenizer(
214
+ text, return_tensors=return_tensors, padding=padding, truncation=truncation, max_length=max_length
215
+ )
216
+
217
+ # [Validate] Need same number of images and text inputs!
218
+ if pixel_values.shape[0] != text_inputs.input_ids.shape[0]:
219
+ raise ValueError("Batch is malformed; expected same number of images and text inputs!")
220
+
221
+ return BatchFeature(data={**text_inputs, "pixel_values": pixel_values})
222
+
223
+ # === Tokenizer Dispatch Utilities =>> check `PreTrainedTokenizerBase` for documentation ===
224
+ def batch_decode(
225
+ self,
226
+ sequences: Union[List[int], List[List[int]], torch.Tensor, Any], # `Any` = np.ndarray | tf.Tensor
227
+ skip_special_tokens: bool = False,
228
+ clean_up_tokenization_spaces: Optional[bool] = None,
229
+ **kwargs: str,
230
+ ) -> List[str]:
231
+ return self.tokenizer.batch_decode(
232
+ sequences=sequences,
233
+ skip_special_tokens=skip_special_tokens,
234
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
235
+ **kwargs,
236
+ )
237
+
238
+ def decode(
239
+ self,
240
+ token_ids: Union[int, List[int], torch.Tensor, Any], # `Any` = np.ndarray | tf.Tensor
241
+ skip_special_tokens: bool = False,
242
+ clean_up_tokenization_spaces: Optional[bool] = None,
243
+ **kwargs: str,
244
+ ) -> str:
245
+ return self.tokenizer.decode(
246
+ token_ids=token_ids,
247
+ skip_special_tokens=skip_special_tokens,
248
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
249
+ **kwargs,
250
+ )
251
+
252
+ @property
253
+ def model_input_names(self) -> List[str]:
254
+ tokenizer_input_names = self.tokenizer.model_input_names
255
+ image_processor_input_names = self.image_processor.model_input_names
256
+
257
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
processor_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_prismatic.PrismaticProcessor"
4
+ },
5
+ "processor_class": "PrismaticProcessor"
6
+ }
proprio_projector--60000_checkpoint.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:da5ffe5dbb1757337d81d078db1e79da9e2789d58ee8dc772ade0d564af18ae6
3
+ size 67275184
pruned_llama.py ADDED
@@ -0,0 +1,252 @@
1
+ """
2
+ pruned_llama.py
3
+
4
+ Pruned Llama model with per-layer variable dimensions, for loading models pruned with SlimLLM.
5
+ Each layer may have its own num_heads and intermediate_size.
6
+
7
+ Based on the transformers Llama implementation, subclassing and overriding the key components.
8
+ """
9
+
10
+ import torch
11
+ import torch.nn as nn
12
+ from typing import List, Optional, Tuple, Union
13
+ import logging
14
+
15
+ from transformers import LlamaConfig, LlamaPreTrainedModel
16
+ from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
17
+ from transformers.models.llama.modeling_llama import (
18
+ LlamaRMSNorm,
19
+ LlamaRotaryEmbedding,
20
+ LlamaMLP,
21
+ LlamaAttention,
22
+ LlamaDecoderLayer,
23
+ LlamaModel,
24
+ LlamaForCausalLM,
25
+ apply_rotary_pos_emb,
26
+ repeat_kv,
27
+ )
28
+
29
+ try:
30
+ from transformers.cache_utils import Cache
31
+ except ImportError:
32
+ Cache = None
33
+
34
+ logger = logging.getLogger(__name__)
35
+
36
+
37
+ class PrunedLlamaConfig(LlamaConfig):
38
+ """
39
+ Extends LlamaConfig to support different dimensions per layer.
40
+ """
41
+ model_type = "pruned_llama"
42
+
43
+ def __init__(
44
+ self,
45
+ pruned_num_heads: Optional[List[int]] = None,
46
+ pruned_intermediate_sizes: Optional[List[int]] = None,
47
+ **kwargs
48
+ ):
49
+ super().__init__(**kwargs)
50
+
51
+ # Record the actual post-pruning dimensions of each layer
52
+ self.pruned_num_heads = pruned_num_heads or []
53
+ self.pruned_intermediate_sizes = pruned_intermediate_sizes or []
54
+
55
+ # Mark this as a pruned model
56
+ self.is_pruned = True
57
+
58
+ def get_attention_feature_size(self, layer_idx: int) -> int:
59
+ """获取指定层的 attention 特征大小"""
60
+ if layer_idx < len(self.pruned_num_heads):
61
+ num_heads = self.pruned_num_heads[layer_idx]
62
+ head_dim = self.hidden_size // self.num_attention_heads
63
+ return num_heads * head_dim
64
+ return self.hidden_size # Default: return the original size
65
+
66
+ def get_intermediate_size(self, layer_idx: int) -> int:
67
+ """获取指定层的 MLP 中间层大小"""
68
+ if layer_idx < len(self.pruned_intermediate_sizes):
69
+ return self.pruned_intermediate_sizes[layer_idx]
70
+ return self.intermediate_size # Default: return the original size
71
+
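A hypothetical per-layer pruning spec to illustrate the two helpers above: head_dim stays hidden_size // num_attention_heads, so a layer that keeps 24 of 32 heads has an attention feature size of 24 * 128 = 3072 (all values below are made up).

cfg = PrunedLlamaConfig(
    hidden_size=4096,
    num_attention_heads=32,
    num_hidden_layers=2,
    intermediate_size=11008,
    pruned_num_heads=[24, 32],
    pruned_intermediate_sizes=[8192, 11008],
)
print(cfg.get_attention_feature_size(0))   # 3072
print(cfg.get_intermediate_size(0))        # 8192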
72
+
73
+ class PrunedLlamaAttention(LlamaAttention):
74
+ """
75
+ Pruned version of LlamaAttention that supports a per-layer attention feature dimension.
76
+ Inherits from LlamaAttention and uses layer-specific dimensions.
77
+ """
78
+
79
+ def __init__(self, config: PrunedLlamaConfig, layer_idx: Optional[int] = None):
80
+ super().__init__(config, layer_idx)
81
+
82
+ if layer_idx is None:
83
+ raise ValueError("layer_idx must be provided for PrunedLlamaAttention")
84
+
85
+ # Use the layer-specific attention feature dimension
86
+ attention_feature_size = config.get_attention_feature_size(layer_idx)
87
+
88
+ # Compute the actual number of heads after pruning
89
+ self.pruned_num_heads = attention_feature_size // self.head_dim
90
+
91
+ # Keep the original ratio of key-value heads to query heads
92
+ key_value_ratio = self.num_key_value_heads / self.num_heads
93
+ self.pruned_num_key_value_heads = max(1, int(self.pruned_num_heads * key_value_ratio))
94
+
95
+ # Recompute the key-value feature size
96
+ key_value_feature_size = self.pruned_num_key_value_heads * self.head_dim
97
+
98
+ # Compute the number of key-value groups after pruning
99
+ self.pruned_num_key_value_groups = self.pruned_num_heads // self.pruned_num_key_value_heads
100
+
101
+ # Re-create the projection layers with the pruned feature dimensions
102
+ self.q_proj = nn.Linear(self.hidden_size, attention_feature_size, bias=config.attention_bias)
103
+ self.k_proj = nn.Linear(self.hidden_size, key_value_feature_size, bias=config.attention_bias)
104
+ self.v_proj = nn.Linear(self.hidden_size, key_value_feature_size, bias=config.attention_bias)
105
+ self.o_proj = nn.Linear(attention_feature_size, self.hidden_size, bias=config.attention_bias)
106
+
107
+ # Re-initialize the rotary embedding (compatible with different transformers versions)
108
+ try:
109
+ self.rotary_emb = LlamaRotaryEmbedding(config=self.config)
110
+ except TypeError:
111
+ # Older versions take individual arguments
112
+ self.rotary_emb = LlamaRotaryEmbedding(
113
+ self.head_dim,
114
+ max_position_embeddings=self.max_position_embeddings,
115
+ base=self.rope_theta,
116
+ )
117
+
118
+ def forward(
119
+ self,
120
+ hidden_states: torch.Tensor,
121
+ attention_mask: Optional[torch.Tensor] = None,
122
+ position_ids: Optional[torch.LongTensor] = None,
123
+ past_key_value: Optional[Cache] = None,
124
+ output_attentions: bool = False,
125
+ use_cache: bool = False,
126
+ cache_position: Optional[torch.LongTensor] = None,
127
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
128
+ **kwargs,
129
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
130
+ bsz, q_len, _ = hidden_states.size()
131
+
132
+ query_states = self.q_proj(hidden_states)
133
+ key_states = self.k_proj(hidden_states)
134
+ value_states = self.v_proj(hidden_states)
135
+
136
+ # Use -1 to infer the number of heads (supports tensor parallelism)
137
+ query_states = query_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
138
+ key_states = key_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
139
+ value_states = value_states.view(bsz, q_len, -1, self.head_dim).transpose(1, 2)
140
+
141
+ # RoPE
142
+ if position_embeddings is None:
143
+ cos, sin = self.rotary_emb(value_states, position_ids)
144
+ else:
145
+ cos, sin = position_embeddings
146
+
147
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
148
+
149
+ if past_key_value is not None:
150
+ # sin and cos are specific to RoPE models; cache_position is needed for the static cache
151
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
152
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
153
+
154
+ # Repeat k/v heads if needed
155
+ key_states = repeat_kv(key_states, self.pruned_num_key_value_groups)
156
+ value_states = repeat_kv(value_states, self.pruned_num_key_value_groups)
157
+
158
+ # Causal mask
159
+ causal_mask = attention_mask
160
+ if attention_mask is not None:
161
+ causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
162
+
163
+ # Workaround for an SDPA memory-efficient-backend bug with non-contiguous inputs
164
+ if query_states.device.type == "cuda" and causal_mask is not None:
165
+ query_states = query_states.contiguous()
166
+ key_states = key_states.contiguous()
167
+ value_states = value_states.contiguous()
168
+
169
+ # Use scaled_dot_product_attention
170
+ is_causal = True if causal_mask is None and q_len > 1 else False
171
+
172
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
173
+ query_states,
174
+ key_states,
175
+ value_states,
176
+ attn_mask=causal_mask,
177
+ dropout_p=self.attention_dropout if self.training else 0.0,
178
+ is_causal=is_causal,
179
+ )
180
+
181
+ attn_output = attn_output.transpose(1, 2).contiguous()
182
+ attn_output = attn_output.view(bsz, q_len, -1)
183
+ attn_output = self.o_proj(attn_output)
184
+
185
+ return attn_output, None, past_key_value
186
+
187
+
188
+ class PrunedLlamaMLP(LlamaMLP):
189
+ """
190
+ Pruned version of LlamaMLP that supports a per-layer intermediate size.
191
+ Inherits from LlamaMLP and uses the layer-specific intermediate_size.
192
+ """
193
+
194
+ def __init__(self, config: PrunedLlamaConfig, layer_idx: int):
195
+ super().__init__(config)
196
+
197
+ # Use the layer-specific intermediate dimension
198
+ self.intermediate_size = config.get_intermediate_size(layer_idx)
199
+
200
+ # Re-create the linear layers with the pruned intermediate dimension
201
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
202
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
203
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
204
+
205
+
206
+ class PrunedLlamaDecoderLayer(LlamaDecoderLayer):
207
+ """
208
+ Pruned version of LlamaDecoderLayer; each layer may have different feature dimensions.
209
+ Inherits from LlamaDecoderLayer and uses PrunedLlamaAttention and PrunedLlamaMLP.
210
+ """
211
+
212
+ def __init__(self, config: PrunedLlamaConfig, layer_idx: int):
213
+ super().__init__(config, layer_idx)
214
+
215
+ # Override the attention and MLP modules with their pruned versions
216
+ self.self_attn = PrunedLlamaAttention(config=config, layer_idx=layer_idx)
217
+ self.mlp = PrunedLlamaMLP(config=config, layer_idx=layer_idx)
218
+
219
+
220
+ class PrunedLlamaModel(LlamaModel):
221
+ """
222
+ Pruned version of LlamaModel; each layer may have different feature dimensions.
223
+ Inherits from LlamaModel and uses PrunedLlamaDecoderLayer.
224
+ """
225
+
226
+ def __init__(self, config: PrunedLlamaConfig):
227
+ super().__init__(config)
228
+
229
+ # Rebuild the layers with the pruned decoder layers
230
+ self.layers = nn.ModuleList(
231
+ [PrunedLlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
232
+ )
233
+
234
+ # Re-initialize weights
235
+ self.post_init()
236
+
237
+
238
+ class PrunedLlamaForCausalLM(LlamaForCausalLM):
239
+ """
240
+ Pruned version of LlamaForCausalLM.
241
+ Inherits from LlamaForCausalLM and uses PrunedLlamaModel.
242
+ """
243
+ config_class = PrunedLlamaConfig
244
+
245
+ def __init__(self, config: PrunedLlamaConfig):
246
+ super().__init__(config)
247
+
248
+ # Override the backbone model with the pruned version
249
+ self.model = PrunedLlamaModel(config)
250
+
251
+ # Re-initialize weights
252
+ self.post_init()
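Hedged loading sketch (the checkpoint path is a placeholder; registration uses the standard transformers Auto API so that `model_type = "pruned_llama"` resolves to these classes):

from transformers import AutoConfig, AutoModelForCausalLM

AutoConfig.register("pruned_llama", PrunedLlamaConfig)
AutoModelForCausalLM.register(PrunedLlamaConfig, PrunedLlamaForCausalLM)

model = AutoModelForCausalLM.from_pretrained("path/to/pruned-llama-checkpoint")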
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<s>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "</s>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<PAD>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<unk>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
3
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,53 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "32000": {
30
+ "content": "<PAD>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ }
37
+ },
38
+ "auto_map": {
39
+ "AutoProcessor": "processing_prismatic.PrismaticProcessor"
40
+ },
41
+ "bos_token": "<s>",
42
+ "clean_up_tokenization_spaces": false,
43
+ "eos_token": "</s>",
44
+ "legacy": false,
45
+ "model_max_length": 2048,
46
+ "pad_token": "<PAD>",
47
+ "padding_side": "right",
48
+ "processor_class": "PrismaticProcessor",
49
+ "sp_model_kwargs": {},
50
+ "tokenizer_class": "LlamaTokenizer",
51
+ "unk_token": "<unk>",
52
+ "use_default_system_prompt": false
53
+ }