Sombit committed
Commit bc967b5 · verified · 1 Parent(s): 6d71bd7

Upload config

Files changed (3):
  1. README.md +199 -0
  2. config.json +63 -0
  3. prismatic_config.py +307 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "auto_map": {
+     "AutoConfig": "prismatic_config.TrajectoryVLAConfig"
+   },
+   "cheat": false,
+   "num_timesteps": 6,
+   "prismatic_config": {
+     "architectures": [
+       "TrajectoryVLA"
+     ],
+     "auto_map": {
+       "AutoModelForVision2Seq": "prismatic_model.TrajectoryVLA"
+     },
+     "model_type": "prismatic",
+     "return_dict": false,
+     "torch_dtype": "bfloat16"
+   },
+   "rotation_components": 9,
+   "seperate_control_proj": true,
+   "timestep_proj_config": {
+     "num_tokens": 3,
+     "pos_embed_scale": 8,
+     "proj_layers": [
+       128,
+       512,
+       1024
+     ],
+     "time_delta_sec": 0.1
+   },
+   "token_proj_config": {
+     "control_tokens_layers": [
+       4096,
+       2048,
+       1024
+     ],
+     "image_tokens_mode": "vit",
+     "llm_image_tokens_layers": [],
+     "vit_tokens_layers": [
+       2176,
+       1024
+     ]
+   },
+   "token_size": 1024,
+   "transformer_config": {
+     "decoder_block_config": {
+       "dropout": 0.0,
+       "feature_size": 1024,
+       "head_dim": 64,
+       "num_heads": 16
+     },
+     "encoder_block_config": {
+       "feature_size": 1024,
+       "head_dim": 64,
+       "num_heads": 16
+     },
+     "num_blocks": 2,
+     "pos_embed_config": {
+       "embedding_dim": 1024,
+       "num_embeddings": 300
+     }
+   },
+   "transformers_version": "4.44.2"
+ }
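Because `auto_map` routes `AutoConfig` to `prismatic_config.TrajectoryVLAConfig`, this config is meant to be loaded with `trust_remote_code=True`. A minimal sketch of loading it, assuming a placeholder repo id (`Sombit/trajectory-vla` is hypothetical, not confirmed by this commit):

```python
from transformers import AutoConfig

# Hypothetical repo id -- substitute the Hub repo this commit was pushed to.
config = AutoConfig.from_pretrained(
    "Sombit/trajectory-vla",
    trust_remote_code=True,  # lets `auto_map` resolve prismatic_config.TrajectoryVLAConfig
)

# Top-level fields from config.json become attributes on the loaded config.
print(config.num_timesteps)        # 6
print(config.rotation_components)  # 9 -> rotation matrices
print(config.token_size)           # 1024
```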
prismatic_config.py ADDED
@@ -0,0 +1,307 @@
+ """
+ configuration_prismatic.py
+
+ HuggingFace-style configuration definition for Prismatic VLMs, inheriting from `transformers.PretrainedConfig`.
+ Default configuration specifies `siglip-224px+7b`.
+ """
+
+ from typing import Any, Dict, List, Optional
+
+ import numpy as np
+ import transformers
+ from transformers import PretrainedConfig
+ from transformers.models.auto import CONFIG_MAPPING
+
+ # === Utilities for Mapping Prismatic names to HF names ===
+ # fmt: off
+ VISION_BACKBONE_TO_RESOLUTION: Dict[str, List[int]] = {
+     "clip-vit-l": [224], "siglip-vit-so400m": [224], "dinov2-vit-l": [224], "in1k-vit-l": [224],
+
+     "clip-vit-l-336px": [336],
+     "siglip-vit-so400m-384px": [384],
+
+     "dinoclip-vit-l-336px": [336, 336],
+     "dinosiglip-vit-so-224px": [224, 224],
+     "dinosiglip-vit-so-384px": [384, 384],
+ }
+ VISION_BACKBONE_TO_TIMM_ID: Dict[str, List[str]] = {
+     "clip-vit-l": ["vit_large_patch14_clip_224.openai"],
+     "clip-vit-l-336px": ["vit_large_patch14_clip_336.openai"],
+
+     "dinov2-vit-l": ["vit_large_patch14_reg4_dinov2.lvd142m"],
+     "in1k-vit-l": ["vit_large_patch16_224.augreg_in21k_ft_in1k"],
+
+     "siglip-vit-so400m": ["vit_so400m_patch14_siglip_224"],
+     "siglip-vit-so400m-384px": ["vit_so400m_patch14_siglip_384"],
+
+     "dinoclip-vit-l-336px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_large_patch14_clip_336.openai"],
+     "dinosiglip-vit-so-224px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_224"],
+     "dinosiglip-vit-so-384px": ["vit_large_patch14_reg4_dinov2.lvd142m", "vit_so400m_patch14_siglip_384"],
+ }
+ TIMM_OVERRIDE_ACT_LAYER: Dict[str, List[Optional[str]]] = {
+     "clip-vit-l": ["quick_gelu"], "clip-vit-l-336px": ["quick_gelu"],
+     "dinov2-vit-l": [None], "in1k-vit-l": [None],
+     "siglip-vit-so400m": [None], "siglip-vit-so400m-384px": [None],
+     "dinoclip-vit-l-336px": [None, "quick_gelu"],
+     "dinosiglip-vit-so-224px": [None, None], "dinosiglip-vit-so-384px": [None, None],
+ }
+
+ LLM_BACKBONE_TO_HF_PATH = {
+     "llama2-7b-pure": "meta-llama/Llama-2-7b-hf", "llama2-13b-pure": "meta-llama/Llama-2-13b-hf",
+     "llama2-7b-chat": "meta-llama/Llama-2-7b-chat-hf", "llama2-13b-chat": "meta-llama/Llama-2-13b-chat-hf",
+
+     "vicuna-v15-7b": "lmsys/vicuna-7b-v1.5", "vicuna-v15-13b": "lmsys/vicuna-13b-v1.5",
+
+     "mistral-v0.1-7b-pure": "mistralai/Mistral-7B-v0.1",
+     "mistral-v0.1-7b-instruct": "mistralai/Mistral-7B-Instruct-v0.1",
+
+     "phi-2-3b": "microsoft/phi-2",
+ }
+ LLM_BACKBONE_TO_HF_METACLASS = {
+     "llama2-7b-pure": "llama", "llama2-13b-pure": "llama", "llama2-7b-chat": "llama", "llama2-13b-chat": "llama",
+     "vicuna-v15-7b": "llama", "vicuna-v15-13b": "llama",
+
+     "mistral-v0.1-7b-pure": "mistral", "mistral-v0.1-7b-instruct": "mistral",
+
+     "phi-2-3b": "phi",
+ }
+
+ VALID_VISION_BACKBONES = set(VISION_BACKBONE_TO_RESOLUTION.keys())
+ VALID_LLM_BACKBONES = set(LLM_BACKBONE_TO_HF_PATH)
+ # fmt: on
+
+ class WaypointTokenizer:
+     """
+     Wraps the base LLM/VLM tokenizer and overloads the least used tokens as control tokens.
+
+     NOTE: By default, assumes a BPE-style tokenizer akin to the LlamaTokenizer,
+     where *the least used tokens* appear at the end of the vocabulary!
+
+     TODO: Adding new tokens vs. overloading? When calling `tokenizer.add_tokens()`, `vocab_size` stays the same.
+     """
+
+     def __init__(self, tokenizer: transformers.PreTrainedTokenizerBase, num_tokens: int = 10) -> None:
+         self.tokenizer = tokenizer
+         self.num_tokens = num_tokens
+
+     def __call__(self, *_) -> str:
+         """Get the text tokens used for control."""
+         return self.tokenizer.decode(self.control_token_ids)
+
+     @property
+     def control_token_ids(self) -> np.ndarray:
+         # Assumes we're overwriting the final tokens of the vocabulary (the least used tokens)
+         return np.arange(self.num_tokens) + int(self.tokenizer.vocab_size - self.num_tokens)
+
+     @property
+     def num_control_tokens(self) -> int:
+         return self.num_tokens
+
+ class PrismaticConfig(PretrainedConfig):
100
+ model_type: str = "prismatic"
101
+ is_composition: bool = False
102
+
103
+ def __init__(
104
+ self,
105
+ vision_backbone_id: str = "dinosiglip-vit-so-224px",
106
+ llm_backbone_id: str = "llama2-7b-pure",
107
+ arch_specifier: str = "no-align+gelu-mlp", ## TODO: check
108
+ use_fused_vision_backbone: Optional[bool] = None, ## TODO: check
109
+ image_resize_strategy: str = "letterbox",
110
+ text_config: Optional[Dict[str, Any]] = None,
111
+ llm_max_length: int = 2048,
112
+ pad_token_id: int = 32000,
113
+ pad_to_multiple_of: int = 64,
114
+ output_projector_states: bool = False,
115
+ **kwargs: str,
116
+ ) -> None:
117
+ if vision_backbone_id not in VALID_VISION_BACKBONES:
118
+ raise ValueError(f"Vision backbone `{vision_backbone_id}` not in {VALID_VISION_BACKBONES = }")
119
+
120
+ if llm_backbone_id not in VALID_LLM_BACKBONES:
121
+ raise ValueError(f"LLM backbone `{llm_backbone_id}` not in {VALID_LLM_BACKBONES = }")
122
+
123
+ # Set Prismatic Configuration Fields
124
+ self.vision_backbone_id = vision_backbone_id
125
+ self.llm_backbone_id = llm_backbone_id
126
+ self.arch_specifier = arch_specifier
127
+ self.output_projector_states = output_projector_states
128
+
129
+ # [Contract] All vision backbone parameters are lists =>> supports fused backbones with different preprocessing
130
+ self.use_fused_vision_backbone = (
131
+ use_fused_vision_backbone
132
+ if use_fused_vision_backbone is not None
133
+ else any(self.vision_backbone_id.startswith(v) for v in ["dinoclip", "dinosiglip"])
134
+ )
135
+
136
+ self.timm_model_ids = VISION_BACKBONE_TO_TIMM_ID[self.vision_backbone_id]
137
+ self.timm_override_act_layers = TIMM_OVERRIDE_ACT_LAYER[self.vision_backbone_id]
138
+ self.image_sizes = VISION_BACKBONE_TO_RESOLUTION[self.vision_backbone_id]
139
+ self.image_resize_strategy = image_resize_strategy
140
+
141
+ self.hf_llm_id = LLM_BACKBONE_TO_HF_PATH[self.llm_backbone_id]
142
+ self.llm_max_length = llm_max_length
143
+ self.pad_token_id, self.pad_to_multiple_of = pad_token_id, pad_to_multiple_of
144
+
145
+ # [IMPORTANT] HF Utilities actually look for a `text_config` field... we need to use that specific naming!
146
+ self.text_config = (
147
+ CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]](**text_config)
148
+ if text_config is not None
149
+ else CONFIG_MAPPING[LLM_BACKBONE_TO_HF_METACLASS[self.llm_backbone_id]]()
150
+ )
151
+
152
+ # Dispatch **kwargs to super() =>> note that `pad_token_id` collides, so we pass it in here as well...
153
+ super().__init__(pad_token_id=pad_token_id, **kwargs)
154
+
+ # TrajectoryVLAConfig composes the PrismaticConfig fields with the waypointer fields.
+ class TrajectoryVLAConfig(PretrainedConfig):
+
+     def __init__(
+         self,
+         prismatic_config: Optional[Dict[str, Any]] = None,
+         token_size: int = 1024,              # Timestep token size
+         cheat: bool = False,                 # If True, cheat and use action tokens; works only with an OpenVLA checkpoint
+         num_timesteps: int = 20,             # Number of prediction time steps
+         rotation_components: int = 9,        # Number of rotation components: euler -> 3, quaternion -> 4, rotmat -> 9
+         seperate_control_proj: bool = True,  # If True, project control components separately
+         timestep_proj_config: Optional[Dict[str, Any]] = None,
+         token_proj_config: Optional[Dict[str, Any]] = None,
+         transformer_config: Optional[Dict[str, Any]] = None,
+         **kwargs: str,
+     ) -> None:
+         self.prismatic_config = PrismaticConfig(**(prismatic_config or {}))
+
+         self.token_size = token_size
+         self.cheat = cheat
+         self.num_timesteps = num_timesteps
+         self.rotation_components = rotation_components
+         self.seperate_control_proj = seperate_control_proj
+         self.timestep_proj_config = timestep_proj_config or {}
+         self.token_proj_config = token_proj_config or {}
+         self.transformer_config = transformer_config or {}
+
+         super().__init__(**kwargs)
+
+     @property
+     def control_components(self) -> int:
+         # Number of control dimensions: 3 translation, N rotation, 1 gripper
+         return 3 + self.rotation_components + 1
+
+     @property
+     def num_timestep_tokens(self) -> int:
+         return self.timestep_proj_config["num_tokens"]
+
+ # class WaypointerConfig(ConfigurableModuleConfig):
+ #     token_size: int = 1024                   # Timestep token size
+
+ #     cheat: bool                              # If True, cheat and use action tokens; works only with an OpenVLA checkpoint
+
+ #     timestep_proj_config: AutoConfig         # Timestep tokens
+ #     token_proj_config: TokenProjectorConfig  # LLM output tokens projection and packing
+ #     transformer_config: AutoConfig           # Transformer config
+
+ #     # Output configurations
+ #     num_timesteps: int = 20                  # Number of prediction time steps
+ #     rotation_components: int = 3             # Number of rotation components: euler -> 3, quaternion -> 4, rotmat -> 9
+ #     separate_control_proj: bool = True       # If True, project control components separately
+
+ #     @property
+ #     def control_components(self) -> int:
+ #         # Number of control dimensions: 3 translation, N rotation, 1 gripper
+ #         return 3 + self.rotation_components + 1
+
+ #     @property
+ #     def num_timestep_tokens(self) -> int:
+ #         return self.timestep_proj_config.num_tokens
+
+
+ class OpenVLAConfig(PrismaticConfig):
+     model_type: str = "openvla"
+
+     def __init__(
+         self,
+         norm_stats: Optional[Dict[str, Dict[str, Dict[str, Dict[str, List[float]]]]]] = None,
+         n_action_bins: int = 256,
+         **kwargs: str,
+     ) -> None:
+         self.norm_stats, self.n_action_bins = norm_stats, n_action_bins
+
+         super().__init__(**kwargs)
+
+ if __name__ == "__main__" :
236
+ # yaml_file = 'barrel/pipes/vlams/configs/waypoints/waypointer_multistep_fractal.yaml'
237
+
238
+ prismatic_config = PrismaticConfig()
239
+ print(prismatic_config)
240
+
241
+ prismatic_config_dict = {
242
+ "vision_backbone_id":"dinosiglip-vit-so-224px",
243
+ # "llm_backbone_id":"llama2-7b-pure",meta-llama/Llama-2-7b-hf
244
+ "llm_backbone_id": "meta-llama/Llama-2-7b-hf",
245
+
246
+ "arch_specifier": "no-align+gelu-mlp", ## TODO: check
247
+ "use_fused_vision_backbone" :None, ## TODO: check
248
+ "image_resize_strategy" : "letterbox",
249
+ "text_config" : None,
250
+ "llm_max_length" : 2048,
251
+ "pad_token_id" :32000,
252
+ "pad_to_multiple_of" : 64,
253
+ "output_projector_states" : False,
254
+ }
255
+ token_proj_config = {
256
+ "vit_tokens_layers": [2176, 1024],
257
+ "control_tokens_layers": [4096, 2048, 1024],
258
+ "image_tokens_mode": 'vit',
259
+ }
260
+ timestep_proj_config = {
261
+ "pos_embed_scale": 1.0,
262
+ "proj_layers": [1024],
263
+ "time_delta_sec": 0.1,
264
+ "num_tokens":3
265
+ }
266
+
267
+ TrajectoryVlaConfig = {
268
+ "prismatic_config":prismatic_config_dict,
269
+ "token_size": 1024,
270
+ "cheat": False,
271
+ "num_timesteps": 20,
272
+ "rotation_components": 3,
273
+ "seperate_control_proj": True,
274
+ "timestep_proj_config": {},
275
+ "token_proj_config": {},
276
+ "transformer_config": {},
277
+ }
278
+
279
+ TrajectoryVLAConfig = TrajectoryVLAConfig( **TrajectoryVlaConfig)
280
+ print(TrajectoryVLAConfig)
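For reference, a minimal sketch of exercising these classes from a local checkout of this repo (the Llama-2 tokenizer is gated on the Hub, so this assumes you already have access; the printed values follow from the definitions and config.json above):

```python
from transformers import AutoTokenizer

from prismatic_config import TrajectoryVLAConfig, WaypointTokenizer

# WaypointTokenizer overloads the tail of the vocabulary as control tokens:
# for Llama-2's 32,000-token vocab, the 10 control token ids are 31990..31999.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
waypoint_tokenizer = WaypointTokenizer(tokenizer, num_tokens=10)
print(waypoint_tokenizer.control_token_ids)

# With rotation_components=9 (rotation matrix), as in config.json:
# control_components = 3 translation + 9 rotation + 1 gripper = 13.
config = TrajectoryVLAConfig(rotation_components=9, timestep_proj_config={"num_tokens": 3})
print(config.control_components)   # 13
print(config.num_timestep_tokens)  # 3
```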