luotingdan commited on
Commit
049cfe1
·
1 Parent(s): ac3d9d9

add model deploy

README.md CHANGED
@@ -1,8 +1,9 @@
1
  ---
2
  license: apache-2.0
3
  base_model:
4
- - stepfun-ai/Step3-VL-10B-Base
5
  ---
 
6
  <div align="center">
7
 
8
  <div align="center" style="display: flex; justify-content: center; align-items: center;">
@@ -35,11 +36,10 @@ The success of STEP3-VL-10B is driven by two key strategic designs:
35
 
36
  ## 📥 Model Zoo
37
 
38
- | Model Name | Type | Hugging Face | ModelScope |
39
- |:-----------|:-----|:------------:|:----------:|
40
  | **STEP3-VL-10B-Base** | Base | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B-Base) | [🤖 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B-Base) |
41
- | **STEP3-VL-10B** | Chat | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B) | [🤖 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B) |
42
-
43
 
44
  ## 📊 Performance
45
 
@@ -47,17 +47,17 @@ STEP3-VL-10B delivers best-in-class performance across major multimodal benchmar
47
 
48
  ### Comparison with Larger Models (10×–20× Larger)
49
 
50
- | Benchmark | STEP3-VL-10B (SeRe) | STEP3-VL-10B (PaCoRe) | GLM-4.6V (106B-A12B) | Qwen3-VL (235B-A22B) | Gemini-2.5-Pro | Seed-1.5-VL |
51
- |:----------|:-------------------:|:---------------------:|:--------------------:|:--------------------:|:--------------:|:-----------:|
52
- | **MMMU** | 78.11 | **80.11** | 75.20 | 78.70 | 83.89 | 79.11 |
53
- | **MathVista** | 83.97 | **85.50** | 83.51 | 85.10 | 83.88 | 85.60 |
54
- | **MathVision** | 70.81 | **75.95** | 63.50 | 72.10 | 73.30 | 68.70 |
55
- | **MMBench (EN)** | 92.05 | 92.38 | 92.75 | 92.70 | **93.19** | 92.11 |
56
- | **MMStar** | 77.48 | 77.64 | 75.30 | 76.80 | **79.18** | 77.91 |
57
- | **OCRBench** | 86.75 | **89.00** | 86.20 | 87.30 | 85.90 | 85.20 |
58
- | **AIME 2025** | 87.66 | **94.43** | 71.88 | 83.59 | 83.96 | 64.06 |
59
- | **HMMT 2025** | 78.18 | **92.14** | 57.29 | 67.71 | 65.68 | 51.30 |
60
- | **LiveCodeBench** | 75.77 | **76.43** | 48.71 | 69.45 | 72.01 | 57.10 |
61
 
62
  <!-- > **Note:** **SeRe** (Sequential Reasoning) uses a max length of 64K tokens; **PaCoRe** (Parallel Coordinated Reasoning) synthesizes 16 SeRe rollouts with a max length of 128K tokens. -->
63
 
@@ -66,35 +66,35 @@ STEP3-VL-10B delivers best-in-class performance across major multimodal benchmar
66
  > **SeRe (Sequential Reasoning):** The standard inference mode using sequential generation (Chain-of-Thought) with a max length of 64K tokens.
67
  >
68
  > **PaCoRe (Parallel Coordinated Reasoning):** An advanced mode that scales test-time compute. It aggregates evidence from **16 parallel rollouts** to synthesize a final answer, utilizing a max context length of 128K tokens.
69
- >
70
- > *Unless otherwise stated, scores below refer to the standard SeRe mode. Higher scores achieved via PaCoRe are explicitly marked.*
71
 
72
  ### Comparison with Open-Source Models (7B–10B)
73
 
74
- | Category | Benchmark | STEP3-VL-10B | GLM-4.6V-Flash (9B) | Qwen3-VL-Thinking (8B) | InternVL-3.5 (8B) | MiMo-VL-RL-2508 (7B) |
75
- |:---------|:----------|:------------:|:-------------------:|:----------------------:|:-----------------:|:--------------------:|
76
- | **STEM Reasoning** | MMMU | **78.11** | 71.17 | 73.53 | 71.69 | 71.14 |
77
- | | MathVision | **70.81** | 54.05 | 59.60 | 52.05 | 59.65 |
78
- | | MathVista | **83.97** | 82.85 | 78.50 | 76.78 | 79.86 |
79
- | | PhyX | **59.45** | 52.28 | 57.67 | 50.51 | 56.00 |
80
- | **Recognition** | MMBench (EN) | **92.05** | 91.04 | 90.55 | 88.20 | 89.91 |
81
- | | MMStar | **77.48** | 74.26 | 73.58 | 69.83 | 72.93 |
82
- | | ReMI | **67.29** | 60.75 | 57.17 | 52.65 | 63.13 |
83
- | **OCR & Document** | OCRBench | **86.75** | 85.97 | 82.85 | 83.70 | 85.40 |
84
- | | AI2D | **89.35** | 88.93 | 83.32 | 82.34 | 84.96 |
85
- | **GUI Grounding** | ScreenSpot-V2 | 92.61 | 92.14 | **93.60** | 84.02 | 90.82 |
86
- | | ScreenSpot-Pro | **51.55** | 45.68 | 46.60 | 15.39 | 34.84 |
87
- | | OSWorld-G | **59.02** | 54.71 | 56.70 | 31.91 | 50.54 |
88
- | **Spatial** | BLINK | **66.79** | 64.90 | 62.78 | 55.40 | 62.57 |
89
- | | All-Angles-Bench | **57.21** | 53.24 | 45.88 | 45.29 | 51.62 |
90
- | **Code** | HumanEval-V | **66.05** | 29.26 | 26.94 | 24.31 | 31.96 |
91
 
92
  ### Key Capabilities
93
 
94
- * **STEM Reasoning:** Achieves **94.43%** on AIME 2025 and **75.95%** on MathVision (with PaCoRe), demonstrating exceptional complex reasoning capabilities that outperform models 10×–20× larger.
95
- * **Visual Perception:** Records **92.05%** on MMBench and **80.11%** on MMMU, establishing strong general visual understanding and multimodal reasoning.
96
- * **GUI & OCR:** Delivers state-of-the-art performance on ScreenSpot-V2 (**92.61%**), ScreenSpot-Pro (**51.55%**), and OCRBench (**86.75%**), optimized for agentic and document understanding tasks.
97
- * **Spatial Understanding:** Demonstrates emergent spatial awareness with **66.79%** on BLINK and **57.21%** on All-Angles-Bench, establishing strong potential for embodied intelligence applications.
98
 
99
  ## 🏗️ Architecture & Training
100
 
@@ -118,9 +118,56 @@ STEP3-VL-10B delivers best-in-class performance across major multimodal benchmar
118
  - **RLHF:** 300 iterations (Task: open-ended generation).
119
  - **PaCoRe Training:** 500 iterations (Context length: 64K max sequence).
120
 
121
-
122
  ## 🛠️ Quick Start
123
124
  ### Requirements
125
 
126
  To run STEP3-VL-10B efficiently, we recommend setting up a Python environment (>=3.10) with **vLLM**:
@@ -162,7 +209,6 @@ outputs = llm.chat(messages=messages, sampling_params=sampling_params)
162
  print(f"Output: {outputs[0].outputs[0].text}")
163
  ```
164
 
165
-
166
  ## 📜 Citation
167
 
168
  If you find this project useful in your research, please cite our technical report:
@@ -173,4 +219,4 @@ If you find this project useful in your research, please cite our technical repo
173
 
174
  ## 📄 License
175
 
176
- This project is open-sourced under the [Apache 2.0 License](https://www.google.com/search?q=LICENSE).
 
1
  ---
2
  license: apache-2.0
3
  base_model:
4
+ - stepfun-ai/Step3-VL-10B-Base
5
  ---
6
+
7
  <div align="center">
8
 
9
  <div align="center" style="display: flex; justify-content: center; align-items: center;">
 
36
 
37
  ## 📥 Model Zoo
38
 
39
+ | Model Name | Type | Hugging Face | ModelScope |
40
+ | :-------------------- | :--- | :----------------------------------------------------------------: | :----------------------------------------------------------------------: |
41
  | **STEP3-VL-10B-Base** | Base | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B-Base) | [🤖 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B-Base) |
42
+ | **STEP3-VL-10B** | Chat | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B) | [🤖 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B) |
 
43
 
44
  ## 📊 Performance
45
 
 
47
 
48
  ### Comparison with Larger Models (10×–20× Larger)
49
 
50
+ | Benchmark | STEP3-VL-10B (SeRe) | STEP3-VL-10B (PaCoRe) | GLM-4.6V (106B-A12B) | Qwen3-VL (235B-A22B) | Gemini-2.5-Pro | Seed-1.5-VL |
51
+ | :---------------- | :-----------------: | :-------------------: | :------------------: | :------------------: | :------------: | :---------: |
52
+ | **MMMU** | 78.11 | **80.11** | 75.20 | 78.70 | 83.89 | 79.11 |
53
+ | **MathVista** | 83.97 | **85.50** | 83.51 | 85.10 | 83.88 | 85.60 |
54
+ | **MathVision** | 70.81 | **75.95** | 63.50 | 72.10 | 73.30 | 68.70 |
55
+ | **MMBench (EN)** | 92.05 | 92.38 | 92.75 | 92.70 | **93.19** | 92.11 |
56
+ | **MMStar** | 77.48 | 77.64 | 75.30 | 76.80 | **79.18** | 77.91 |
57
+ | **OCRBench** | 86.75 | **89.00** | 86.20 | 87.30 | 85.90 | 85.20 |
58
+ | **AIME 2025** | 87.66 | **94.43** | 71.88 | 83.59 | 83.96 | 64.06 |
59
+ | **HMMT 2025** | 78.18 | **92.14** | 57.29 | 67.71 | 65.68 | 51.30 |
60
+ | **LiveCodeBench** | 75.77 | **76.43** | 48.71 | 69.45 | 72.01 | 57.10 |
61
 
62
  <!-- > **Note:** **SeRe** (Sequential Reasoning) uses a max length of 64K tokens; **PaCoRe** (Parallel Coordinated Reasoning) synthesizes 16 SeRe rollouts with a max length of 128K tokens. -->
63
 
 
66
  > **SeRe (Sequential Reasoning):** The standard inference mode using sequential generation (Chain-of-Thought) with a max length of 64K tokens.
67
  >
68
  > **PaCoRe (Parallel Coordinated Reasoning):** An advanced mode that scales test-time compute. It aggregates evidence from **16 parallel rollouts** to synthesize a final answer, utilizing a max context length of 128K tokens.
69
+ >
70
+ > _Unless otherwise stated, scores below refer to the standard SeRe mode. Higher scores achieved via PaCoRe are explicitly marked._
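The PaCoRe recipe itself (rollout prompts and the aggregation step) is not published in this card, so the sketch below is only a hypothetical illustration of the two-stage idea with vLLM: sample 16 SeRe rollouts in parallel, then run a single synthesis pass over their outputs. The prompt wording, sampling parameters, and helper text are assumptions for illustration, not the released procedure.

```python
# Hypothetical PaCoRe-style aggregation sketch (not the official implementation).
from vllm import LLM, SamplingParams

llm = LLM(model="stepfun-ai/Step3-VL-10B", trust_remote_code=True, max_model_len=131072)

question = [{"role": "user", "content": [{"type": "text", "text": "Solve the problem ..."}]}]

# Stage 1: 16 parallel sequential-reasoning (SeRe) rollouts.
rollouts = llm.chat(
    messages=question,
    sampling_params=SamplingParams(n=16, temperature=1.0, max_tokens=65536),
)[0].outputs

# Stage 2: one synthesis pass that aggregates evidence from all rollouts.
evidence = "\n\n".join(f"[Rollout {i}]\n{o.text}" for i, o in enumerate(rollouts))
synthesis = [{"role": "user", "content": [{"type": "text", "text":
    "Here are 16 candidate solutions to the same problem.\n\n" + evidence +
    "\n\nAggregate their evidence and give a single final answer."}]}]
final = llm.chat(messages=synthesis, sampling_params=SamplingParams(temperature=0.0, max_tokens=8192))
print(final[0].outputs[0].text)
```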
71
 
72
  ### Comparison with Open-Source Models (7B–10B)
73
 
74
+ | Category | Benchmark | STEP3-VL-10B | GLM-4.6V-Flash (9B) | Qwen3-VL-Thinking (8B) | InternVL-3.5 (8B) | MiMo-VL-RL-2508 (7B) |
75
+ | :----------------- | :--------------- | :----------: | :-----------------: | :--------------------: | :---------------: | :------------------: |
76
+ | **STEM Reasoning** | MMMU | **78.11** | 71.17 | 73.53 | 71.69 | 71.14 |
77
+ | | MathVision | **70.81** | 54.05 | 59.60 | 52.05 | 59.65 |
78
+ | | MathVista | **83.97** | 82.85 | 78.50 | 76.78 | 79.86 |
79
+ | | PhyX | **59.45** | 52.28 | 57.67 | 50.51 | 56.00 |
80
+ | **Recognition** | MMBench (EN) | **92.05** | 91.04 | 90.55 | 88.20 | 89.91 |
81
+ | | MMStar | **77.48** | 74.26 | 73.58 | 69.83 | 72.93 |
82
+ | | ReMI | **67.29** | 60.75 | 57.17 | 52.65 | 63.13 |
83
+ | **OCR & Document** | OCRBench | **86.75** | 85.97 | 82.85 | 83.70 | 85.40 |
84
+ | | AI2D | **89.35** | 88.93 | 83.32 | 82.34 | 84.96 |
85
+ | **GUI Grounding** | ScreenSpot-V2 | 92.61 | 92.14 | **93.60** | 84.02 | 90.82 |
86
+ | | ScreenSpot-Pro | **51.55** | 45.68 | 46.60 | 15.39 | 34.84 |
87
+ | | OSWorld-G | **59.02** | 54.71 | 56.70 | 31.91 | 50.54 |
88
+ | **Spatial** | BLINK | **66.79** | 64.90 | 62.78 | 55.40 | 62.57 |
89
+ | | All-Angles-Bench | **57.21** | 53.24 | 45.88 | 45.29 | 51.62 |
90
+ | **Code** | HumanEval-V | **66.05** | 29.26 | 26.94 | 24.31 | 31.96 |
91
 
92
  ### Key Capabilities
93
 
94
+ - **STEM Reasoning:** Achieves **94.43%** on AIME 2025 and **75.95%** on MathVision (with PaCoRe), demonstrating exceptional complex reasoning capabilities that outperform models 10×–20× larger.
95
+ - **Visual Perception:** Records **92.05%** on MMBench and **80.11%** on MMMU, establishing strong general visual understanding and multimodal reasoning.
96
+ - **GUI & OCR:** Delivers state-of-the-art performance on ScreenSpot-V2 (**92.61%**), ScreenSpot-Pro (**51.55%**), and OCRBench (**86.75%**), optimized for agentic and document understanding tasks.
97
+ - **Spatial Understanding:** Demonstrates emergent spatial awareness with **66.79%** on BLINK and **57.21%** on All-Angles-Bench, establishing strong potential for embodied intelligence applications.
98
 
99
  ## 🏗️ Architecture & Training
100
 
 
118
  - **RLHF:** 300 iterations (Task: open-ended generation).
119
  - **PaCoRe Training:** 500 iterations (Context length: 64K max sequence).
120
 
 
121
  ## 🛠️ Quick Start
122
 
123
+ ### Inference with Hugging Face Transformers
124
+
125
+ The example below shows how to run inference with the Hugging Face Transformers library. We recommend Python 3.10, torch>=2.1.0, and transformers==4.57.0 as the development environment. Only bf16 inference is currently supported, and multi-patch image preprocessing is enabled by default; this behavior is aligned with vLLM and SGLang.
126
+
127
+ ```python
128
+ from transformers import AutoProcessor, AutoModelForCausalLM
129
+
130
+
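+ # key_mapping remaps the flat weight prefixes of the released checkpoint onto the wrapped
+ # module layout (model.vision_model / model.language_model / model.vit_large_projector)
+ # defined by the remote modeling code.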
131
+ key_mapping = {
132
+ "^vision_model": "model.vision_model",
133
+ r"^model(?!\.(language_model|vision_model))": "model.language_model",
134
+ "vit_large_projector": "model.vit_large_projector",
135
+ }
136
+
137
+ model_path = "stepfun-ai/Step3-VL-10B"
138
+
139
+ processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
140
+
141
+ messages = [
142
+ {
143
+ "role": "user",
144
+ "content": [
145
+ {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
146
+ {"type": "text", "text": "What's in this picture?"}
147
+ ]
148
+ },
149
+ ]
150
+
151
+ model = AutoModelForCausalLM.from_pretrained(
152
+ model_path,
153
+ trust_remote_code=True,
154
+ device_map="auto",
155
+ torch_dtype="auto",
156
+ key_mapping=key_mapping).eval()
157
+
158
+
159
+ inputs = processor.apply_chat_template(
160
+ messages, add_generation_prompt=True, tokenize=True,
161
+ return_dict=True, return_tensors="pt"
162
+ ).to(model.device)
163
+
164
+
165
+ generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
166
+ decoded = processor.decode(generate_ids[0, inputs["input_ids"].shape[-1] :], skip_special_tokens=True)
167
+
168
+ print(decoded)
169
+ ```
170
+
171
  ### Requirements
172
 
173
  To run STEP3-VL-10B efficiently, we recommend setting up a Python environment (>=3.10) with **vLLM**:
 
209
  print(f"Output: {outputs[0].outputs[0].text}")
210
  ```
211
 
 
212
  ## 📜 Citation
213
 
214
  If you find this project useful in your research, please cite our technical report:
 
219
 
220
  ## 📄 License
221
 
222
+ This project is open-sourced under the [Apache 2.0 License](LICENSE).
configuration_step_vl.py ADDED
@@ -0,0 +1,77 @@
1
+ from typing import Any, Optional, Union
2
+
3
+ from transformers.configuration_utils import PretrainedConfig
4
+ from transformers import Qwen3Config
5
+
6
+
7
+ class StepRoboticsVisionEncoderConfig(PretrainedConfig):
8
+
9
+ def __init__(
10
+ self,
11
+ width=1536,
12
+ layers=47,
13
+ heads=16,
14
+ num_channels=3,
15
+ image_size=728,
16
+ mlp_ratio = 8960/1536,
17
+ patch_size=14,
18
+ hidden_act="quick_gelu",
19
+ layer_norm_eps=1e-5,
20
+ ues_cls_token=False,
21
+ use_ln_pre=True,
22
+ use_ln_post=False,
23
+ use_abs_posemb=True,
24
+ use_rope2d=True,
25
+ ls_init_value=0.1,
26
+ **kwargs,
27
+ ):
28
+ self.width = width
29
+ self.layers = layers
30
+ self.heads = heads
31
+ self.num_channels = num_channels
32
+ self.patch_size = patch_size
33
+ self.image_size = image_size
34
+ self.mlp_ratio = mlp_ratio
35
+ self.layer_norm_eps = layer_norm_eps
36
+ self.hidden_act = hidden_act
37
+ self.ues_cls_token = ues_cls_token
38
+ self.use_ln_pre = use_ln_pre
39
+ self.ls_init_value = ls_init_value
40
+ self.use_ln_post = use_ln_post
41
+ self.use_abs_posemb = use_abs_posemb
42
+ self.use_rope2d = use_rope2d
43
+ super().__init__(**kwargs)
44
+
45
+
46
+
47
+ class StepRoboticsConfig(PretrainedConfig):
48
+ model_type = "step_robotics"
49
+ architectures = ["StepVLForConditionalGeneration"]
50
+
51
+ def __init__(
52
+ self,
53
+ vision_config: Optional[Union[dict, StepRoboticsVisionEncoderConfig]] = None,
54
+ text_config: Optional[Union[dict, Qwen3Config]] = None,
55
+ understand_projector_stride: int = 2,
56
+ projector_bias: bool = False,
57
+ image_token_id: int = 151679,
58
+ **kwargs,
59
+ ) -> None:
60
+ if vision_config is None:
61
+ vision_config = StepRoboticsVisionEncoderConfig()
62
+ elif isinstance(vision_config, dict):
63
+ vision_config = StepRoboticsVisionEncoderConfig(**vision_config)
64
+ self.vision_config = vision_config
65
+
66
+ if text_config is None:
67
+ text_config = Qwen3Config()
68
+ elif isinstance(text_config, dict):
69
+ text_config = Qwen3Config(**text_config)
70
+ self.text_config = text_config
71
+
72
+ self.understand_projector_stride = understand_projector_stride
73
+ self.projector_bias = projector_bias
74
+ self.hidden_size = text_config.hidden_size
75
+ self.image_token_id = image_token_id
76
+ # Help Auto classes find the correct implementation when saving/loading.
77
+ super().__init__(**kwargs)
modeling_step_vl.py ADDED
@@ -0,0 +1,568 @@
1
+ # Copyright 2025 The LLAMA4 and HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ from dataclasses import dataclass
16
+ from typing import Callable, Optional, Tuple, Union
17
+ from PIL import Image
18
+
19
+ import torch
20
+ import torch.nn as nn
21
+ import torch.nn.functional as F
22
+ from transformers import Qwen3Model
23
+ from transformers.cache_utils import Cache, DynamicCache
24
+ from transformers.generation import GenerationMixin
25
+ from transformers.modeling_outputs import BaseModelOutputWithPast, ModelOutput
26
+ from transformers.modeling_utils import PreTrainedModel
27
+ from transformers.processing_utils import Unpack
28
+ from transformers.utils import TransformersKwargs, can_return_tuple, logging
29
+
30
+ from typing import Any, Literal, Optional, TypedDict, Union
31
+
32
+ from configuration_step_vl import StepRoboticsConfig
33
+ from vision_encoder import StepRoboticsVisionEncoder
34
+ logger = logging.get_logger(__name__)
35
+
36
+ class StepVLImagePixelInputs(TypedDict):
37
+ type: Literal["pixel_values"]
38
+ pixel_values: torch.Tensor
39
+ patch_pixel_values: Optional[torch.Tensor]
40
+ num_patches: list[int]
41
+
42
+
43
+ class StepVLImageEmbeddingInputs(TypedDict):
44
+ type: Literal["image_embeds"]
45
+ image_embeds: torch.Tensor
46
+
47
+
48
+ StepVLImageInputs = Union[StepVLImagePixelInputs,
49
+ StepVLImageEmbeddingInputs]
50
+
51
+
52
+ @dataclass
53
+ class StepVLCausalLMOutputWithPast(ModelOutput):
54
+ r"""
55
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
56
+ Language modeling loss (for next-token prediction).
57
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
58
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
59
+ past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
60
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
61
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`)
62
+ Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
63
+ `past_key_values` input) to speed up sequential decoding.
64
+ """
65
+
66
+ loss: Optional[torch.FloatTensor] = None
67
+ last_hidden_state: Optional[torch.FloatTensor] = None
68
+ logits: torch.FloatTensor = None
69
+ past_key_values: Optional[list[torch.FloatTensor]] = None
70
+ hidden_states: Optional[tuple[torch.FloatTensor]] = None
71
+ attentions: Optional[tuple[torch.FloatTensor]] = None
72
+ image_hidden_states: Optional[torch.FloatTensor] = None
73
+
74
+ def _flatten_embeddings(embeddings) -> torch.Tensor:
75
+ """
76
+ Recursively flattens and concatenates NestedTensors on all but the last
77
+ dimension.
78
+ """
79
+
80
+ if isinstance(embeddings, torch.Tensor):
81
+ # Flatten all but the last dimension.
82
+ return embeddings.flatten(0, -2)
83
+
84
+ return torch.cat(tuple(_flatten_embeddings(t) for t in embeddings))
85
+
86
+ def _embedding_count_expression(embeddings) -> str:
87
+ """
88
+ Constructs a debugging representation of the number of embeddings in the
89
+ NestedTensors.
90
+ """
91
+
92
+ if isinstance(embeddings, torch.Tensor):
93
+ return " x ".join([str(dim) for dim in embeddings.shape[:-1]])
94
+
95
+ return " + ".join(
96
+ _embedding_count_expression(inner) for inner in embeddings)
97
+
98
+ def _merge_multimodal_embeddings(
99
+ inputs_embeds: torch.Tensor,
100
+ is_multimodal: torch.Tensor,
101
+ multimodal_embeddings,
102
+ ) -> torch.Tensor:
103
+ """
104
+ Merge ``multimodal_embeddings`` into ``inputs_embeds`` by overwriting the
105
+ positions in ``inputs_embeds`` corresponding to placeholder tokens in
106
+ ``input_ids``.
107
+ Note:
108
+ This updates ``inputs_embeds`` in place.
109
+ """
110
+ num_expected_tokens = is_multimodal.sum().item()
111
+ assert isinstance(num_expected_tokens, int)
112
+
113
+ flattened = _flatten_embeddings(multimodal_embeddings)
114
+ if flattened.shape[0] != num_expected_tokens:
115
+ expr = _embedding_count_expression(multimodal_embeddings)
116
+ raise ValueError(
117
+ f"Attempted to assign {expr} = {flattened.shape[0]} "
118
+ f"multimodal tokens to {num_expected_tokens} placeholders")
119
+
120
+ is_multimodal = is_multimodal.to(inputs_embeds.device)
121
+ flattened = flattened.to(inputs_embeds.device)
122
+ inputs_embeds[is_multimodal] = flattened
123
+ return inputs_embeds
124
+
125
+ def merge_multimodal_embeddings(
126
+ input_ids: torch.Tensor,
127
+ inputs_embeds: torch.Tensor,
128
+ multimodal_embeddings,
129
+ placeholder_token_id: Union[int, list[int]],
130
+ ) -> torch.Tensor:
131
+ """
132
+ Merge ``multimodal_embeddings`` into ``inputs_embeds`` by overwriting the
133
+ positions in ``inputs_embeds`` corresponding to placeholder tokens in
134
+ ``input_ids``.
135
+
136
+ ``placeholder_token_id`` can be a list of token ids (e.g, token ids
137
+ of img_start, img_break, and img_end tokens) when needed: This means
138
+ the order of these tokens in the ``input_ids`` MUST MATCH the order of
139
+ their embeddings in ``multimodal_embeddings`` since we need to
140
+ slice-merge instead of individually scattering.
141
+ For example, if input_ids is "TTTTTSIIIBIIIBIIIETTT", where
142
+ - T is text token
143
+ - S is image start token
144
+ - I is image embedding token
145
+ - B is image break token
146
+ - E is image end token.
147
+
148
+ Then the image embeddings (that correspond to I's) from vision encoder
149
+ must be padded with embeddings of S, B, and E in the same order of
150
+ input_ids for a correct embedding merge.
151
+ Note:
152
+ This updates ``inputs_embeds`` in place.
153
+ """
154
+ if isinstance(placeholder_token_id, list):
155
+ placeholder_token_id = torch.tensor(placeholder_token_id,
156
+ device=input_ids.device)
157
+ return _merge_multimodal_embeddings(
158
+ inputs_embeds,
159
+ torch.isin(input_ids, placeholder_token_id),
160
+ multimodal_embeddings,
161
+ )
162
+
163
+ return _merge_multimodal_embeddings(
164
+ inputs_embeds,
165
+ (input_ids == placeholder_token_id),
166
+ multimodal_embeddings,
167
+ )
168
+
169
+ class StepRoboticsPreTrainedModel(PreTrainedModel):
170
+ # Link this model family to its configuration class so PreTrainedModel.from_pretrained
171
+ # can load the config instead of failing with a NoneType error.
172
+ config_class = StepRoboticsConfig
173
+ supports_gradient_checkpointing = True
174
+ _skip_keys_device_placement = ["past_key_values"]
175
+ _supports_flash_attn = False
176
+ _supports_sdpa = True
177
+ _supports_flex_attn = True
178
+ _supports_static_cache = True
179
+ _supports_attention_backend = True
180
+
181
+
182
+ class StepRoboticsModel(StepRoboticsPreTrainedModel, GenerationMixin):
183
+ config: StepRoboticsConfig
184
+ base_model_prefix = ""
185
+ def __init__(self, config: StepRoboticsConfig):
186
+ super().__init__(config)
187
+ self.vision_model = StepRoboticsVisionEncoder(config.vision_config)
188
+ self.language_model = Qwen3Model(config.text_config)
189
+ self.vocab_size = config.text_config.vocab_size
190
+ self.vit_large_projector = nn.Linear(
191
+ config.vision_config.width * 4,
192
+ config.text_config.hidden_size,
193
+ bias=config.projector_bias)
194
+ self.image_placeholder_token_id = config.image_token_id
195
+
196
+ # Initialize weights and apply final processing
197
+ self.post_init()
198
+
199
+ def get_input_embeddings(
200
+ self,
201
+ input_ids: torch.Tensor,
202
+ multimodal_embeddings = None,
203
+ ) -> torch.Tensor:
204
+ input_ids = input_ids.squeeze(0)
205
+ if multimodal_embeddings is None:
206
+ inputs_embeds = self.language_model.embed_tokens(input_ids)
207
+ else:
208
+ is_text = input_ids != self.config.image_token_id
209
+ text_ids = input_ids[is_text]
210
+ text_embeds = self.language_model.embed_tokens(text_ids)
211
+
212
+ inputs_embeds = torch.empty(input_ids.shape[0],
213
+ text_embeds.shape[-1],
214
+ dtype=text_embeds.dtype,
215
+ device=text_embeds.device)
216
+ inputs_embeds[is_text] = text_embeds
217
+ inputs_embeds = merge_multimodal_embeddings(
218
+ input_ids, inputs_embeds, multimodal_embeddings,
219
+ self.config.image_token_id)
220
+ inputs_embeds = inputs_embeds.unsqueeze(0)
221
+ return inputs_embeds
222
+
223
+
224
+ def set_input_embeddings(self, value):
225
+ return self.language_model.set_input_embeddings(value)
226
+
227
+ def set_decoder(self, decoder):
228
+ self.language_model = decoder
229
+
230
+ def get_decoder(self):
231
+ return self.language_model
232
+
233
+ def _parse_and_validate_image_input(
234
+ self, **kwargs: object) -> Optional[StepVLImageInputs]:
235
+ pixel_values = kwargs.pop("pixel_values", None)
236
+ patch_pixel_values = kwargs.pop("patch_pixel_values", None)
237
+ num_patches = kwargs.pop("num_patches", None)
238
+ image_embeds = kwargs.pop("image_embeds", None)
239
+
240
+ if pixel_values is None and image_embeds is None:
241
+ return None
242
+
243
+ if pixel_values is not None:
244
+ # pixel_values = flatten_bn(pixel_values, concat=True)
245
+ if pixel_values.dim() >= 3:
246
+ pixel_values = pixel_values.view(-1, *pixel_values.shape[-3:])
247
+ if patch_pixel_values is not None:
248
+ # patch_pixel_values = flatten_bn(patch_pixel_values,
249
+ # concat=True)
250
+ patch_pixel_values = patch_pixel_values.view(
251
+ -1, *patch_pixel_values.shape[-3:])
252
+ # Handle empty patch_pixel_values by setting to None
253
+ if patch_pixel_values.shape[0] == 0:
254
+ patch_pixel_values = None
255
+
256
+ return StepVLImagePixelInputs(
257
+ type="pixel_values",
258
+ pixel_values=pixel_values.to(self.dtype).to(self.device),
259
+ patch_pixel_values=patch_pixel_values.to(self.dtype).to(
260
+ self.device) if patch_pixel_values is not None else None,
261
+ num_patches=num_patches,
262
+ )
263
+
264
+ if image_embeds is not None:
265
+ if image_embeds.dim() == 2 or image_embeds.dim() >= 3:
266
+ image_embeds = image_embeds.view(-1, image_embeds.shape[-1])
267
+ else:
268
+ raise ValueError(
269
+ f"Unexpected shape for image_embeds: {image_embeds.shape}")
270
+
271
+ return StepVLImageEmbeddingInputs(
272
+ type="image_embeds",
273
+ image_embeds=image_embeds.to(self.dtype).to(self.device),
274
+ )
275
+ return None
276
+
277
+ def _process_image_features(self,
278
+ image_features: torch.Tensor) -> torch.Tensor:
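+ # Reshape the P vision tokens into an HW x HW grid, apply the vision tower's two
+ # downsampling blocks, then flatten back to a token sequence and project to the
+ # language model hidden size with vit_large_projector.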
279
+ B, P = image_features.shape[:2]
280
+ HW = int(P ** 0.5)
281
+ image_features = image_features.permute(0, 2, 1).view(B, -1, HW, HW)
282
+ image_features = self.vision_model.vit_downsampler1(image_features)
283
+ image_features = self.vision_model.vit_downsampler2(image_features)
284
+
285
+ B, C, HW, HW = image_features.shape
286
+ image_features = image_features.view(B, -1, HW * HW).permute(0, 2, 1)
287
+ image_features = self.vit_large_projector(image_features)
288
+ return image_features
289
+
290
+ def _get_vision_model_output(self,
291
+ input_tensor: torch.Tensor) -> torch.Tensor:
292
+ return self.vision_model(input_tensor)
293
+
294
+ def _get_pooled_vision_model_output(
295
+ self, input_tensor: torch.Tensor) -> torch.Tensor:
296
+ return self.vision_model.pool(input_tensor)
297
+
298
+ def _process_image_input(
299
+ self, image_input: StepVLImageInputs) -> tuple[torch.Tensor, ...]:
300
+
301
+ if image_input["type"] == "image_embeds":
302
+ image_features = image_input["image_embeds"]
303
+ else:
304
+ image_features = self._get_vision_model_output(
305
+ image_input["pixel_values"])
306
+ patch_image_features = self._get_vision_model_output(
307
+ image_input["patch_pixel_values"]
308
+ ) if image_input["patch_pixel_values"] is not None else None
309
+ num_patches = image_input["num_patches"]
310
+
311
+ image_features = self._process_image_features(image_features)
312
+ patch_image_features = self._process_image_features(
313
+ patch_image_features) if patch_image_features is not None else None
314
+
315
+ merged_image_features = []
316
+ cur_patch_idx = 0
317
+ for i, num_patch in enumerate(num_patches):
318
+ cur_feature = []
319
+ if num_patch > 0:
320
+ patch_slice = patch_image_features[
321
+ cur_patch_idx:cur_patch_idx + num_patch]
322
+ cur_feature.append(patch_slice.view(-1, patch_slice.shape[-1]))
323
+ cur_feature.append(image_features[i].view(
324
+ -1, image_features.shape[-1]))
325
+ cur_patch_idx += num_patch
326
+ merged_image_features.append(
327
+ torch.cat(cur_feature) if len(cur_feature) >
328
+ 1 else cur_feature[0])
329
+
330
+ return merged_image_features
331
+
332
+ def get_multimodal_embeddings(self, **kwargs):
333
+ image_input = self._parse_and_validate_image_input(**kwargs)
334
+ if image_input is None:
335
+ return None
336
+ vision_embeddings = self._process_image_input(image_input)
337
+ return vision_embeddings
338
+
339
+ @can_return_tuple
340
+ def forward(
341
+ self,
342
+ input_ids: torch.LongTensor = None,
343
+ attention_mask: Optional[torch.Tensor] = None,
344
+ position_ids: Optional[torch.LongTensor] = None,
345
+ past_key_values: Optional[Union[Cache, list[torch.FloatTensor]]] = None,
346
+ inputs_embeds: Optional[torch.FloatTensor] = None,
347
+ labels: Optional[torch.LongTensor] = None,
348
+ use_cache: Optional[bool] = None,
349
+ output_attentions: Optional[bool] = None,
350
+ output_hidden_states: Optional[bool] = None,
351
+ return_dict: Optional[bool] = None,
352
+ cache_position: Optional[torch.LongTensor] = None,
353
+ logits_to_keep: Union[int, torch.Tensor] = 0,
354
+ images: Optional[list[Image.Image]] = None,
355
+ **kwargs: Unpack[TransformersKwargs],
356
+ ) -> Union[tuple, StepVLCausalLMOutputWithPast]:
357
+ r"""
358
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
359
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
360
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
361
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
362
+ Example:
363
+ ```python
364
+ >>> from transformers import AutoTokenizer, Llama4ForCausalLM
365
+ >>> model = Llama4ForCausalLM.from_pretrained("meta-llama4/Llama4-2-7b-hf")
366
+ >>> tokenizer = AutoTokenizer.from_pretrained("meta-llama4/Llama4-2-7b-hf")
367
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
368
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
369
+ >>> # Generate
370
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
371
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
372
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
373
+ ```"""
374
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
375
+ output_hidden_states = (
376
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
377
+ )
378
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
379
+
380
+ if inputs_embeds is None:
381
+ input_ids = input_ids
382
+ vision_embeddings = self.get_multimodal_embeddings(**kwargs)
383
+ inputs_embeds = self.get_input_embeddings(input_ids,
384
+ vision_embeddings)
385
+ input_ids = None
386
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
387
+ outputs = self.language_model(
388
+ input_ids=None,
389
+ position_ids=position_ids,
390
+ attention_mask=attention_mask,
391
+ past_key_values=past_key_values,
392
+ inputs_embeds=inputs_embeds,
393
+ use_cache=use_cache,
394
+ output_attentions=output_attentions,
395
+ output_hidden_states=output_hidden_states,
396
+ return_dict=True,
397
+ cache_position=cache_position,
398
+ **kwargs,
399
+ )
400
+
401
+ output = StepVLCausalLMOutputWithPast(
402
+ last_hidden_state=outputs.last_hidden_state,
403
+ past_key_values=outputs.past_key_values,
404
+ attentions=outputs.attentions,
405
+
406
+ )
407
+ return output if return_dict else output.to_tuple()
408
+
409
+
410
+
411
+ class Step3VL10BForCausalLM(StepRoboticsPreTrainedModel, GenerationMixin):
412
+ _checkpoint_conversion_mapping = {
413
+ "^vision_model": "model.vision_model",
414
+ r"^model(?!\.(language_model|vision_model))": "model.language_model",
415
+ "^vit_large_projector": "model.vit_large_projector"
416
+ }
417
+ _tied_weights_keys = ["lm_head.weight"]
418
+ config: StepRoboticsConfig
419
+
420
+ def __init__(self, config: StepRoboticsConfig):
421
+ super().__init__(config)
422
+ self.model = StepRoboticsModel(config)
423
+ self.lm_head = nn.Linear(config.hidden_size, config.text_config.vocab_size, bias=False)
424
+
425
+ self.post_init()
426
+
427
+ def get_input_embeddings(self):
428
+ return self.model.get_input_embeddings()
429
+
430
+ def set_input_embeddings(self, value):
431
+ self.model.set_input_embeddings(value)
432
+
433
+ def get_output_embeddings(self):
434
+ return self.model.get_output_embeddings()
435
+
436
+ def set_output_embeddings(self, new_embeddings):
437
+ self.model.set_output_embeddings(new_embeddings)
438
+
439
+ def set_decoder(self, decoder):
440
+ self.model.set_decoder(decoder)
441
+
442
+ def get_decoder(self):
443
+ return self.model.get_decoder()
444
+
445
+ @property
446
+ def language_model(self):
447
+ return self.model.language_model
448
+
449
+ @property
450
+ def visual(self):
451
+ return self.model.vision_model
452
+
453
+ def forward(
454
+ self,
455
+ input_ids: torch.LongTensor = None,
456
+ num_patches = None,
457
+ patch_pixel_values = None,
458
+ patch_newline_mask = None,
459
+ attention_mask: Optional[torch.Tensor] = None,
460
+ position_ids: Optional[torch.LongTensor] = None,
461
+ past_key_values: Optional[Cache] = None,
462
+ inputs_embeds: Optional[torch.FloatTensor] = None,
463
+ labels: Optional[torch.LongTensor] = None,
464
+ use_cache: Optional[bool] = None,
465
+ output_attentions: Optional[bool] = None,
466
+ output_hidden_states: Optional[bool] = None,
467
+ return_dict: Optional[bool] = None,
468
+ cache_position: Optional[torch.LongTensor] = None,
469
+ **kwargs: Unpack[TransformersKwargs],
470
+ ) -> Union[tuple, StepVLCausalLMOutputWithPast]:
471
+ r"""
472
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
473
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
474
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
475
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
476
+ Example:
477
+ ```python
478
+ >>> from PIL import Image
479
+ >>> import requests
480
+ >>> from transformers import AutoProcessor, LlavaForConditionalGeneration
481
+ >>> model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
482
+ >>> processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
483
+ >>> prompt = "USER: <image>\nWhat's the content of the image? ASSISTANT:"
484
+ >>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
485
+ >>> image = Image.open(requests.get(url, stream=True).raw)
486
+ >>> inputs = processor(images=image, text=prompt, return_tensors="pt")
487
+ >>> # Generate
488
+ >>> generate_ids = model.generate(**inputs, max_new_tokens=15)
489
+ >>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
490
+ "USER: \nWhat's the content of the image? ASSISTANT: The image features a busy city street with a stop sign prominently displayed"
491
+ ```"""
492
+
493
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
494
+ output_hidden_states = (
495
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
496
+ )
497
+
498
+ outputs = self.model(
499
+ input_ids=input_ids,
500
+ num_patches = num_patches,
501
+ patch_pixel_values = patch_pixel_values,
502
+ patch_newline_mask=patch_newline_mask,
503
+ position_ids=position_ids,
504
+ attention_mask=attention_mask,
505
+ past_key_values=past_key_values,
506
+ inputs_embeds=inputs_embeds,
507
+ use_cache=use_cache,
508
+ output_attentions=output_attentions,
509
+ output_hidden_states=output_hidden_states,
510
+ return_dict=return_dict,
511
+ cache_position=cache_position,
512
+ **kwargs,
513
+ )
514
+
515
+ hidden_states = outputs.last_hidden_state
516
+ logits = self.lm_head(hidden_states)
517
+
518
+ loss = None
519
+ if labels is not None:
520
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.text_config.vocab_size)
521
+
522
+ return StepVLCausalLMOutputWithPast(
523
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
524
+ )
525
+
526
+ def prepare_inputs_for_generation(
527
+ self,
528
+ input_ids,
529
+ past_key_values=None,
530
+ inputs_embeds=None,
531
+ pixel_values=None,
532
+ attention_mask=None,
533
+ cache_position=None,
534
+ logits_to_keep=None,
535
+ **kwargs,
536
+ ):
537
+ # Overwritten -- in specific circumstances we don't want to forward image inputs to the model
538
+
539
+ model_inputs = super().prepare_inputs_for_generation(
540
+ input_ids,
541
+ past_key_values=past_key_values,
542
+ inputs_embeds=inputs_embeds,
543
+ attention_mask=attention_mask,
544
+ cache_position=cache_position,
545
+ logits_to_keep=logits_to_keep,
546
+ **kwargs,
547
+ )
548
+
549
+ if cache_position[0] == 0:
550
+ # Only forward pixel values on the first (prefill) step; during cached decoding the
551
+ # input ids no longer contain the image placeholder tokens, so image inputs are not needed.
552
+ model_inputs["pixel_values"] = pixel_values
553
+
554
+ return model_inputs
555
+
556
+ def _fix_state_dict_key_on_load(self, key: str) -> tuple[str, bool]:
557
+ if key.startswith("language_model."):
558
+ return key[len("language_model."):], True
559
+
560
+ return key, False
561
+
562
+
563
+ # Register config/model so Auto classes can instantiate this implementation.
564
+ from transformers.models.auto.configuration_auto import AutoConfig
565
+ from transformers.models.auto.modeling_auto import AutoModelForCausalLM
566
+
567
+ AutoConfig.register("step_robotics", StepRoboticsConfig)
568
+ AutoModelForCausalLM.register(StepRoboticsConfig, Step3VL10BForCausalLM)
processing_step3.py ADDED
@@ -0,0 +1,464 @@
1
+ from transformers import BaseImageProcessor, ImageProcessingMixin
2
+ from transformers.processing_utils import ImagesKwargs, MultiModalData, ProcessingKwargs, ProcessorMixin, Unpack, VideosKwargs
3
+ import math
4
+ from typing import Iterable, Optional, Tuple, List, TypedDict, Literal, Union, overload
5
+
6
+ from PIL import Image
7
+ import torch
8
+ import numpy as np
9
+ import torchvision
10
+ from torch import nn
11
+ from torch.nn import functional as F, LayerNorm
12
+ from torchvision.transforms.functional import InterpolationMode
13
+ from transformers.activations import ACT2FN
14
+ from torchvision import transforms
15
+ from torchvision.transforms.functional import InterpolationMode
16
+ from transformers.feature_extraction_utils import BatchFeature, TensorType
17
+ from transformers.image_utils import ImageInput
18
+ from transformers.processing_utils import ProcessingKwargs, ProcessorMixin, Unpack
19
+ from math import ceil
20
+ from itertools import product
21
+
22
+
23
+
24
+ MAX_IMAGE_SIZE: int = 3024
25
+
26
+ class Step3VLImagePixelInputs(TypedDict):
27
+ type: Literal["pixel_values"]
28
+ pixel_values: torch.Tensor
29
+ patch_pixel_values: Optional[torch.Tensor]
30
+ num_patches: list[int]
31
+
32
+
33
+ class Step3VLImageEmbeddingInputs(TypedDict):
34
+ type: Literal["image_embeds"]
35
+ image_embeds: torch.Tensor
36
+
37
+
38
+ ImageWithPatches = tuple[Image.Image, list[Image.Image], list[bool] | None]
39
+
40
+
41
+ class GPUToTensor(torch.nn.Module):
42
+
43
+ def forward(self, raw_image: Union[np.ndarray,
44
+ Image.Image]) -> torch.Tensor:
45
+ if isinstance(raw_image, Image.Image):
46
+ return transforms.ToTensor()(raw_image)
47
+ if raw_image.ndim == 2:
48
+ raw_image = raw_image[:, :, None].repeat(3, -1)
49
+ if torch.cuda.is_available():
50
+ device = torch.device("cuda")
51
+ else:
52
+ device = torch.device("cpu")
53
+ image_tensor = torch.from_numpy(raw_image).to(device)
54
+ image_tensor = torch.permute(image_tensor, (2, 0, 1)).contiguous()
55
+ if image_tensor.dtype == torch.uint8:
56
+ image_tensor = image_tensor.to(torch.float32).div(255)
57
+ return image_tensor
58
+
59
+ class Step3VisionProcessor(BaseImageProcessor):
60
+
61
+ def __init__(self, size, interpolation_mode="bicubic", patch_size=None):
62
+ mean = [0.48145466, 0.4578275, 0.40821073]
63
+ std = [0.26862954, 0.26130258, 0.27577711]
64
+ patch_size = patch_size if patch_size is not None else size
65
+
66
+ self.transform = transforms.Compose([
67
+ GPUToTensor(),
68
+ transforms.Normalize(mean, std),
69
+ transforms.Resize(
70
+ (size, size),
71
+ interpolation=InterpolationMode.BICUBIC if interpolation_mode
72
+ == "bicubic" else InterpolationMode.BILINEAR,
73
+ antialias=True),
74
+ ])
75
+
76
+ self.patch_transform = transforms.Compose([
77
+ GPUToTensor(),
78
+ transforms.Normalize(mean, std),
79
+ transforms.Resize(
80
+ (patch_size, patch_size),
81
+ interpolation=InterpolationMode.BICUBIC if interpolation_mode
82
+ == "bicubic" else InterpolationMode.BILINEAR,
83
+ antialias=True),
84
+ ]) if patch_size is not None else None
85
+
86
+ def __call__(self, image, is_patch=False):
87
+ if is_patch:
88
+ return {"pixel_values": self.patch_transform(image).unsqueeze(0)}
89
+ else:
90
+ return {"pixel_values": self.transform(image).unsqueeze(0)}
91
+
92
+ class ImagePatcher:
93
+ def determine_window_size(self, long: int, short: int) -> int:
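+ # Window-size policy: images with long side <= 728 are tiled only when the aspect ratio
+ # exceeds 1.5 (window = short side, 0 means no tiling); larger images use 504-px windows,
+ # shrunk to min(short side, 504) when the aspect ratio exceeds 4.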
94
+ if long <= 728:
95
+ return short if long / short > 1.5 else 0
96
+ return min(short, 504) if long / short > 4 else 504
97
+ def slide_window(
98
+ self,
99
+ width: int,
100
+ height: int,
101
+ sizes: list[tuple[int, int]],
102
+ steps: list[tuple[int, int]],
103
+ img_rate_thr: float = 0.6,
104
+ ) -> tuple[list[tuple[int, int, int, int]], tuple[int, int]]:
105
+ assert 1 >= img_rate_thr >= 0, "The `img_rate_thr` should lie in 0~1"
106
+ windows = []
107
+ # Sliding windows.
108
+ for size, step in zip(sizes, steps):
109
+ size_w, size_h = size
110
+ step_w, step_h = step
111
+
112
+ x_num = 1 if width <= size_w else ceil((width - size_w) / step_w +
113
+ 1)
114
+ x_start = [step_w * i for i in range(x_num)]
115
+ if len(x_start) > 1 and x_start[-1] + size_w > width:
116
+ x_start[-1] = width - size_w
117
+
118
+ y_num = 1 if height <= size_h else ceil((height - size_h) /
119
+ step_h + 1)
120
+ y_start = [step_h * i for i in range(y_num)]
121
+ if len(y_start) > 1 and y_start[-1] + size_h > height:
122
+ y_start[-1] = height - size_h
123
+
124
+ start = np.array(list(product(y_start, x_start)), dtype=int)
125
+ start[:, [0, 1]] = start[:, [1, 0]]
126
+ windows.append(np.concatenate([start, start + size], axis=1))
127
+ windows = np.concatenate(windows, axis=0)
128
+
129
+ return [(int(box[0]), int(box[1]), int(box[2] - box[0]),
130
+ int(box[3] - box[1])) for box in windows], (x_num, y_num)
131
+
132
+ def square_pad(self, img: Image.Image) -> Image.Image:
133
+ w, h = img.size
134
+ if w == h:
135
+ return img
136
+ size = max(w, h)
137
+ padded = Image.new(img.mode, (size, size), 0)
138
+ padded.paste(img, (0, 0))
139
+ return padded
140
+
141
+ def get_image_size_for_padding(self, img_width: int,
142
+ img_height: int) -> tuple[int, int]:
143
+ ratio = img_width / img_height
144
+ if min(img_height, img_width) < 32 and (ratio > 4 or ratio < 1 / 4):
145
+ new_size = max(img_height, img_width)
146
+ return new_size, new_size
147
+ return img_width, img_height
148
+
149
+ def get_image_size_for_preprocess(self, img_width: int,
150
+ img_height: int) -> tuple[int, int]:
151
+
152
+ if max(img_height, img_width) > MAX_IMAGE_SIZE:
153
+ scale_factor = MAX_IMAGE_SIZE / max(img_height, img_width)
154
+ img_width = int(img_width * scale_factor)
155
+ img_height = int(img_height * scale_factor)
156
+ return img_width, img_height
157
+
158
+ def get_image_size_for_crop(self, img_width: int, img_height: int,
159
+ window_size: int):
160
+ w_ratio = img_width / window_size
161
+ h_ratio = img_height / window_size
162
+
163
+ if w_ratio < 1:
164
+ width_new = img_width
165
+ else:
166
+ decimal_w = w_ratio - img_width // window_size
167
+ w_ratio = int(w_ratio) + 1 if decimal_w > 0.2 else int(w_ratio)
168
+ width_new = window_size * w_ratio
169
+ if h_ratio < 1:
170
+ height_new = img_height
171
+ else:
172
+ decimal_h = h_ratio - img_height // window_size
173
+ h_ratio = int(h_ratio) + 1 if decimal_h > 0.2 else int(h_ratio)
174
+ height_new = window_size * h_ratio
175
+ return int(width_new), int(height_new)
176
+
177
+ def patch_crop(self, img: Image.Image, i: int, j: int, th: int, tw: int):
178
+ target = img.crop((j, i, j + tw, i + th))
179
+ return target
180
+
181
+ def get_num_patches(self, img_width: int,
182
+ img_height: int) -> tuple[int, int]:
183
+ img_width, img_height = self.get_image_size_for_padding(
184
+ img_width, img_height)
185
+ img_width, img_height = self.get_image_size_for_preprocess(
186
+ img_width, img_height)
187
+ window_size = self.determine_window_size(max(img_height, img_width),
188
+ min(img_height, img_width))
189
+ if window_size == 0:
190
+ return 0, 0
191
+ else:
192
+ img_width, img_height = self.get_image_size_for_crop(
193
+ img_width, img_height, window_size)
194
+ center_list, (x_num, y_num) = self.slide_window(
195
+ img_width, img_height, [(window_size, window_size)],
196
+ [(window_size, window_size)])
197
+ full_rows = (len(center_list) - 1) // x_num + 1
198
+ if len(center_list) > 0 and len(center_list) % x_num == 0:
199
+ full_rows -= 1
200
+ return len(center_list), full_rows
201
+
202
+ def __call__(
203
+ self, img: Image.Image
204
+ ) -> tuple[Image.Image, list[Image.Image], list[bool] | None]:
205
+ img_width, img_height = img.size
206
+ new_img_width, new_img_height = self.get_image_size_for_padding(
207
+ img_width, img_height)
208
+ if new_img_width != img_width or new_img_height != img_height:
209
+ img = self.square_pad(img)
210
+ img_width, img_height = img.size
211
+
212
+ new_img_width, new_img_height = self.get_image_size_for_preprocess(
213
+ img_width, img_height)
214
+ img = img.resize((new_img_width, new_img_height),
215
+ Image.Resampling.BILINEAR)
216
+ window_size = self.determine_window_size(
217
+ max(new_img_height, new_img_width),
218
+ min(new_img_height, new_img_width))
219
+
220
+ if window_size == 0:
221
+ return img, [], None
222
+ else:
223
+ new_img_width, new_img_height = self.get_image_size_for_crop(
224
+ new_img_width, new_img_height, window_size)
225
+ if (new_img_width, new_img_height) != (img_width, img_height):
226
+ img_for_crop = img.resize((new_img_width, new_img_height),
227
+ Image.Resampling.BILINEAR)
228
+ else:
229
+ img_for_crop = img
230
+
231
+ patches = []
232
+ newlines = []
233
+ center_list, (x_num, y_num) = self.slide_window(
234
+ new_img_width, new_img_height, [(window_size, window_size)],
235
+ [(window_size, window_size)])
236
+ for patch_id, center_lf_point in enumerate(center_list):
237
+ x, y, patch_w, patch_h = center_lf_point
238
+ big_patch = self.patch_crop(img_for_crop, y, x, patch_h,
239
+ patch_w)
240
+ patches.append(big_patch)
241
+ if (patch_id + 1) % x_num == 0:
242
+ newlines.append(patch_id)
243
+
244
+ if newlines and newlines[-1] == len(patches) - 1:
245
+ newlines.pop()
246
+
247
+ return img, patches, [i in newlines for i in range(len(patches))] if len(patches) > 0 else None
248
+
249
+
250
+
251
+
252
+ class Step3VLProcessor(ProcessorMixin):
253
+ # Align ProcessorMixin with our custom components.
254
+ # We only have an image processor (not a feature extractor) plus a tokenizer.
255
+ attributes = ["tokenizer"]
256
+ tokenizer_class = "AutoTokenizer"
257
+
258
+ def __init__(
259
+ self,
260
+ tokenizer=None,
261
+ chat_template=None,
262
+ **kwargs
263
+ ) -> None:
264
+ self.image_size = 728
265
+ self.patch_size = 504
266
+
267
+ self.image_preprocessor = Step3VisionProcessor(self.image_size,
268
+ "bilinear",
269
+ self.patch_size)
270
+
271
+ self.num_image_feature_size = 169
272
+ self.num_patch_feature_size = 81
273
+ self.image_token = "<im_patch>"
274
+ self.image_feature_placeholder = (self.image_token *
275
+ self.num_image_feature_size)
276
+ self.patch_feature_placeholder = (self.image_token *
277
+ self.num_patch_feature_size)
278
+ super().__init__(tokenizer=tokenizer, chat_template=chat_template, **kwargs)
279
+ self.patcher = ImagePatcher()
280
+
281
+ @property
282
+ def image_token_id(self) -> int:
283
+ return self.tokenizer.get_vocab()[self.image_token]
284
+
285
+ def get_num_image_tokens(self, img_width: int, img_height: int) -> int:
286
+ num_patches, num_newlines = self.patcher.get_num_patches(
287
+ img_width, img_height)
288
+
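+ # Example (illustrative input size, not from the repo docs): a 1600x900 image is tiled
+ # into 6 patches of 504x504 with one patch-newline marker, so this returns
+ # 6 * (81 + 2) + 169 + 2 + 1 = 670 image tokens.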
289
+ return num_patches * (
290
+ self.num_patch_feature_size +
291
+ 2) + self.num_image_feature_size + 2 + num_newlines
292
+
293
+ def _split_images(self,
294
+ images: list[Image.Image]) -> list[ImageWithPatches]:
295
+ result = []
296
+ for img in images:
297
+ result.append(self.patcher(img))
298
+ return result
299
+
300
+ def _convert_images_to_pixel_values(
301
+ self,
302
+ images: list[Image.Image],
303
+ is_patch: bool = False,
304
+ ) -> list[torch.Tensor]:
305
+ return [
306
+ self.image_preprocessor(img, is_patch=is_patch)["pixel_values"]
307
+ for img in images
308
+ ]
309
+
310
+ def _get_patch_repl(
311
+ self,
312
+ num_patches: int,
313
+ patch_newline_mask: list[bool] | None,
314
+ ) -> tuple[str, list[int]]:
315
+ text = ""
316
+ token_ids = []
317
+ for i in range(num_patches):
318
+ assert len(patch_newline_mask) == num_patches
319
+ text += f"<patch_start>{self.patch_feature_placeholder}<patch_end>"
320
+ token_ids.extend(
321
+ [self.tokenizer.convert_tokens_to_ids("<patch_start>")] +
322
+ [self.image_token_id] * self.num_patch_feature_size +
323
+ [self.tokenizer.convert_tokens_to_ids("<patch_end>")])
324
+ if patch_newline_mask and patch_newline_mask[i]:
325
+ text += "<patch_newline>"
326
+ token_ids.append(
327
+ self.tokenizer.convert_tokens_to_ids("<patch_newline>"))
328
+ return text, token_ids
329
+
330
+ def _get_image_repl(
331
+ self,
332
+ num_images: int,
333
+ ) -> tuple[str, list[int]]:
334
+ text = f"<im_start>{self.image_feature_placeholder}<im_end>"
335
+ token_ids = [
336
+ self.tokenizer.convert_tokens_to_ids("<im_start>")
337
+ ] + [self.image_token_id] * self.num_image_feature_size + [
338
+ self.tokenizer.convert_tokens_to_ids("<im_end>")
339
+ ]
340
+ return text * num_images, token_ids * num_images
341
+
342
+ def _get_image_repl_features(
343
+ self,
344
+ num_images: int,
345
+ num_patches: int,
346
+ patch_new_line_idx: Optional[list[bool]],
347
+ ) -> tuple[str, list[int]]:
348
+ if num_patches > 0:
349
+ patch_repl, patch_repl_ids = self._get_patch_repl(
350
+ num_patches, patch_new_line_idx)
351
+ else:
352
+ patch_repl = ""
353
+ patch_repl_ids = []
354
+ image_repl, image_repl_ids = self._get_image_repl(num_images)
355
+ return patch_repl + image_repl, patch_repl_ids + image_repl_ids
356
+
357
+ def replace_placeholder(self, text: str, placeholder: str,
358
+ repls: list[str]) -> str:
359
+ parts = text.split(placeholder)
360
+
361
+ if len(parts) - 1 != len(repls):
362
+ raise ValueError(
363
+ "The number of placeholders does not match the number of replacements." # noqa: E501
364
+ )
365
+
366
+ result = [parts[0]]
367
+ for i, repl in enumerate(repls):
368
+ result.append(repl)
369
+ result.append(parts[i + 1])
370
+
371
+ return "".join(result)
372
+
373
+ def __call__(
374
+ self,
375
+ text: Optional[Union[str, list[str]]] = None,
376
+ images: ImageInput | None = None,
377
+ return_tensors: Optional[Union[str, TensorType]] = None,
378
+ **kwargs,
379
+ ) -> BatchFeature:
380
+
381
+ if images is not None:
382
+ images = self.image_preprocessor.fetch_images(images)
383
+ if text is None:
384
+ text = []
385
+ if not isinstance(text, list):
386
+ text = [text]
387
+ if images is None:
388
+ images = []
389
+ elif not isinstance(images, list):
390
+ images = [images]
391
+ elif isinstance(images[0], list):
392
+ images = images[0]
393
+
394
+ if len(images) == 0:
395
+ image_inputs = {}
396
+ text_inputs = self.tokenizer(text)
397
+ else:
398
+ splitted_images_data = self._split_images(images)
399
+ pixel_values_lst = []
400
+ patch_pixel_values_lst = []
401
+ patch_newline_mask_lst = []
402
+ image_repl_str_lst = []
403
+ image_repl_ids_lst = []
404
+ num_patches = []
405
+ for raw_img, img_patches, patch_newline_mask in splitted_images_data: # noqa: E501
406
+ pixel_values_lst.extend(
407
+ self._convert_images_to_pixel_values([raw_img]))
408
+
409
+ if len(img_patches) > 0:
410
+ patch_pixel_values_lst.extend(
411
+ self._convert_images_to_pixel_values(img_patches,
412
+ is_patch=True))
413
+ num_patches.append(len(img_patches))
414
+
415
+ image_repl_str, image_repl_ids = self._get_image_repl_features(
416
+ 1, len(img_patches), patch_newline_mask)
417
+ image_repl_str_lst.append(image_repl_str)
418
+ image_repl_ids_lst.extend(image_repl_ids)
419
+
420
+ if patch_newline_mask is not None:
421
+ patch_newline_mask_lst.extend(patch_newline_mask)
422
+
423
+ image_inputs = {
424
+ "pixel_values": torch.cat(pixel_values_lst),
425
+ "num_patches": num_patches,
426
+ }
427
+ if patch_pixel_values_lst:
428
+ image_inputs["patch_pixel_values"] = torch.cat(
429
+ patch_pixel_values_lst)
430
+ if patch_newline_mask_lst:
431
+ image_inputs["patch_newline_mask"] = torch.tensor(
432
+ patch_newline_mask_lst, dtype=torch.bool)
433
+
434
+ text = [
435
+ self.replace_placeholder(t, self.image_token,
436
+ image_repl_str_lst) for t in text
437
+ ]
438
+ text_inputs = self.tokenizer(text)
439
+
440
+ return BatchFeature(
441
+ {
442
+ **text_inputs,
443
+ **image_inputs,
444
+ },
445
+ tensor_type=return_tensors,
446
+ )
447
+
448
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Gemma
449
+ def batch_decode(self, *args, **kwargs):
450
+ """
451
+ This method forwards all its arguments to GemmaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
452
+ refer to the docstring of this method for more information.
453
+ """
454
+ return self.tokenizer.batch_decode(*args, **kwargs)
455
+
456
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Gemma
457
+ def decode(self, *args, **kwargs):
458
+ """
459
+ This method forwards all its arguments to GemmaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
460
+ the docstring of this method for more information.
461
+ """
462
+ return self.tokenizer.decode(*args, **kwargs)
463
+
464
+ __all__ = ["Step3VLProcessor"]
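
For clarity, `replace_placeholder` above is a plain split-and-interleave: each occurrence of the image placeholder in the prompt is swapped for the per-image `<im_start> ... <im_end>` replacement string built by `_get_image_repl_features`. A standalone sketch of the same logic (hypothetical helper name, not part of the file above):

```python
def splice_placeholders(text: str, placeholder: str, repls: list[str]) -> str:
    # Split the prompt on the placeholder and interleave one replacement per image.
    parts = text.split(placeholder)
    if len(parts) - 1 != len(repls):
        raise ValueError("number of placeholders does not match number of replacements")
    pieces = [parts[0]]
    for repl, tail in zip(repls, parts[1:]):
        pieces.extend((repl, tail))
    return "".join(pieces)


assert splice_placeholders("a <img> b <img> c", "<img>", ["X", "Y"]) == "a X b Y c"
```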
processor_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "auto_map": {
+     "AutoProcessor": "processing_step3.Step3VLProcessor"
+   }
+ }
+
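
The `auto_map` entry lets `AutoProcessor` resolve the custom processor class when the repository is loaded with `trust_remote_code=True`. A minimal loading sketch, not part of the commit; the chat checkpoint id from the model zoo is used for illustration, and the dummy image stands in for real input:

```python
from PIL import Image
from transformers import AutoProcessor

# Resolves to Step3VLProcessor through the auto_map entry above.
processor = AutoProcessor.from_pretrained(
    "stepfun-ai/Step3-VL-10B", trust_remote_code=True)

# Build a prompt containing the processor's image placeholder token and encode it
# together with an image; __call__ returns tokenized text plus pixel_values,
# num_patches and, when the image is tiled, patch-level tensors.
image = Image.new("RGB", (448, 448))  # placeholder image, for illustration only
prompt = f"Describe this image: {processor.image_token}"
inputs = processor(text=prompt, images=image, return_tensors="pt")
print(sorted(inputs.keys()))
```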
vision_encoder.py ADDED
@@ -0,0 +1,468 @@
+ from typing import Literal, Optional, Tuple, Union
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from einops import rearrange, repeat
+ from transformers.activations import ACT2FN
+
+ from configuration_step_vl import StepRoboticsVisionEncoderConfig
+
+
+ def rotate_half(x: torch.Tensor) -> torch.Tensor:
+     """Rotate last dimension halves (used by RoPE)."""
+     x = rearrange(x, "... (d r) -> ... d r", r=2)
+     x1, x2 = x.unbind(dim=-1)
+     x = torch.stack((-x2, x1), dim=-1)
+     return rearrange(x, "... d r -> ... (d r)")
+
+
+ def apply_rotary_emb(freqs: torch.Tensor,
+                      t: torch.Tensor,
+                      start_index: int = 0,
+                      scale: float = 1.0,
+                      seq_dim: int = -2) -> torch.Tensor:
+     """Apply 2D rotary embeddings to queries / keys."""
+     dtype = t.dtype
+
+     if t.ndim == 3:
+         seq_len = t.shape[seq_dim]
+         freqs = freqs[-seq_len:]
+
+     rot_dim = freqs.shape[-1]
+     end_index = start_index + rot_dim
+     assert rot_dim <= t.shape[-1], (
+         f"feature dimension {t.shape[-1]} is too small for rot_dim {rot_dim}")
+
+     t_left, t, t_right = (
+         t[..., :start_index],
+         t[..., start_index:end_index],
+         t[..., end_index:],
+     )
+     t = (t * freqs.cos() * scale) + (rotate_half(t) * freqs.sin() * scale)
+     out = torch.cat((t_left, t, t_right), dim=-1)
+     return out.type(dtype)
+
+
+ class EncoderRope2D(nn.Module):
+     """Cacheable 2D rotary positional embedding."""
+
+     def __init__(
+         self,
+         dim: int,
+         max_grid_height: int,
+         max_grid_width: int,
+         use_cls_token: bool = False,
+         freqs_for: Literal["lang", "pixel", "constant"] = "lang",
+         theta: Union[int, float] = 10000,
+         max_freq: int = 10,
+         num_freqs: int = 1,
+         theta_rescale_factor: float = 1.0,
+     ):
+         super().__init__()
+         self.dim = dim
+         self.max_grid_height = max_grid_height
+         self.max_grid_width = max_grid_width
+         self.use_cls_token = use_cls_token
+         self.theta = theta * theta_rescale_factor**(dim / (dim - 2))
+         self.freqs_for = freqs_for
+         self.max_freq = max_freq
+         self.num_freqs = num_freqs
+         cache = self._compute_2d_freqs()
+         self.register_buffer("freqs_cache", cache, persistent=False)
+
+     def _compute_inv_freq(self, base: Union[int, float],
+                           dim: int) -> torch.Tensor:
+         if self.freqs_for == "lang":
+             freqs = 1.0 / (base**(
+                 torch.arange(0, dim, 2)[:(dim // 2)].float() / dim))
+         elif self.freqs_for == "pixel":
+             freqs = torch.linspace(1.0, self.max_freq / 2, dim // 2) * torch.pi
+         elif self.freqs_for == "constant":
+             freqs = torch.ones(self.num_freqs).float()
+         else:
+             raise ValueError(f"Unsupported freqs_for value: {self.freqs_for}")
+         return freqs
+
+     def _compute_freqs(self, t: torch.Tensor, inv_freq: torch.Tensor):
+         freqs = torch.einsum("..., f -> ... f", t.type(inv_freq.dtype),
+                              inv_freq)
+         freqs = repeat(freqs, "... n -> ... (n r)", r=2)
+         return freqs
+
+     def _compute_2d_freqs(self) -> torch.Tensor:
+         grid_h_range = torch.arange(self.max_grid_height, dtype=torch.float)
+         grid_w_range = torch.arange(self.max_grid_width, dtype=torch.float)
+         if self.use_cls_token:
+             grid_h_range += 1
+             grid_w_range += 1
+         inv_freq = self._compute_inv_freq(self.theta, self.dim // 2)
+         freqs_h = self._compute_freqs(grid_h_range, inv_freq)[:, None].expand(
+             self.max_grid_height, self.max_grid_width, -1)
+         freqs_w = self._compute_freqs(grid_w_range, inv_freq)[None, :].expand(
+             self.max_grid_height, self.max_grid_width, -1)
+         freqs = torch.cat([freqs_w, freqs_h], dim=-1).reshape(
+             self.max_grid_height * self.max_grid_width, -1)
+         if self.use_cls_token:
+             freqs = torch.cat([torch.zeros(1, freqs.shape[-1]), freqs], dim=0)
+         freqs = freqs[None, None, ...]
+         return freqs
+
+     def forward(self, q: torch.Tensor, k: torch.Tensor,
+                 grid_hw: tuple[int, int]):
+         # If grid matches cached shape we reuse directly to avoid recomputation.
+         if grid_hw[0] != self.max_grid_height or grid_hw[1] != self.max_grid_width:
+             rows = torch.arange(grid_hw[0], device=q.device).view(-1, 1)
+             cols = torch.arange(grid_hw[1], device=q.device).view(1, -1)
+             positions = (rows * self.max_grid_width + cols).reshape(-1).to(
+                 torch.long)
+             if self.use_cls_token:
+                 positions = torch.cat(
+                     [torch.zeros(1, device=q.device, dtype=torch.long),
+                      positions + 1], dim=0)
+             freqs = self.freqs_cache.index_select(2, positions)
+         else:
+             freqs = self.freqs_cache
+         q = apply_rotary_emb(freqs, q)
+         k = apply_rotary_emb(freqs, k)
+         return q, k
+
+
+ class EncoderLayerScale(nn.Module):
+     """Per-channel residual scaling used when ls_init_value is set."""
+
+     def __init__(self, dim: int, init_values: float):
+         super().__init__()
+         self.gamma = nn.Parameter(torch.full((dim,), init_values))
+
+     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:  # (B, L, D)
+         return hidden_states * self.gamma
+
+
+ class EncoderMLP(nn.Module):
+     """Feed-forward network used inside each transformer block."""
+
+     def __init__(self, hidden_size: int, intermediate_size: int,
+                  hidden_act: str):
+         super().__init__()
+         self.c_fc = nn.Linear(hidden_size, intermediate_size, bias=True)
+         self.act_fn = ACT2FN[hidden_act]
+         self.c_proj = nn.Linear(intermediate_size, hidden_size, bias=True)
+
+     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
+         hidden_states = self.c_proj(self.act_fn(self.c_fc(hidden_states)))
+         return hidden_states
+
+
+ class EncoderVisionAttention(nn.Module):
+     """Multi-head self attention with optional 2D RoPE."""
+
+     def __init__(
+         self,
+         hidden_size: int,
+         num_heads: int,
+         max_grid_height: int,
+         max_grid_width: int,
+         use_cls_token: bool = False,
+         use_rope2d: bool = True,
+         rope_theta: Union[int, float] = 10000,
+         rope_max_freq: int = 10,
+         rope_num_freqs: int = 1,
+         rope_theta_rescale_factor: float = 1.0,
+         rope_freqs_for: Literal["lang", "pixel", "constant"] = "lang",
+     ):
+         super().__init__()
+         if hidden_size % num_heads != 0:
+             raise ValueError(
+                 f"hidden_size ({hidden_size}) must be divisible by num_heads ({num_heads})."
+             )
+         self.num_heads = num_heads
+         self.head_dim = hidden_size // num_heads
+         self.scale = self.head_dim**-0.5
+         self.in_proj_weight = nn.Parameter(torch.zeros(hidden_size * 3, hidden_size))
+         self.in_proj_bias = nn.Parameter(torch.zeros(hidden_size * 3))
+         self.out_proj = nn.Linear(hidden_size, hidden_size, bias=True)
+
+         self.rope = None
+         if use_rope2d:
+             self.rope = EncoderRope2D(
+                 dim=self.head_dim,
+                 max_grid_height=max_grid_height,
+                 max_grid_width=max_grid_width,
+                 use_cls_token=use_cls_token,
+                 theta=rope_theta,
+                 max_freq=rope_max_freq,
+                 num_freqs=rope_num_freqs,
+                 theta_rescale_factor=rope_theta_rescale_factor,
+                 freqs_for=rope_freqs_for,
+             )
+
+     def forward(self, hidden_states: torch.Tensor, grid_hw: tuple[int, int]) -> torch.Tensor:
+         bsz, seq_len, _ = hidden_states.shape
+         qkv = F.linear(
+             hidden_states,
+             self.in_proj_weight,
+             self.in_proj_bias,
+         )
+         q, k, v = qkv.chunk(3, dim=-1)
+
+         q = q.view(bsz, seq_len, self.num_heads,
+                    self.head_dim).transpose(1, 2)
+         k = k.view(bsz, seq_len, self.num_heads,
+                    self.head_dim).transpose(1, 2)
+         if self.rope is not None:
+             q, k = self.rope(q, k, grid_hw=grid_hw)
+         v = v.view(bsz, seq_len, self.num_heads,
+                    self.head_dim).transpose(1, 2)
+
+         attn_output = F.scaled_dot_product_attention(
+             q, k, v, is_causal=False, scale=self.scale)
+         attn_output = attn_output.transpose(1, 2).reshape(
+             bsz, seq_len, self.num_heads * self.head_dim)
+         return self.out_proj(attn_output)
+
+
+ class EncoderVisionBlock(nn.Module):
+     """A single Vision Transformer block (self-attention + MLP)."""
+
+     def __init__(
+         self,
+         hidden_size: int,
+         num_heads: int,
+         mlp_ratio: float,
+         hidden_act: str,
+         layer_norm_eps: float,
+         ls_init_value: Optional[float] = None,
+         max_grid_height: Optional[int] = None,
+         max_grid_width: Optional[int] = None,
+         use_cls_token: bool = False,
+         use_rope2d: bool = True,
+         rope_kwargs: Optional[dict] = None,
+     ):
+         super().__init__()
+         rope_kwargs = rope_kwargs or {}
+         self.attn = EncoderVisionAttention(
+             hidden_size,
+             num_heads,
+             max_grid_height=max_grid_height,
+             max_grid_width=max_grid_width,
+             use_cls_token=use_cls_token,
+             use_rope2d=use_rope2d,
+             **rope_kwargs,
+         )
+         self.ln_1 = nn.LayerNorm(hidden_size, eps=layer_norm_eps)
+         self.ln_2 = nn.LayerNorm(hidden_size, eps=layer_norm_eps)
+
+         intermediate = int(hidden_size * mlp_ratio)
+         self.mlp = EncoderMLP(hidden_size, intermediate, hidden_act)
+
+         self.ls_1 = (EncoderLayerScale(hidden_size, ls_init_value)
+                      if ls_init_value is not None else nn.Identity())
+         self.ls_2 = (EncoderLayerScale(hidden_size, ls_init_value)
+                      if ls_init_value is not None else nn.Identity())
+
+     def forward(self, hidden_states: torch.Tensor,
+                 grid_hw: tuple[int, int]) -> torch.Tensor:
+         residual = hidden_states
+         hidden_states = self.ln_1(hidden_states)
+         hidden_states = self.attn(hidden_states, grid_hw=grid_hw)
+         hidden_states = residual + self.ls_1(hidden_states)
+
+         residual = hidden_states
+         hidden_states = self.ln_2(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + self.ls_2(hidden_states)
+         return hidden_states
+
+
+ class EncoderVisionTransformer(nn.Module):
+     """Stack of encoder blocks parameterised by StepRoboticsVisionEncoderConfig."""
+
+     def __init__(
+         self,
+         embed_dim: int,
+         depth: int,
+         num_heads: int,
+         mlp_ratio: float,
+         hidden_act: str,
+         layer_norm_eps: float,
+         ls_init_value: Optional[float] = None,
+         max_grid_height: Optional[int] = None,
+         max_grid_width: Optional[int] = None,
+         use_cls_token: bool = False,
+         use_rope2d: bool = True,
+         rope_kwargs: Optional[dict] = None,
+     ):
+         super().__init__()
+         self.layers = depth
+         rope_kwargs = rope_kwargs or {}
+         self.resblocks = nn.ModuleList([
+             EncoderVisionBlock(embed_dim, num_heads, mlp_ratio, hidden_act,
+                                layer_norm_eps,
+                                max_grid_height=max_grid_height,
+                                max_grid_width=max_grid_width,
+                                use_cls_token=use_cls_token,
+                                use_rope2d=use_rope2d,
+                                ls_init_value=ls_init_value,
+                                rope_kwargs=rope_kwargs)
+             for _ in range(depth)
+         ])
+
+     def forward(self,
+                 hidden_states: torch.Tensor,
+                 grid_hw: tuple[int, int],
+                 layer_idx: int = -1) -> torch.Tensor:
+         stop_idx = (self.layers + layer_idx) % self.layers
+         for idx, block in enumerate(self.resblocks):
+             hidden_states = block(hidden_states, grid_hw=grid_hw)
+             if idx == stop_idx:
+                 break
+         return hidden_states
+
+
+ class StepRoboticsVisionEncoder(nn.Module):
+     """
+     Vision encoder built from StepRoboticsVisionEncoderConfig.
+
+     The encoder performs patch embedding followed by a stack of transformer
+     blocks. Only the config fields defined in StepRoboticsVisionEncoderConfig (and
+     StepRoboticVLConfig.vision_config) are expected.
+     """
+
+     def __init__(self, config: StepRoboticsVisionEncoderConfig):
+         super().__init__()
+         self.config = config
+
+         # Align commonly used attributes so downstream code (e.g. StepRoboticVL)
+         # can access them without extra renaming.
+         self.hidden_size = config.width
+         self.num_heads = config.heads
+         self.num_hidden_layers = config.layers
+         self.patch_size = config.patch_size
+         self.image_size = config.image_size
+         self.use_cls_token = getattr(config, "use_cls_token", False)
+         self.use_rope2d = getattr(config, "use_rope2d", True)
+         self.use_abs_posemb = getattr(config, "use_abs_posemb", True)
+         self.layer_norm_eps = config.layer_norm_eps
+         self.mlp_ratio = getattr(config, "mlp_ratio", 8960 / 1536)
+         self.ls_init_value = getattr(config, "ls_init_value", None)
+         self.hidden_act = config.hidden_act
+         self.use_ln_pre = getattr(config, "use_ln_pre", False)
+         self.use_ln_post = getattr(config, "use_ln_post", True)
+
+         # Patch embedding.
+         self.conv1 = nn.Conv2d(in_channels=config.num_channels,
+                                out_channels=self.hidden_size,
+                                kernel_size=self.patch_size,
+                                stride=self.patch_size,
+                                bias=False)
+
+         self.ln_pre = nn.LayerNorm(self.hidden_size, eps=self.layer_norm_eps) if self.use_ln_pre else nn.Identity()
+         self.ln_post = nn.LayerNorm(self.hidden_size, eps=self.layer_norm_eps) if self.use_ln_post else nn.Identity()
+
+         grid_size = self.image_size // self.patch_size
+         self.base_grid = (grid_size, grid_size)
+
+         if self.use_cls_token:
+             self.class_embedding = nn.Parameter(
+                 torch.randn(self.hidden_size) * (self.hidden_size**-0.5))
+         else:
+             self.class_embedding = None
+
+         if self.use_abs_posemb:
+             self.posemb_grid_size = self.image_size // self.patch_size
+             self.positional_embedding = nn.Parameter(
+                 (self.hidden_size**-0.5) * torch.randn(
+                     int(self.use_cls_token) + self.posemb_grid_size**2,
+                     self.hidden_size,
+                 ))
+
+         self.transformer = EncoderVisionTransformer(
+             embed_dim=self.hidden_size,
+             depth=self.num_hidden_layers,
+             num_heads=self.num_heads,
+             mlp_ratio=self.mlp_ratio,
+             hidden_act=self.hidden_act,
+             layer_norm_eps=self.layer_norm_eps,
+             ls_init_value=self.ls_init_value,
+             max_grid_height=self.base_grid[0],
+             max_grid_width=self.base_grid[1],
+             use_cls_token=self.use_cls_token,
+             use_rope2d=self.use_rope2d,
+             rope_kwargs={
+                 "rope_theta": getattr(config, "rope_theta", 10000),
+                 "rope_max_freq": getattr(config, "rope_max_freq", 10),
+                 "rope_num_freqs": getattr(config, "rope_num_freqs", 1),
+                 "rope_theta_rescale_factor":
+                 getattr(config, "rope_theta_rescale_factor", 1.0),
+                 "rope_freqs_for": getattr(config, "rope_freqs_for", "lang"),
+             },
+         )
+         self.vit_downsampler1 = nn.Conv2d(self.hidden_size,
+                                           self.hidden_size * 2,
+                                           kernel_size=3,
+                                           stride=2,
+                                           padding=1)
+         self.vit_downsampler2 = nn.Conv2d(self.hidden_size * 2,
+                                           self.hidden_size * 4,
+                                           kernel_size=3,
+                                           stride=2,
+                                           padding=1)
+
+     def sample_abs_posemb(self, grid_h: int, grid_w: int):
+         if self.posemb_grid_size == grid_h and self.posemb_grid_size == grid_w:
+             return self.positional_embedding[None, ...]
+
+         pos_embed = self.positional_embedding
+         if self.use_cls_token:
+             cls_token_embed, pos_embed = pos_embed[:1], pos_embed[1:]
+
+         pos_embed = (pos_embed.reshape(1, self.posemb_grid_size,
+                                        self.posemb_grid_size,
+                                        -1).permute(0, 3, 1, 2).contiguous())
+         pos_embed = F.interpolate(pos_embed,
+                                   size=(grid_h, grid_w),
+                                   mode="bilinear",
+                                   align_corners=False)
+         pos_embed = pos_embed.permute(0, 2, 3, 1).reshape(-1, self.hidden_size)
+
+         if self.use_cls_token:
+             pos_embed = torch.cat([cls_token_embed, pos_embed], dim=0)
+
+         return pos_embed[None, ...]
+
+     def forward(self,
+                 pixel_values: torch.Tensor,
+                 layer_idx: int = -1,
+                 strip_cls_token: bool = False) -> torch.Tensor:
+         """
+         Args:
+             pixel_values: Image tensor of shape (B, C, H, W).
+             layer_idx: Negative indices stop after a given block (e.g., -1 uses all blocks).
+             strip_cls_token: If True and cls token is used, remove it from output.
+         """
+         bsz, _, height, width = pixel_values.shape
+         grid_h, grid_w = height // self.patch_size, width // self.patch_size
+
+         hidden_state = self.conv1(pixel_values)  # (B, D, Gh, Gw)
+         hidden_state = hidden_state.flatten(2).transpose(1, 2)  # (B, Gh*Gw, D)
+
+         if self.use_cls_token:
+             cls_token = self.class_embedding.view(1, 1,
+                                                   -1).expand(bsz, -1, -1)
+             hidden_state = torch.cat([cls_token, hidden_state], dim=1)
+
+         if self.use_abs_posemb:
+             pos_emb = self.sample_abs_posemb(grid_h, grid_w)
+             hidden_state = hidden_state + pos_emb
+         hidden_state = self.ln_pre(hidden_state)
+         hidden_state = self.transformer(hidden_state, grid_hw=(grid_h, grid_w), layer_idx=layer_idx)
+
+         if self.use_ln_post:
+             hidden_state = self.ln_post(hidden_state)
+
+         if strip_cls_token and self.use_cls_token:
+             hidden_state = hidden_state[:, 1:, :]
+
+         return hidden_state
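
For a quick sanity check of the encoder in isolation, the sketch below drives it with a stand-in config object. The real `StepRoboticsVisionEncoderConfig` lives in `configuration_step_vl.py` (not shown in this commit), so a `SimpleNamespace` carrying only the fields the encoder reads is used instead; all sizes are illustrative, not the released configuration, and the script assumes it is run from a local copy of this repository so that `configuration_step_vl.py` is importable:

```python
from types import SimpleNamespace

import torch

from vision_encoder import StepRoboticsVisionEncoder

# Stand-in for StepRoboticsVisionEncoderConfig with only the fields the encoder reads.
config = SimpleNamespace(
    width=256,            # hidden size (config.width)
    heads=4,              # attention heads (config.heads)
    layers=2,             # transformer depth (config.layers)
    patch_size=14,
    image_size=224,
    num_channels=3,
    layer_norm_eps=1e-6,
    hidden_act="gelu",
    ls_init_value=0.1,    # per-channel residual scaling init
)

encoder = StepRoboticsVisionEncoder(config).eval()
pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = encoder(pixel_values)
print(features.shape)  # (1, (224 // 14) ** 2, 256) == (1, 256, 256)
```

The output keeps one token per 14x14 patch; the `vit_downsampler1` / `vit_downsampler2` convolutions defined in `__init__` are not applied inside `forward` and are presumably consumed by the surrounding VL model.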