Update README.md

---
license: apache-2.0
---

<div align="center">

<div align="center">
<h1 style="border-bottom: none;">
<img src="figures/stepfun.svg" width="30" style="vertical-align: bottom; margin-right: 10px;" />
STEP3-VL-10B
</h1>
</div>

[Hugging Face](https://huggingface.co/collections/stepfun-ai/Step3-VL-10B)
[ModelScope](https://modelscope.cn/collections/stepfun-ai/Step3-VL-10B)

[**Introduction**](#introduction) | [**Performance**](#performance) | [**Quick Start**](#quick-start) | [**Citation**](#citation)

</div>

## Introduction

**STEP3-VL-10B** is a lightweight open-source foundation model designed to redefine the trade-off between compact efficiency and frontier-level multimodal intelligence. Despite its compact **10B-parameter footprint**, STEP3-VL-10B excels in **visual perception**, **complex reasoning**, and **human-centric alignment**. It consistently outperforms models at or below the 10B scale and rivals or surpasses significantly larger open-weight models (**10×–20× its size**) such as GLM-4.6V (106B-A12B) and Qwen3-VL-Thinking (235B-A22B), as well as top-tier proprietary flagships like Gemini 2.5 Pro and Seed-1.5-VL.

<div align="center">
<img src="figures/performance.png" alt="Performance Comparison" width="800"/>
<p><i>Figure 1: Performance comparison of STEP3-VL-10B against SOTA multimodal foundation models. SeRe: Sequential Reasoning; PaCoRe: Parallel Coordinated Reasoning.</i></p>
</div>

The success of STEP3-VL-10B is driven by two key strategic designs:

1. **Unified Pre-training on a High-Quality Multimodal Corpus:** A single-stage, fully unfrozen training strategy on a 1.2T-token multimodal corpus, focused on two foundational capabilities: **reasoning** (e.g., general knowledge and education-centric tasks) and **perception** (e.g., grounding, counting, OCR, and GUI interactions). By jointly optimizing the Perception Encoder and the Qwen3-8B decoder, STEP3-VL-10B establishes intrinsic vision-language synergy.
2. **Scaled Multimodal Reinforcement Learning and Parallel Reasoning:** Frontier capabilities are unlocked through a rigorous post-training pipeline comprising two-stage supervised finetuning (SFT) and **over 1,400 iterations of RL** with both verifiable rewards (RLVR) and human feedback (RLHF). Beyond sequential reasoning, we adopt **Parallel Coordinated Reasoning (PaCoRe)**, which allocates additional test-time compute to aggregate evidence from parallel visual exploration.

## Model Zoo

| Model Name | Type | Hugging Face | ModelScope |
|:-----------|:-----|:------------:|:----------:|
| **STEP3-VL-10B-Base** | Base | [Download](https://huggingface.co/stepfun-ai/Step3-VL-10B-Base) | [Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B-Base) |
| **STEP3-VL-10B** | Chat | [Download](https://huggingface.co/stepfun-ai/Step3-VL-10B) | [Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B) |
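
If you prefer to pre-fetch the weights rather than downloading them on first use, a minimal sketch with `huggingface_hub` (the local directory is illustrative; adjust it to your setup):

```python
from huggingface_hub import snapshot_download

# Download the chat-model weights to a local directory of your choice.
local_dir = snapshot_download(
    repo_id="stepfun-ai/Step3-VL-10B",
    local_dir="./Step3-VL-10B",
)
print(f"Model files downloaded to: {local_dir}")
```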

## Performance

STEP3-VL-10B delivers best-in-class performance across major multimodal benchmarks, establishing a new performance standard for compact models. The results demonstrate that STEP3-VL-10B is the **most powerful open-source model in the 10B parameter class**.

### Comparison with Larger Models (10×–20× Larger)

| Benchmark | STEP3-VL-10B (SeRe) | STEP3-VL-10B (PaCoRe) | GLM-4.6V (106B-A12B) | Qwen3-VL (235B-A22B) | Gemini-2.5-Pro | Seed-1.5-VL |
|:----------|:-------------------:|:---------------------:|:--------------------:|:--------------------:|:--------------:|:-----------:|
| **MMMU** | 78.11 | **80.11** | 75.20 | 78.70 | 83.89 | 79.11 |
| **MathVista** | 83.97 | **85.50** | 83.51 | 85.10 | 83.88 | 85.60 |
| **MathVision** | 70.81 | **75.95** | 63.50 | 72.10 | 73.30 | 68.70 |
| **MMBench (EN)** | 92.05 | 92.38 | 92.75 | 92.70 | **93.19** | 92.11 |
| **MMStar** | 77.48 | 77.64 | 75.30 | 76.80 | **79.18** | 77.91 |
| **OCRBench** | 86.75 | **89.00** | 86.20 | 87.30 | 85.90 | 85.20 |
| **AIME 2025** | 87.66 | **94.43** | 71.88 | 83.59 | 83.96 | 64.06 |
| **HMMT 2025** | 78.18 | **92.14** | 57.29 | 67.71 | 65.68 | 51.30 |
| **LiveCodeBench** | 75.77 | **76.43** | 48.71 | 69.45 | 72.01 | 57.10 |

<!-- > **Note:** **SeRe** (Sequential Reasoning) uses a max length of 64K tokens; **PaCoRe** (Parallel Coordinated Reasoning) synthesizes 16 SeRe rollouts with a max length of 128K tokens. -->

> **Note on Inference Modes:**
>
> **SeRe (Sequential Reasoning):** The standard inference mode using sequential generation (Chain-of-Thought) with a max length of 64K tokens.
>
> **PaCoRe (Parallel Coordinated Reasoning):** An advanced mode that scales test-time compute. It aggregates evidence from **16 parallel rollouts** to synthesize a final answer, utilizing a max context length of 128K tokens.
>
> *Unless otherwise stated, scores below refer to the standard SeRe mode. Higher scores achieved via PaCoRe are explicitly marked.*

### Comparison with Open-Source Models (7B–10B)

| Category | Benchmark | STEP3-VL-10B | GLM-4.6V-Flash (9B) | Qwen3-VL-Thinking (8B) | InternVL-3.5 (8B) | MiMo-VL-RL-2508 (7B) |
|:---------|:----------|:------------:|:-------------------:|:----------------------:|:-----------------:|:--------------------:|
| **STEM Reasoning** | MMMU | **78.11** | 71.17 | 73.53 | 71.69 | 71.14 |
| | MathVision | **70.81** | 54.05 | 59.60 | 52.05 | 59.65 |
| | MathVista | **83.97** | 82.85 | 78.50 | 76.78 | 79.86 |
| | PhyX | **59.45** | 52.28 | 57.67 | 50.51 | 56.00 |
| **Recognition** | MMBench (EN) | **92.05** | 91.04 | 90.55 | 88.20 | 89.91 |
| | MMStar | **77.48** | 74.26 | 73.58 | 69.83 | 72.93 |
| | ReMI | **67.29** | 60.75 | 57.17 | 52.65 | 63.13 |
| **OCR & Document** | OCRBench | **86.75** | 85.97 | 82.85 | 83.70 | 85.40 |
| | AI2D | **89.35** | 88.93 | 83.32 | 82.34 | 84.96 |
| **GUI Grounding** | ScreenSpot-V2 | 92.61 | 92.14 | **93.60** | 84.02 | 90.82 |
| | ScreenSpot-Pro | **51.55** | 45.68 | 46.60 | 15.39 | 34.84 |
| | OSWorld-G | **59.02** | 54.71 | 56.70 | 31.91 | 50.54 |
| **Spatial** | BLINK | **66.79** | 64.90 | 62.78 | 55.40 | 62.57 |
| | All-Angles-Bench | **57.21** | 53.24 | 45.88 | 45.29 | 51.62 |
| **Code** | HumanEval-V | **66.05** | 29.26 | 26.94 | 24.31 | 31.96 |

### Key Capabilities

* **STEM Reasoning:** Achieves **94.43%** on AIME 2025 and **75.95%** on MathVision (with PaCoRe), demonstrating complex-reasoning capability that outperforms models 10×–20× larger.
* **Visual Perception:** Records **92.05%** on MMBench and **80.11%** on MMMU, establishing strong general visual understanding and multimodal reasoning.
* **GUI & OCR:** Delivers state-of-the-art performance on ScreenSpot-V2 (**92.61%**), ScreenSpot-Pro (**51.55%**), and OCRBench (**86.75%**), making it well suited to agentic and document-understanding tasks.
* **Spatial Understanding:** Demonstrates emergent spatial awareness with **66.79%** on BLINK and **57.21%** on All-Angles-Bench, showing strong potential for embodied-intelligence applications.

## Architecture & Training

### Architecture

- **Visual Encoder:** PE-lang (Language-Optimized Perception Encoder), 1.8B parameters.
- **Decoder:** Qwen3-8B.
- **Projector:** Two consecutive stride-2 layers (resulting in 16× spatial downsampling).
- **Resolution:** Multi-crop strategy consisting of a 728×728 global view and multiple 504×504 local crops (see the token-count sketch below).
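
As a rough illustration of how this multi-crop layout turns into visual tokens, here is a small sketch. It assumes a ViT patch size of 14 (an assumption, not a published detail) and attributes the 16× downsampling to the two stride-2 projector layers:

```python
# Hypothetical token-count arithmetic for the multi-crop layout.
# ASSUMPTION: ViT patch size of 14; the 16x token reduction comes from
# two consecutive stride-2 projector layers (2 x 2 = 4x per spatial axis).
PATCH = 14
PROJECTOR_STRIDE = 2 * 2                     # two stride-2 layers
EFFECTIVE_PATCH = PATCH * PROJECTOR_STRIDE   # 56 pixels per output token

def visual_tokens(side: int) -> int:
    """Visual tokens produced for a square crop with the given side length."""
    return (side // EFFECTIVE_PATCH) ** 2

global_tokens = visual_tokens(728)   # 13 x 13 = 169 tokens for the global view
local_tokens = visual_tokens(504)    # 9 x 9 = 81 tokens per local crop

num_local_crops = 4                  # illustrative; the actual crop count depends on the image
total = global_tokens + num_local_crops * local_tokens
print(global_tokens, local_tokens, total)  # 169 81 493
```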

### Training Pipeline

- **Pre-training:** Single-stage, fully unfrozen strategy using the AdamW optimizer (total: 1.2T tokens, 370K iterations).
  - Phase 1: 900B tokens.
  - Phase 2: 300B tokens.
- **Supervised Finetuning (SFT):** Two-stage approach (total: ~226B tokens; see the worked split below).
  - Stage 1: 9:1 text-to-multimodal ratio (~190B tokens).
  - Stage 2: 1:1 text-to-multimodal ratio (~36B tokens).
- **Reinforcement Learning:** Total: >1,400 iterations.
  - **RLVR:** 600 iterations (tasks: mathematics, geometry, physics, perception, grounding).
  - **RLHF:** 300 iterations (task: open-ended generation).
  - **PaCoRe Training:** 500 iterations (context length: 64K max sequence).
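
For reference, the token budgets implied by the SFT mixing ratios work out as follows (simple arithmetic on the approximate figures above):

```python
# Approximate SFT token budgets implied by the stated mixing ratios.
stage1_total, stage1_text_ratio = 190e9, 9 / 10   # Stage 1: ~190B tokens at 9:1 text:multimodal
stage2_total, stage2_text_ratio = 36e9, 1 / 2     # Stage 2: ~36B tokens at 1:1

stage1_text = stage1_total * stage1_text_ratio    # ~171B text tokens
stage1_mm = stage1_total - stage1_text            # ~19B multimodal tokens
stage2_text = stage2_total * stage2_text_ratio    # ~18B text tokens
stage2_mm = stage2_total - stage2_text            # ~18B multimodal tokens

print(f"Stage 1: {stage1_text / 1e9:.0f}B text / {stage1_mm / 1e9:.0f}B multimodal")
print(f"Stage 2: {stage2_text / 1e9:.0f}B text / {stage2_mm / 1e9:.0f}B multimodal")
```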

## Quick Start

### Requirements

To run STEP3-VL-10B efficiently, we recommend setting up a Python environment (>=3.10) with **vLLM**:

```bash
pip install "vllm>=0.6.3"
```

### vLLM Inference Example

Below is a minimal example that loads the model and generates a response using vLLM's offline `chat` API.

```python
from vllm import LLM, SamplingParams

# 1. Load the model
# BF16 inference for a 10B model typically needs around 24GB of VRAM.
llm = LLM(
    model="stepfun-ai/Step3-VL-10B",
    trust_remote_code=True,
    gpu_memory_utilization=0.95,
)

# 2. Prepare the input as an OpenAI-style multimodal message (image given by URL)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/demo.jpg"},
            },
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# 3. Generate
sampling_params = SamplingParams(temperature=0.1, max_tokens=1024)
outputs = llm.chat(messages=messages, sampling_params=sampling_params)

print(f"Output: {outputs[0].outputs[0].text}")
```
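
You can also serve the model behind vLLM's OpenAI-compatible server and query it with any OpenAI client. The port, API key, and sampling settings below are illustrative:

```bash
vllm serve stepfun-ai/Step3-VL-10B --trust-remote-code --port 8000
```

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (the key is a placeholder).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="stepfun-ai/Step3-VL-10B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/demo.jpg"}},
                {"type": "text", "text": "Describe this image in detail."},
            ],
        }
    ],
    temperature=0.1,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```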
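
### Approximating PaCoRe at Inference Time

The PaCoRe results above come from drawing multiple parallel rollouts and letting the model synthesize them into a final answer. The exact aggregation prompt and orchestration are not reproduced in this README; the sketch below only illustrates the general shape (parallel rollouts followed by a synthesis pass), with the rollout count reduced from 16 to 4 to keep it cheap:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="stepfun-ai/Step3-VL-10B", trust_remote_code=True)

question = "What is the sum of the first 100 positive integers?"
messages = [{"role": "user", "content": [{"type": "text", "text": question}]}]

# Stage 1: draw several independent rollouts for the same prompt.
rollout_params = SamplingParams(n=4, temperature=1.0, max_tokens=4096)
rollouts = llm.chat(messages=messages, sampling_params=rollout_params)[0].outputs

# Stage 2: feed all candidate solutions back and ask for a synthesized final answer.
candidates = "\n\n".join(
    f"Candidate {i + 1}:\n{out.text}" for i, out in enumerate(rollouts)
)
synthesis_messages = [{
    "role": "user",
    "content": [{
        "type": "text",
        "text": (
            f"{question}\n\nHere are several candidate solutions:\n\n{candidates}\n\n"
            "Synthesize them into a single, final answer."
        ),
    }],
}]
final_params = SamplingParams(temperature=0.1, max_tokens=2048)
final = llm.chat(messages=synthesis_messages, sampling_params=final_params)
print(final[0].outputs[0].text)
```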

## Citation

If you find this project useful in your research, please cite our technical report:

```tex

```

## License

This project is open-sourced under the [Apache 2.0 License](LICENSE).
|