Update README.md
We introduce Skywork-R1V, a multimodal reasoning model that extends the R1-series text models to visual modalities through a near-lossless transfer method. Using a lightweight visual projector, Skywork-R1V enables seamless multimodal adaptation without retraining either the base language model or the vision encoder. To enhance visual-text alignment, we developed a hybrid optimization strategy that combines Iterative Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO), significantly improving cross-modal integration. Additionally, we created an adaptive-length Chain-of-Thought distillation approach for generating reasoning data, which dynamically adjusts reasoning-chain length to improve inference efficiency and prevent overthinking. The model achieves competitive performance on key multimodal reasoning benchmarks, scoring 69.0 on MMMU and 67.5 on MathVista, comparable to leading closed-source models such as Gemini 2.0 and Kimi-k1.5. It also retains strong textual reasoning capabilities, scoring 72.0 on AIME 2024 and 94.0 on MATH-500.
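The adaptive-length Chain-of-Thought distillation described above can be illustrated with a small sketch. Everything here is an assumption for illustration: the function names, the token-budget bounds, and the idea of a scalar difficulty score in [0, 1] are not from the Skywork-R1V implementation; the sketch only shows the principle of scaling the reasoning budget with difficulty so easy problems get short chains and hard ones get long chains.

```python
# Illustrative sketch (NOT the authors' code): map an estimated problem
# difficulty to a reasoning-token budget, then trim a teacher chain to it.

def reasoning_budget(difficulty, min_tokens=128, max_tokens=4096):
    """Map a difficulty score in [0, 1] to a token budget for the CoT."""
    difficulty = max(0.0, min(1.0, difficulty))  # clamp to [0, 1]
    return int(min_tokens + difficulty * (max_tokens - min_tokens))

def distill_chain(chain_tokens, difficulty):
    """Trim a teacher reasoning chain to the adaptive budget before training."""
    return chain_tokens[: reasoning_budget(difficulty)]

# Easy problems get short chains (less overthinking), hard ones long chains.
assert reasoning_budget(0.0) == 128
assert reasoning_budget(1.0) == 4096
```

The key design choice this captures is that chain length is a training-data property: the distilled data itself teaches the model when a short answer suffices.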
## 2. Architecture
- Vision Encoder: Uses Vision Transformer (ViT) as the visual backbone to process image inputs.
- Visual Projector: A lightweight MLP (multilayer perceptron) adapter that serves as the bridge between the vision and language components.
- Language Model: Utilizes R1-distilled-Qwen-32B as the reasoning-capable language model backbone.
The model follows a connection pattern of Vision Encoder → MLP Adapter → Language Model, where the MLP adapter aligns the output space of the vision encoder with the input space of the language model. This design allows for efficient transfer of reasoning capabilities from text to multimodal domains without requiring extensive retraining of either the vision encoder or language model.
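The connection pattern above can be sketched in a few lines. This is a mock, not the Skywork-R1V implementation: the class names, patch count, and hidden sizes are placeholder assumptions, and the "projection" only pads dimensions; the point is how the lightweight adapter aligns the frozen vision encoder's output space with the frozen language model's input space.

```python
# Illustrative mock of Vision Encoder -> MLP Adapter -> Language Model.
# All names and dimensions are assumptions, not the actual model's.

class VisionEncoder:
    """Stand-in for the ViT backbone: one feature vector per image patch."""
    def __init__(self, num_patches=256, feature_dim=1024):
        self.num_patches, self.feature_dim = num_patches, feature_dim

    def encode(self, image):
        # A real ViT computes patch embeddings; here we fake the shapes.
        return [[0.0] * self.feature_dim for _ in range(self.num_patches)]

class MLPAdapter:
    """Lightweight projector aligning vision features to the LM hidden size."""
    def __init__(self, in_dim=1024, out_dim=5120):
        self.in_dim, self.out_dim = in_dim, out_dim

    def project(self, features):
        # A real adapter applies a small trained MLP; here we only resize.
        return [f[: self.out_dim] + [0.0] * max(0, self.out_dim - len(f))
                for f in features]

class LanguageModel:
    """Stand-in for the R1-distilled-Qwen-32B backbone."""
    def __init__(self, hidden_dim=5120):
        self.hidden_dim = hidden_dim

    def accepts(self, embeddings):
        return all(len(e) == self.hidden_dim for e in embeddings)

# Wire the three stages; only the adapter would be trained during transfer.
vit, adapter, llm = VisionEncoder(), MLPAdapter(), LanguageModel()
visual_tokens = adapter.project(vit.encode(image=None))
assert llm.accepts(visual_tokens)  # shapes now match the LM input space
```

Because only the small adapter in the middle is trained, the reasoning behavior of the language backbone transfers to multimodal inputs nearly unchanged.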
**Key Designs**
- **Advanced Multimodal Reasoning**
Excels in complex reasoning across textual and visual modalities.
- **Iterative Training Strategies**
Employs iterative supervised fine-tuning and GRPO to refine model alignment and performance.
- **Adaptive-Length Chain-of-Thought**
Dynamically adjusts reasoning length to enhance inference efficiency and accuracy.
- **Scalable Performance**
Benchmarked to rival proprietary models across mathematics, coding, and multimodal tasks.
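The GRPO step named in the key designs can be made concrete with a minimal sketch of its core idea, the group-relative advantage: rewards for a group of sampled responses to the same prompt are normalized against the group's own mean and standard deviation, so no learned value network is needed. The reward values and group size below are made up for illustration; this is not the training code.

```python
# Sketch of GRPO's group-relative advantage (illustrative, stdlib only).
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Advantage of each sampled response = (r - group mean) / group std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled answers to one prompt, scored 1.0 (correct) or 0.0 (wrong)
# by a verifier / reward model -- hypothetical values.
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = group_relative_advantages(rewards)

# Correct answers get positive advantage, incorrect ones negative; the
# policy gradient then upweights the correct samples' tokens.
assert advantages[0] > 0 > advantages[1]
```

Normalizing within the group keeps the update scale stable across prompts of very different difficulty, which matters when mixing text-only and multimodal reasoning data.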
## 2. Features
- **Visual Chain-of-Thought**: Enables multi-step logical reasoning on visual inputs, breaking down complex image-based problems into manageable steps.
- **Mathematical & Scientific Analysis**: Capable of solving visual math problems and interpreting scientific/medical imagery with high precision.
- **Cross-Modal Understanding**: Seamlessly integrates text and images for richer, context-aware comprehension.
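The cross-modal integration listed above typically works by splicing projected visual tokens into the text token stream. The sketch below shows that splicing under stated assumptions: the `<image>` placeholder convention, the token values, and the function name are all illustrative, not the model's actual preprocessing.

```python
# Illustrative sketch: expand an <image> placeholder in the text stream
# into the projected visual tokens before the sequence reaches the LM.

def build_multimodal_sequence(text_tokens, visual_tokens, placeholder="<image>"):
    """Splice visual tokens into the text stream at the placeholder."""
    seq = []
    for tok in text_tokens:
        if tok == placeholder:
            seq.extend(visual_tokens)  # an image occupies many token slots
        else:
            seq.append(tok)
    return seq

text = ["Question:", "<image>", "What", "is", "shown", "?"]
vision = [f"v{i}" for i in range(4)]  # stand-in for projected patch features
seq = build_multimodal_sequence(text, vision)
assert seq[1:5] == ["v0", "v1", "v2", "v3"]
```

Because the language model sees one flat sequence, its chain-of-thought machinery can attend to image-derived tokens exactly as it does to text.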
## 3. Evaluation
<br>
<br>
<div align="center">
<b>Comparison with Larger-Scale Open-Source and Closed-Source Models</b>
</div>

<table align="center">
<thead>
<tr>
<th></th>
<th align="center"><strong>Benchmark</strong></th>
<th align="center"><strong>LLM</strong></th>
<th align="center" colspan="4"><strong>VLM</strong></th>
</tr>
<tr>
<th></th>
<th></th>
<th align="center"><strong>QwQ-32B-Preview</strong></th>
<th align="center"><strong>InternVL-2.5-38B</strong></th>
<th align="center"><strong>VILA 1.5-40B</strong></th>
<th align="center"><strong>InternVL2-40B</strong></th>
<th align="center"><strong>Skywork-R1V-38B</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">Reasoning</td>
<td>MATH-500</td>
<td align="center">90.6</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center"><strong>94.0</strong></td>
</tr>
<tr>
<td>AIME 2024</td>
<td align="center">50.0</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center"><strong>72.0</strong></td>
</tr>
<tr>
<td>GPQA</td>
<td align="center">54.5</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center"><strong>61.6</strong></td>
</tr>
<tr>
<td rowspan="2">Vision</td>
<td>MathVista(mini)</td>
<td align="center">-</td>
<td align="center">71.9</td>
<td align="center">49.5</td>
<td align="center">63.7</td>
<td align="center">67.5</td>
</tr>
<tr>
<td>MMMU(Val)</td>
<td align="center">-</td>
<td align="center">63.9</td>
<td align="center">55.1</td>
<td align="center">55.2</td>
<td align="center"><strong>69.0</strong></td>
</tr>
</tbody>
</table>

<br>
<br>
<div align="center">
<b>Evaluation results of state-of-the-art LLMs and VLMs</b>
</div>
<table>
<thead>
</table>
<div align="center">
<img src="eval.jpeg" width="80%" alt="skywork_r1v_eval" />
</div>
## 4. Skywork-R1V Family