Update README.md
<img src="logo.jpeg" alt="Introduction Image" width="400" height="400">
</div>

## 🌐 [Homepage](#) | 📖 [Paper](https://github.com/SkyworkAI/Skywork-R1V/blob/main/Skywork_R1V.pdf) | 💻 [GitHub](https://github.com/SkyworkAI/Skywork-R1V)

---

## 1. Introduction
We introduce Skywork-R1V, a multimodal reasoning model that extends the R1-series text models to the visual modality through a near-lossless transfer method. Using a lightweight visual projector, Skywork-R1V enables seamless multimodal adaptation without retraining either the base language model or the vision encoder. To strengthen visual-text alignment, we developed a hybrid optimization strategy that combines Iterative Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO), significantly improving cross-modal integration. We also devised an adaptive-length Chain-of-Thought distillation approach for generating reasoning data, which dynamically optimizes reasoning-chain length to improve inference efficiency and prevent overthinking. The model achieves strong results on key multimodal reasoning benchmarks, scoring 69.0 on MMMU and 67.5 on MathVista, comparable to leading closed-source models such as Gemini 2.0 and Kimi-k1.5, while retaining strong textual reasoning capabilities with 72.0 on AIME and 94.0 on MATH500.
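The near-lossless transfer rests on training only the lightweight visual projector while the models on either side of it stay frozen. As a rough illustration only (the dimensions and the two-layer MLP shape below are assumptions, not taken from the paper), the projector can be pictured as a small MLP that maps vision-encoder token embeddings into the language model's embedding space:

```python
import numpy as np

# Illustrative sketch -- the dimensions and the 2-layer MLP shape are
# assumptions, not Skywork-R1V's actual implementation.
VIT_DIM, HIDDEN_DIM, LLM_DIM = 1024, 2048, 4096

rng = np.random.default_rng(0)
W1 = rng.standard_normal((VIT_DIM, HIDDEN_DIM)) * 0.02
W2 = rng.standard_normal((HIDDEN_DIM, LLM_DIM)) * 0.02

def project(vision_tokens: np.ndarray) -> np.ndarray:
    """Map frozen vision-encoder tokens into the LLM embedding space.

    Only these projector weights would be trained; the encoder that
    produced `vision_tokens` and the LLM that consumes the output stay
    frozen, which is what preserves the text model's reasoning ability.
    """
    hidden = np.maximum(vision_tokens @ W1, 0.0)  # linear + ReLU
    return hidden @ W2                            # back out to LLM width

vision_tokens = rng.standard_normal((256, VIT_DIM))  # e.g. 256 image tokens
print(project(vision_tokens).shape)  # (256, 4096)
```

Because the projected tokens land directly in the LLM's input embedding space, they can be interleaved with ordinary text tokens without touching either backbone.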
## 3. Evaluation
<div align="center">
<img src="eval.jpeg" width="800" height="600" alt="skywork_r1v_eval" />
</div>
<div align="center">

```python
question = '<image>\nSelect the correct option from this question.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nSelect the correct option from this question.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What if the height in the question is changed to 0.5?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, separate images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]

question = '<image>\n<image>\nSelect the correct option from this question.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What if the height in the question is changed to 0.5?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
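In the multi-image case above, the tiles of all images are concatenated into one `pixel_values` tensor, and `num_patches_list` records how many tiles each image contributed. A minimal sketch of how such counts recover the per-image split (the tile counts and tiny tile shapes below are invented, and NumPy stands in for torch so the sketch runs without a GPU or checkpoint):

```python
import numpy as np

# Illustrative only: tile counts and shapes are made up, and NumPy
# stands in for torch to keep the sketch self-contained.
pixel_values1 = np.zeros((7, 3, 8, 8))   # image 1 -> 7 tiles (tiny stand-ins)
pixel_values2 = np.zeros((5, 3, 8, 8))   # image 2 -> 5 tiles

# Same pattern as the snippet above: concatenate along the tile axis
# and remember each image's tile count.
pixel_values = np.concatenate((pixel_values1, pixel_values2), axis=0)
num_patches_list = [pixel_values1.shape[0], pixel_values2.shape[0]]

# With these counts, the batch can be cut back into per-image groups so
# each '<image>' placeholder in the prompt binds to the right tiles.
boundaries = np.cumsum(num_patches_list)[:-1]
per_image = np.split(pixel_values, boundaries, axis=0)
print([x.shape[0] for x in per_image])  # [7, 5]
```

This is why the order of `<image>` placeholders in the prompt must match the concatenation order of the per-image tensors.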
---