
Skywork-R1V


1. Introduction

We introduce Skywork-R1V, a multimodal reasoning model that extends the R1-series text models to visual modalities through a near-lossless transfer method. Using a lightweight visual projector, Skywork-R1V enables seamless multimodal adaptation without retraining either the base language model or the vision encoder. To strengthen visual-text alignment, we developed a hybrid optimization strategy combining Iterative Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO), significantly improving cross-modal integration. We also created an adaptive-length Chain-of-Thought distillation approach for generating reasoning data, which dynamically adjusts reasoning-chain length to improve inference efficiency and prevent overthinking. The model achieves strong performance on key multimodal reasoning benchmarks, scoring 69.0 on MMMU and 67.5 on MathVista, comparable to leading closed-source models such as Gemini 2.0 and Kimi k1.5. It also retains strong textual reasoning, scoring 72.0 on AIME 2024 and 94.0 on MATH-500.
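The adaptive-length distillation idea above can be illustrated with a minimal sketch: among several candidate reasoning chains sampled for one problem, keep the shortest chain that still reaches the correct answer, so distillation targets stay efficient and avoid overthinking. The function and data shapes here are illustrative assumptions, not the released pipeline.

```python
# Hypothetical sketch of adaptive-length Chain-of-Thought selection.
# candidates: list of (chain_tokens, is_correct) pairs for one problem.
def select_adaptive_chain(candidates):
    correct = [(tokens, ok) for tokens, ok in candidates if ok]
    if not correct:
        return None  # no usable distillation target for this problem
    # Prefer the shortest correct chain: efficient, avoids overthinking.
    return min(correct, key=lambda c: len(c[0]))[0]

candidates = [
    (["step1", "step2", "answer"], True),
    (["step1", "answer"], True),
    (["step1", "step2", "step3", "wrong"], False),
]
print(select_adaptive_chain(candidates))  # shortest correct chain
```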

2. Model Summary

Architecture: Skywork-R1V employs a modular architecture that efficiently combines vision and language capabilities:

  • Vision Encoder: Uses Vision Transformer (ViT) as the visual backbone to process image inputs.
  • Visual Projector: A lightweight MLP (multilayer perceptron) adapter that serves as the bridge between the vision and language components.
  • Language Model: Utilizes R1-distilled-Qwen-32B as the reasoning-capable language model backbone.

The model follows a connection pattern of Vision Encoder → MLP Adapter → Language Model, where the MLP adapter aligns the output space of the vision encoder with the input space of the language model. This design allows for efficient transfer of reasoning capabilities from text to multimodal domains without requiring extensive retraining of either the vision encoder or language model.
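The Vision Encoder → MLP Adapter → Language Model connection can be sketched in a few lines. The dimensions and layer layout below are illustrative assumptions, not the released weights or config; the point is that a small MLP maps ViT patch features into the language model's input embedding space.

```python
# Minimal sketch (illustrative sizes, not the actual configs) of the
# Vision Encoder -> MLP Adapter -> Language Model connection.
import torch
import torch.nn as nn

VIT_DIM, LLM_DIM = 3200, 5120  # assumed vision / LLM hidden sizes

projector = nn.Sequential(          # lightweight MLP adapter
    nn.LayerNorm(VIT_DIM),
    nn.Linear(VIT_DIM, LLM_DIM),
    nn.GELU(),
    nn.Linear(LLM_DIM, LLM_DIM),
)

vision_tokens = torch.randn(1, 256, VIT_DIM)  # ViT patch features
llm_inputs = projector(vision_tokens)         # aligned to LLM input space
print(llm_inputs.shape)  # torch.Size([1, 256, 5120])
```

Because only this adapter needs training, the vision encoder and language model can both stay frozen.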

Key Designs

  • Advanced Multimodal Reasoning
    Excels at complex reasoning across textual and visual modalities.
  • Iterative Training Strategies
    Employs iterative SFT and GRPO to refine model alignment and performance.
  • Adaptive-Length Chain-of-Thought
    Dynamically adjusts reasoning length to improve inference efficiency and accuracy.
  • Scalable Performance
    Benchmarked to rival proprietary models across mathematics, coding, and multimodal tasks.
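The core of GRPO is a group-relative advantage: sample a group of responses per prompt, then normalize each response's reward by the group's mean and standard deviation instead of training a separate value critic. A minimal sketch of that computation:

```python
# Group-relative advantage as used in GRPO-style training:
# rewards for one group of sampled responses are standardized
# so above-average responses get positive advantage.
def group_relative_advantages(rewards):
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

advantages = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print(advantages)  # correct responses get positive advantage
```

These advantages then weight the policy-gradient update for each sampled response.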

3. Evaluation

Evaluation results of LLMs and VLMs. All scores are pass@1; MATH-500, AIME 2024, and GPQA are reasoning benchmarks, while MathVista (mini) and MMMU (Val) are vision benchmarks.

| Model | MATH-500 | AIME 2024 | GPQA | MathVista (mini) | MMMU (Val) |
|---|---|---|---|---|---|
| Qwen2.5-72B-Instruct | 80.0 | 23.3 | 49.0 | - | - |
| DeepSeek V3 | 90.2 | 39.2 | 59.1 | - | - |
| DeepSeek R1 | 97.3 | 79.8 | 71.5 | - | - |
| Claude 3.5 Sonnet | 78.3 | 16.0 | 65.0 | 65.3 | 66.4 |
| GPT-4o | 74.6 | 9.3 | 49.9 | 63.8 | 69.1 |
| Kimi k1.5 | 96.2 | 77.5 | - | 74.9 | 70.0 |
| Qwen2.5-VL-72B-Instruct | - | - | - | 74.8 | 70.2 |
| LLaVA-OneVision-72B | - | - | - | 67.5 | 56.8 |
| InternVL2-Llama3-76B | - | - | - | 65.5 | 62.7 |
| InternVL2.5-78B | - | - | - | 72.3 | 70.1 |
| Skywork-R1V-38B | 94.0 | 72.0 | 61.6 | 67.5 | 69.0 |
Comparison with larger-scale open-source and closed-source models. QwQ-32B-Preview is an LLM; the remaining baselines are VLMs.

| Category | Benchmark | QwQ-32B-Preview | InternVL-2.5-38B | VILA 1.5-40B | InternVL2-40B | Skywork-R1V-38B |
|---|---|---|---|---|---|---|
| Reasoning | MATH-500 | 90.6 | - | - | - | 94.0 |
| Reasoning | AIME 2024 | 50.0 | - | - | - | 72.0 |
| Reasoning | GPQA | 54.5 | - | - | - | 61.6 |
| Vision | MathVista (mini) | - | 71.9 | 49.5 | 63.7 | 67.5 |
| Vision | MMMU (Val) | - | 63.9 | 55.1 | 55.2 | 69.0 |

4. Skywork-R1V Family

| Model Name | Vision Encoder | Language Model | HF Link |
|---|---|---|---|
| Skywork-R1V-38B | InternViT-6B-448px-V2_5 | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 🤗 Link |
| Skywork-R1V-38B-qwq | InternViT-6B-448px-V2_5 | Qwen/QwQ-32B | - |

5. Usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

# split_model and load_image are helper functions shipped with this repo.
from utils_ import split_model, load_image

path = 'Skywork/Skywork-R1V-38B'
image_path = "/path/to/image"

device_map, visible_devices = split_model(path)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=False,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map).eval()

generation_config = dict(max_new_tokens=64000, do_sample=True, temperature=0.6, top_p=0.95, repetition_penalty=1.05)
pixel_values = load_image(image_path, max_num=12).to(torch.bfloat16).cuda()

# pure-text conversation
question = 'If all cats can fly, and Tom is a cat, can Tom fly?'
response = model.chat(tokenizer, None, question, generation_config, history=None)
print(f'User: {question}\nAssistant: {response}')

# single-image single-round conversation
question = '<image>\nSelect the correct option from this question.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nSelect the correct option from this question.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What if the height in the question is changed to 0.5?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, separate images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]

question = '<image>\n<image>\nSelect the correct option from this question.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What if the height in the question is changed to 0.5?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
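The `split_model` helper used above comes from the repo's utility script. A hedged sketch of the kind of `device_map` such a helper might build: pin the vision tower, adapter, and embeddings to GPU 0 and spread the LLM's transformer layers evenly across the available GPUs. The module names here are illustrative, not the model's exact parameter paths.

```python
# Hypothetical sketch of a multi-GPU device_map for a ViT + LLM model.
def build_device_map(num_layers, num_gpus):
    device_map = {
        "vision_model": 0,  # vision encoder stays on GPU 0
        "mlp1": 0,          # projector alongside it
        "language_model.model.embed_tokens": 0,
    }
    per_gpu = -(-num_layers // num_gpus)  # ceil division
    for layer in range(num_layers):
        device_map[f"language_model.model.layers.{layer}"] = layer // per_gpu
    # final norm and lm_head on the last GPU, next to the last layers
    device_map["language_model.model.norm"] = num_gpus - 1
    device_map["language_model.lm_head"] = num_gpus - 1
    return device_map

dm = build_device_map(num_layers=64, num_gpus=4)
print(dm["language_model.model.layers.0"], dm["language_model.model.layers.63"])  # 0 3
```

A dict like this can be passed directly as `device_map` to `from_pretrained`, which places each named submodule on the given GPU index.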

6. Citation

If you use Skywork-R1V in your research, please cite:

```bibtex
@article{skywork2025r1v,
  title   = {Skywork R1V: Bridging Vision and Language for Advanced Multimodal Reasoning},
  author  = {Yi Peng and Chris and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
  year    = {2025},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  url     = {https://github.com/skywork-ai/Skywork-R1V}
}
```

This project is released under an open-source license.