jiangbop committed
Commit 99aecb4 · verified · 1 Parent(s): a6987b4

Update README.md

Files changed (1): README.md +113 -3
---
pipeline_tag: image-text-to-text
library_name: transformers
license: mit
---
# Skywork-R1V3

<div align="center">
<img src="skywork-logo.png" alt="Skywork Logo" width="500" height="400">
</div>

## 📖 [R1V3 Report](https://github.com/SkyworkAI/Skywork-R1V/Skywork_R1V3) | 💻 [GitHub](https://github.com/SkyworkAI/Skywork-R1V)
<p align="center">
  <a href="https://github.com/SkyworkAI/Skywork-R1V/stargazers">
    <img src="https://img.shields.io/github/stars/SkyworkAI/Skywork-R1V" alt="GitHub Stars" />
  </a>
  <a href="https://github.com/SkyworkAI/Skywork-R1V/fork">
    <img src="https://img.shields.io/github/forks/SkyworkAI/Skywork-R1V" alt="GitHub Forks" />
  </a>
</p>

## 1. Model Introduction

**Skywork-R1V3-38B** is the **latest and most powerful open-source multimodal reasoning model** in the Skywork series, pushing the boundaries of cross-modal and cross-disciplinary intelligence.
Through an elaborate RL algorithm in the post-training stage, R1V3 significantly enhances multimodal reasoning ability and achieves **open-source state-of-the-art (SOTA)** performance across multiple benchmarks.

### 🌟 Key Results
- **MMMU:** 76.0% – *open-source SOTA, approaching human experts (76.2%)*
- **EMMA-Mini (CoT):** 40.3 – *best in open source*
- **MMK12:** 78.5 – *best in open source*
- **Physics reasoning:** PhyX-MC-TM (52.8), SeePhys (31.5) – *best in open source*
- **Logic reasoning:** MME-Reasoning (42.8) – *beats Claude-4-Sonnet*; VisuLogic (28.5) – *best in open source*
- **Math benchmarks:** MathVista (77.1), MathVerse (59.6), MathVision (52.6) – *exceptional problem-solving*

## 2. Evaluation


---


## 3. Usage

### 1. Clone the Repository

```shell
git clone https://github.com/SkyworkAI/Skywork-R1V.git
cd Skywork-R1V/inference
```
### 2. Set Up the Environment

```shell
# For Transformers
conda create -n r1-v python=3.10 && conda activate r1-v
bash setup.sh

# For vLLM
conda create -n r1v-vllm python=3.10 && conda activate r1v-vllm
pip install -U vllm
```
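
Before running inference, it can help to verify that the new environment has a CUDA-enabled PyTorch build and can see the intended GPUs. The snippet below is a generic sanity check, not a script shipped with this repository:

```python
# Generic sanity check (not part of the Skywork-R1V repo): confirm that the
# activated environment has a CUDA-enabled PyTorch build and sees your GPUs.
import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Visible GPUs:   {torch.cuda.device_count()}")
for i in range(torch.cuda.device_count()):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
```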

### 3. Run the Inference Script

Inference with Transformers:

```shell
CUDA_VISIBLE_DEVICES="0,1" python inference_with_transformers.py \
    --model_path path \
    --image_paths image1_path \
    --question "your question"
```
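
If you would rather call the model from Python than through the bundled script, the sketch below uses the generic `transformers` pipeline API. It is an illustration only: the hub id `Skywork/Skywork-R1V3-38B` is an assumption based on this card's name, and pipeline support follows from the `pipeline_tag: image-text-to-text` metadata above; `inference_with_transformers.py` remains the reference entry point.

```python
# A minimal sketch of calling the model from Python via the generic
# transformers pipeline API. Assumptions (not documented in this repo):
# the hub id "Skywork/Skywork-R1V3-38B" and that the checkpoint works with
# the "image-text-to-text" pipeline declared in the card metadata.
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="Skywork/Skywork-R1V3-38B",  # hypothetical id; or your local --model_path
    torch_dtype=torch.bfloat16,        # the 38B weights will not fit in fp32
    device_map="auto",                 # shard across all visible GPUs
    trust_remote_code=True,            # the checkpoint may ship custom code
)

# Chat-style multimodal input: one image plus a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "image1_path"},
            {"type": "text", "text": "your question"},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=512)
print(out[0]["generated_text"])
```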

Inference with vLLM:

```shell
python inference_with_vllm.py \
    --model_path path \
    --image_paths image1_path image2_path \
    --question "your question" \
    --tensor_parallel_size 4
```
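
vLLM can likewise be driven directly from Python. The sketch below uses vLLM's `LLM.chat` interface with OpenAI-style multimodal messages; the model id is again a placeholder assumption, multimodal input handling varies across vLLM versions, and `inference_with_vllm.py` remains the reference entry point.

```python
# A minimal sketch of driving vLLM from Python instead of the wrapper
# script. The model id is a hypothetical placeholder; treat this as a
# starting point, not the repository's documented API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Skywork/Skywork-R1V3-38B",  # hypothetical id or local path
    tensor_parallel_size=4,            # mirrors --tensor_parallel_size above
    trust_remote_code=True,
    limit_mm_per_prompt={"image": 2},  # allow two images per prompt
    allowed_local_media_path="/",      # assumption: needed for file:// URLs in recent vLLM
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file://image1_path"}},
            {"type": "image_url", "image_url": {"url": "file://image2_path"}},
            {"type": "text", "text": "your question"},
        ],
    }
]

outputs = llm.chat(messages, SamplingParams(temperature=0.0, max_tokens=512))
print(outputs[0].outputs[0].text)
```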

---

## 4. Citation

If you use Skywork-R1V in your research, please cite:

```bibtex
@misc{chris2025skyworkr1v2multimodalhybrid,
      title={Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning},
      author={Peiyu Wang and Yichen Wei and Yi Peng and Xiaokun Wang and Weijie Qiu and Wei Shen and Tianyidan Xie and Jiangbo Pei and Jianhao Zhang and Yunzhuo Hao and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.16656},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.16656},
}
```

```bibtex
@misc{peng2025skyworkr1vpioneeringmultimodal,
      title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought},
      author={Yi Peng and Peiyu Wang and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.05599},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.05599},
}
```

*This project is released under the MIT license.*