---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---
# Skywork-R1V2-38B-AWQ

<div align="center">
<img src="skywork-logo.png" alt="Skywork logo" width="500" height="400">
</div>

## 📖 [Technical Report](https://github.com/SkyworkAI/Skywork-R1V/blob/main/Skywork_R1V2.pdf) | 💻 [GitHub](https://github.com/SkyworkAI/Skywork-R1V)

<div align="center">

[![GitHub Stars](https://img.shields.io/github/stars/SkyworkAI/Skywork-R1V)](https://github.com/SkyworkAI/Skywork-R1V/stargazers) [![GitHub Forks](https://img.shields.io/github/forks/SkyworkAI/Skywork-R1V)](https://github.com/SkyworkAI/Skywork-R1V/fork)

</div>

## Evaluation

<div align="center">
<b>Comprehensive performance comparison across text and multimodal reasoning benchmarks.</b>
</div>
<table align="center" border="1" style="border-collapse: collapse; width: 100%;">
  <thead>
    <tr>
      <th>Model</th>
      <th align="center">MMMU</th>
      <th align="center">MathVista</th>
      <th align="center">MathVision</th>
      <th align="center">OlympiadBench</th>
      <th align="center">AIME 24</th>
      <th align="center">LiveCodeBench</th>
      <th align="center">LiveBench</th>
      <th align="center">IFEval</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="9" align="center"><i>Proprietary Models</i></td>
    </tr>
    <tr>
      <td>Claude-3.5-Sonnet</td>
      <td align="center">70.4</td>
      <td align="center">67.7</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>Gemini-2-Flash</td>
      <td align="center">70.7</td>
      <td align="center">73.1</td>
      <td align="center">41.3</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>Kimi-k1.5-longcot</td>
      <td align="center">70.0</td>
      <td align="center">74.9</td>
      <td align="center">53.3</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>OpenAI-o1</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">74.3</td>
      <td align="center">63.4</td>
      <td align="center">72.2</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>OpenAI-o4-mini</td>
      <td align="center"><b>81.6</b></td>
      <td align="center"><b>84.3</b></td>
      <td align="center"><b>58.0</b></td>
      <td align="center">-</td>
      <td align="center"><b>93.4</b></td>
      <td align="center"><b>74.6</b></td>
      <td align="center"><b>78.1</b></td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td colspan="9" align="center"><i>Open-Source Models</i></td>
    </tr>
    <tr>
      <td>Skywork-R1V1</td>
      <td align="center">68.0</td>
      <td align="center">67.0</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">72.0</td>
      <td align="center">57.2</td>
      <td align="center">54.6</td>
      <td align="center">72.5</td>
    </tr>
    <tr>
      <td>DeepSeek-R1-671B</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center"><b>79.8</b></td>
      <td align="center"><b>65.9</b></td>
      <td align="center">71.6</td>
      <td align="center"><b>83.3</b></td>
    </tr>
    <tr>
      <td>InternVL3-38B</td>
      <td align="center">70.1</td>
      <td align="center">75.1</td>
      <td align="center">34.2</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>Qwen2.5-VL-72B</td>
      <td align="center">70.2</td>
      <td align="center">74.8</td>
      <td align="center">38.1</td>
      <td align="center">40.4</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>QvQ-Preview-72B</td>
      <td align="center">70.3</td>
      <td align="center">71.4</td>
      <td align="center">35.9</td>
      <td align="center">33.2</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
    <tr>
      <td>Skywork-R1V2 (Ours)</td>
      <td align="center"><b>73.6</b></td>
      <td align="center">74.0</td>
      <td align="center"><b>49.0</b></td>
      <td align="center"><b>62.6</b></td>
      <td align="center">78.9</td>
      <td align="center">63.6</td>
      <td align="center"><b>73.2</b></td>
      <td align="center">82.9</td>
    </tr>
    <tr>
      <td>Skywork-R1V2-AWQ</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
      <td align="center">-</td>
    </tr>
  </tbody>
</table>

## Usage

You can run the quantized model with several inference frameworks:

### Using vLLM

#### Python API

```python
from vllm import LLM, SamplingParams

model_name = "Skywork/Skywork-R1V2-38B-AWQ"  # or local path
llm = LLM(model_name,
          dtype='float16',
          quantization="awq",
          gpu_memory_utilization=0.9,
          max_model_len=4096,
          trust_remote_code=True,
          )

# Minimal text-only generation; see the multimodal sketch below.
sampling_params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["What is 15% of 240?"], sampling_params)
print(outputs[0].outputs[0].text)
```
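
Skywork-R1V2 is an image-text-to-text model, so you will usually pass an image alongside the prompt. Below is a minimal multimodal sketch using vLLM's `multi_modal_data` input; the `<image>` placeholder and the bare prompt layout are assumptions here, so check the model's chat template for the exact prompt format.

```python
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM("Skywork/Skywork-R1V2-38B-AWQ",
          dtype='float16',
          quantization="awq",
          max_model_len=4096,
          trust_remote_code=True)

# Assumption: "<image>" is the placeholder token the chat template expects;
# verify against the tokenizer/chat-template config before relying on it.
prompt = "<image>\nDescribe this image."
image = Image.open("table.jpg")  # any local image

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.6, max_tokens=512),
)
print(outputs[0].outputs[0].text)
```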

#### OpenAI-compatible API Server

```bash
MODEL_ID="Skywork/Skywork-R1V2-38B-AWQ" # or local path
CUDA_VISIBLE_DEVICES=0 \
python -m vllm.entrypoints.openai.api_server \
  --model $MODEL_ID \
  --dtype float16 \
  --quantization awq \
  --port 23334 \
  --max-model-len 12000 \
  --gpu-memory-utilization 0.9 \
  --trust-remote-code
```
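
Once the server is running, any OpenAI-compatible client can query it. A minimal sketch with the `openai` Python package, assuming the server is local on port 23334 as configured above; the `api_key` value is a placeholder, since vLLM ignores it unless the server was started with `--api-key`:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:23334/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V2-38B-AWQ",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Assumption: an image reachable by URL; base64 data URLs also work.
            {"type": "image_url", "image_url": {"url": "https://example.com/table.jpg"}},
        ],
    }],
    max_tokens=512,
)
print(response.choices[0].message.content)
```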

### Using LMDeploy

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model_path = "Skywork/Skywork-R1V2-38B-AWQ"  # or local path
engine_config = TurbomindEngineConfig(cache_max_entry_count=0.75)
chat_template_config = ChatTemplateConfig(model_name=model_path)
pipe = pipeline(model_path,
                backend_config=engine_config,
                chat_template_config=chat_template_config,
                )

# Example: multimodal inference on a single image
image = load_image('table.jpg')
response = pipe(('Describe this image.', image))
print(response.text)
```
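
To control decoding, you can pass a `GenerationConfig`; the pipeline also accepts a batch of (prompt, image) pairs. A brief sketch, assuming a recent lmdeploy release (`chart.jpg` is a hypothetical second image):

```python
from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline("Skywork/Skywork-R1V2-38B-AWQ",
                backend_config=TurbomindEngineConfig(cache_max_entry_count=0.75))

# Longer, moderately sampled generations suit reasoning-style outputs.
gen_config = GenerationConfig(max_new_tokens=2048, temperature=0.6, top_p=0.95)

# Batched multimodal inference over several images at once.
images = [load_image('table.jpg'), load_image('chart.jpg')]
responses = pipe([('Describe this image.', img) for img in images],
                 gen_config=gen_config)
for r in responses:
    print(r.text)
```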

## Hardware Requirements

AWQ quantization substantially reduces the memory footprint compared to the original FP16 model. We recommend:

- At least one GPU with 30GB+ VRAM for inference
- 40GB+ VRAM for optimal performance with longer contexts

A rough estimate of the weight memory behind these numbers is sketched below.
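
This is a back-of-the-envelope calculation only, assuming 38B parameters and ignoring quantization scale/zero-point overhead, activations, and the KV cache, which is why the practical recommendation is 30GB+ rather than the bare ~18GB of weights:

```python
# Approximate weight memory for a 38B-parameter model.
params = 38e9
fp16_gib = params * 2 / 2**30    # 2 bytes/param -> ~70.8 GiB
awq4_gib = params * 0.5 / 2**30  # 4 bits/param  -> ~17.7 GiB
print(f"FP16 weights: ~{fp16_gib:.0f} GiB, AWQ 4-bit weights: ~{awq4_gib:.0f} GiB")
```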

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{peng2025skyworkr1vpioneeringmultimodal,
      title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought},
      author={Yi Peng and Chris and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.05599},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.05599},
}

@misc{chris2025skyworkr1v2multimodalhybrid,
      title={Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning},
      author={Chris and Yichen Wei and Yi Peng and Xiaokun Wang and Weijie Qiu and Wei Shen and Tianyidan Xie and Jiangbo Pei and Jianhao Zhang and Yunzhuo Hao and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.16656},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.16656},
}
```