---
pipeline_tag: visual-question-answering
library_name: transformers
license: mit
language:
- en
base_model:
- OpenGVLab/InternVL3-38B
tags:
- Skywork R1V
---

<div align="center">
  <img src="skywork-logo.png" alt="Skywork Logo" width="400">
  <h1><strong>Skywork-R1V3</strong></h1>
</div>

<p align="center">
  <a href="https://github.com/SkyworkAI/Skywork-R1V/blob/main/report/Skywork_R1V3.pdf"><strong>📖 R1V3 Report</strong></a> |
  <a href="https://github.com/SkyworkAI/Skywork-R1V"><strong>💻 GitHub</strong></a> |
  <a href="https://huggingface.co/Skywork/Skywork-R1V3"><strong>🤗 Hugging Face</strong></a>
</p>

<p align="center">
  <a href="https://github.com/SkyworkAI/Skywork-R1V/stargazers">
    <img src="https://img.shields.io/github/stars/SkyworkAI/Skywork-R1V?style=social" alt="GitHub Stars">
  </a>
  <a href="https://github.com/SkyworkAI/Skywork-R1V/fork">
    <img src="https://img.shields.io/github/forks/SkyworkAI/Skywork-R1V?style=social" alt="GitHub Forks">
  </a>
  <a href="https://github.com/SkyworkAI/Skywork-R1V/blob/main/LICENSE">
    <img src="https://img.shields.io/github/license/SkyworkAI/Skywork-R1V" alt="License">
  </a>
</p>

## 1. Model Introduction

**Skywork-R1V3-38B** is the **latest and most powerful open-source multimodal reasoning model** in the Skywork series, pushing the boundaries of multimodal and cross-disciplinary intelligence. With an elaborate RL algorithm in the post-training stage, R1V3 significantly enhances multimodal reasoning ability and achieves **open-source state-of-the-art (SOTA)** performance across multiple multimodal reasoning benchmarks.

## 3. Usage

For detailed inference code and evaluation scripts, please refer to our [GitHub](https://github.com/SkyworkAI/Skywork-R1V).

### Run the Inference Script

#### Transformers inference

```python
import argparse

import torch
from transformers import AutoModel, AutoTokenizer

# Helper functions shipped alongside this script in the GitHub repo's inference/ directory
from utils import load_image, split_model


def main():
    parser = argparse.ArgumentParser(description="Run inference with the Skywork-R1V model.")
    parser.add_argument('--model_path', type=str, default='Skywork/Skywork-R1V3-38B', help="Path to the model.")
    parser.add_argument('--image_paths', type=str, nargs='+', required=True, help="Path(s) to the image(s).")
    parser.add_argument('--question', type=str, required=True, help="Question to ask the model.")
    args = parser.parse_args()

    # Shard the model across the available GPUs
    device_map = split_model(args.model_path)
    model = AutoModel.from_pretrained(
        args.model_path,
        torch_dtype=torch.bfloat16,
        load_in_8bit=False,
        low_cpu_mem_usage=True,
        use_flash_attn=True,
        trust_remote_code=True,
        device_map=device_map
    ).eval()
    tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True, use_fast=False)

    # Preprocess each image into patch tensors
    pixel_values = [load_image(img_path, max_num=12).to(torch.bfloat16).cuda() for img_path in args.image_paths]
    if len(pixel_values) > 1:
        num_patches_list = [img.size(0) for img in pixel_values]
        pixel_values = torch.cat(pixel_values, dim=0)
    else:
        pixel_values = pixel_values[0]
        num_patches_list = None

    # One <image> placeholder per input image, followed by the question
    prompt = "<image>\n" * len(args.image_paths) + args.question
    generation_config = dict(max_new_tokens=64000, do_sample=True, temperature=0.6, top_p=0.95, repetition_penalty=1.05)
    response = model.chat(tokenizer, pixel_values, prompt, generation_config, num_patches_list=num_patches_list)

    print(f'User: {args.question}\nAssistant: {response}')


if __name__ == '__main__':
    main()
```
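
To run this from the shell: the repository ships this script as `inference_with_transformers.py` under `inference/`, and a typical invocation (with `image1_path` and the question as placeholders) looks like:

```shell
CUDA_VISIBLE_DEVICES="0,1" python inference_with_transformers.py \
    --model_path Skywork/Skywork-R1V3-38B \
    --image_paths image1_path \
    --question "your question"
```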

#### vLLM inference

```shell
python -m vllm.entrypoints.openai.api_server \
    --model $MODEL_PATH \
    --max-model-len 32768 \
    --limit-mm-per-prompt "image=20" \
    --tensor-parallel-size $N_GPU \
    --dtype auto \
    --trust-remote-code
```
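
This launches an OpenAI-compatible server (on port 8000 by default). Below is a minimal client sketch, assuming the server was started with `--model Skywork/Skywork-R1V3-38B`, the `openai` Python package is installed, and `image1.jpg` is a placeholder local image sent as a base64 data URL:

```python
import base64

from openai import OpenAI

# vLLM's OpenAI-compatible server listens on http://localhost:8000 by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Encode a local image as a base64 data URL
with open("image1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V3-38B",  # must match the --model value the server was started with
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            {"type": "text", "text": "your question"},
        ],
    }],
    temperature=0.6,  # same sampling settings as the transformers example above
    top_p=0.95,
)
print(response.choices[0].message.content)
```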

---

## 4. Citation

If you use Skywork-R1V in your research, please cite:

```bibtex
@techreport{wshen2025skyworkr1v3,
  author      = {Wei Shen and Jiangbo Pei and Yi Peng and Xuchen Song and Yang Liu and Jian Peng and Haofeng Sun and Yunzhuo Hao and Peiyu Wang and Yahui Zhou},
  title       = {Skywork-R1V3 Technical Report},
  institution = {SkyworkAI},
  year        = {2025},
  url         = {https://github.com/SkyworkAI/Skywork-R1V/blob/main/report/Skywork_R1V3.pdf},
  urldate     = {2025-07-08}
}
```

*This project is released under the MIT license.*