---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
tags:
- mm math reasoning
datasets:
- open-r1/OpenR1-Math-220k
metrics:
- accuracy
---

# TBAC-VLR1-3B-SFT

## Overview
TBAC-VLR1-3B-SFT is a multimodal language model fine-tuned by the **Tencent PCG Basic Algorithm Center**. Built on Qwen2.5-VL-3B-Instruct, it is trained with supervised fine-tuning (SFT) on 40k samples filtered from OpenR1-Math-220k. Its successor, TBAC-VLR1-3B, is then trained with GRPO (Group Relative Policy Optimization), adapting the Clip-Higher technique from DAPO, and achieves **state-of-the-art** results on several multimodal reasoning benchmarks among models of the same size.
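
For reference, the RL stage works roughly as follows: GRPO samples a group of responses per prompt and uses the group-normalized reward as the advantage (no value network), while Clip-Higher widens only the upper clipping bound of the PPO-style ratio. The sketch below is illustrative, not the actual training code of this model; the `eps_low`/`eps_high` defaults are the values reported in the DAPO paper.

```python
import torch

def grpo_clip_higher_loss(logp_new, logp_old, rewards, eps_low=0.2, eps_high=0.28):
    """Illustrative GRPO surrogate loss with DAPO's Clip-Higher trick.

    logp_new / logp_old: log-probabilities of each sampled response under the
    current and rollout policies; rewards: one scalar reward per response in
    the group sampled for a single prompt.
    """
    # GRPO: the advantage is the group-normalized reward (no value network).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logp_new - logp_old)
    # Clip-Higher: eps_high > eps_low relaxes only the upper bound, so
    # low-probability (exploratory) tokens keep receiving positive gradients.
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    return -torch.min(ratio * adv, clipped * adv).mean()
```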

## Performance
| Model | **Average** | **MathVista** | **MathVision** | **MathVerse** | **DynaMath** | **LogicVista** |
| :-----------------------: | :---------: | :-----------: | :------------: | :-----------: | :----------: | :------------: |
| Qwen2-VL-2B | 20.5 | 48.0 | 16.1 | 17.5 | 3.8 | 26.6 |
| InternVL2.5-2B | 21.2 | 51.1 | 14.0 | 22.3 | 4.4 | 27.3 |
| InternVL3-2B | 29.1 | 57.6 | 20.2 | 24.5 | 14.8 | 40.3 |
| Qwen2.5-VL-3B | 31.8 | 61.2 | 21.9 | 31.2 | 13.2 | 40.3 |
| VLM-R1-3B-Math-0305 | 33.4 | 62.7 | 21.9 | 32.2 | 13.0 | 40.5 |
| Taichu-VLR-3B | 33.6 | 64.9 | 23.1 | 32.1 | 12.6 | 38.7 |
| VLAA-Thinker-Qwen2.5VL-3B | 35.4 | 61.0 | 24.4 | 36.4 | 18.2 | 38.5 |
| TBAC-VLR1-3B-preview | 35.7 | 64.8 | 25.0 | 33.2 | 17.7 | 40.8 |
| TBAC-VLR1-3B-SFT | 36.8 | 57.2 | 27.3 | 44.5 | 15.0 | 40.0 |
| TBAC-VLR1-3B | **38.7** | 58.2 | 29.0 | 45.3 | 16.1 | 44.9 |

<!--  -->

![performance](performance.png)

<!-- The compared results are sourced from https://opencompass.org.cn. -->

The results of our models are self-reported, obtained by running evaluations offline on each benchmark.

## Usage
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "TencentBAC/TBAC-VLR1-3B-SFT", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("TencentBAC/TBAC-VLR1-3B-SFT")

# Replace these placeholders with your own image and question.
image_path = "path/to/your/image.png"
query = "Solve the problem shown in the image."

messages = [
    {
        "role": "system",
        "content": "You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": image_path,
            },
            {"type": "text", "text": query},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generation of the output. The reasoning trace inside
# <think> </think> can be long, so budget generously for new tokens.
generated_ids = model.generate(**inputs, max_new_tokens=2048, do_sample=False)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
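
The model wraps its reasoning in `<think> </think>` tags and puts the final answer in `\boxed{}`, so you will typically post-process the decoded text to pull out the answer. A minimal sketch with a hypothetical helper (the regex handles only un-nested braces):

```python
import re

def extract_boxed_answer(text):
    """Return the content of the last \\boxed{...} in the output, or None.

    Simple sketch: the regex does not handle braces nested inside the box.
    """
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(output_text[0]))
```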
## Citation
If you find our model useful in your research, please consider giving it a ❤️ and a citation. Thanks!
```bibtex
@misc{Ou2025TBACVLR1,
    title = {TBAC-VLR1-3B},
    author = {Ou, Linyu and Xu, Junzhe and Yin, Yuyang},
    year = {2025},
    url = {https://huggingface.co/TencentBAC/TBAC-VLR1-3B},
}
```

---

**About**

Created by the Tencent PCG Basic Algorithm Center. All rights reserved.