---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---

<div align="center">
<a href="https://github.com/linkangheng/Open-Vision-Reasoner/tree/main/paper/Open-Vision-Reasoner.pdf" target="_blank" style="margin-right: 10px;">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-OVR-red?logo=arxiv" height="20" />
</a><a href="https://huggingface.co/collections/Kangheng/ovr-686646849f9b43daccbe2fe0" target="_blank" style="margin-right: 10px;">
<img alt="HF Model: OVR" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Model-OVR-fb8740?&logoColor=white" height="20" />
</a><a href="" target="_blank" style="margin-right: 10px;">
<img alt="Dataset: OVR" src="https://img.shields.io/badge/%F0%9F%97%84%EF%B8%8F%20_Dataset(coming)-OVR-48b9d0?&logoColor=white" height="20" />
</a>
</div>

### Overview

The remarkable reasoning capability of Large Language Models (LLMs) stems from cognitive behaviors that emerge when reinforcement learning is applied with verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock **advanced visual reasoning**.
We introduce a two-stage paradigm built on Qwen2.5-VL-7B: a massive **text-only cold-start fine-tuning**, followed by **multimodal reinforcement learning** (RL) spanning nearly 1,000 steps—surpassing all prior open-source efforts in scale. This pioneering work reveals three fundamental insights:
1. Behavior transfer emerges surprisingly early in cold start due to linguistic mental imagery.
2. Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns.
3. Transfer strategically favors high-utility behaviors such as visual reflection.

Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including **95.3%** on MATH500, **51.8%** on MathVision, and **54.6%** on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.
### Model Card
| **Model** | **Description** | **Download** |
|:---------:|:---------------:|:------------:|
| OVR-7B-ColdStart | Intermediate model after massive language-only cold-start fine-tuning | [🤗 OVR-7B-ColdStart](https://huggingface.co/Kangheng/OVR-7B-ColdStart) |
| OVR-7B-RL | Final model after large-scale multimodal RL training | [🤗 OVR-7B-RL](https://huggingface.co/Kangheng/OVR-7B-RL) |
### Performance


### Training Dynamics and Performance Evolution
<table align="center">
<tr>
<td align="center">
<img width="100%" src="assets/coldstart_dynamic.png">
</td>
<td align="center">
<img width="100%" src="assets/rl_dynamics.png">
</td>
</tr>
</table>
<br>
<p align="center">
<img width="100%" src="assets/performance.png">
</p>

### Model Deployment
```bash
vllm serve Kangheng/OVR-7B-ColdStart --port 8000 --host 0.0.0.0 --tensor-parallel-size 1 --gpu-memory-utilization 0.6
```
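
Once the server is running, it exposes vLLM's OpenAI-compatible `/v1/chat/completions` endpoint. The sketch below builds a request payload for a multimodal query; the image URL, prompt text, and sampling settings are illustrative placeholders, not values from the paper.

```python
import json

# vLLM serves an OpenAI-compatible chat endpoint at
# http://localhost:8000/v1/chat/completions (port from the serve command above).
# The "model" field must match the model id passed to `vllm serve`.
payload = {
    "model": "Kangheng/OVR-7B-ColdStart",
    "messages": [
        {
            "role": "user",
            "content": [
                # Placeholder image URL; replace with your own image.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/problem.png"}},
                {"type": "text",
                 "text": "Solve the problem shown in the image step by step."},
            ],
        }
    ],
    # Illustrative sampling settings, not tuned values from the paper.
    "temperature": 0.6,
    "max_tokens": 2048,
}

# Send with any HTTP client, e.g.:
#   import requests
#   reply = requests.post("http://localhost:8000/v1/chat/completions",
#                         json=payload).json()
#   print(reply["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

The same payload works against the `OVR-7B-RL` checkpoint by changing the `model` field and the model id given to `vllm serve`.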