---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---

![title](./assets/title.png)
arXiv | HF Model: OVR | Dataset: OVR
### Overview

![preview](./assets/preview.png)

The remarkable reasoning capability of Large Language Models (LLMs) stems from cognitive behaviors that emerge when reinforcing against verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock **advanced visual reasoning**. We introduce a two-stage paradigm built on Qwen2.5-VL-7B: massive **text-only cold-start fine-tuning**, followed by **multimodal reinforcement learning** (RL) spanning nearly 1,000 steps, surpassing all prior open-source efforts in scale.

This pioneering work reveals three fundamental insights:

1. Behavior transfer emerges surprisingly early in cold start due to linguistic mental imagery.
2. Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns.
3. Transfer strategically favors high-utility behaviors such as visual reflection.

Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including **95.3%** on MATH500, **51.8%** on MathVision, and **54.6%** on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.

### Model Card

| **Model** | **Description** | **Download** |
|:---------:|:---------------:|:------------:|
| OVR-7B-ColdStart | Intermediate model after massive language-only cold-start fine-tuning | [🤗 OVR-7B-ColdStart](https://huggingface.co/Kangheng/OVR-7B-ColdStart) |
| OVR-7B-RL | Final model after large-scale multimodal RL training | [🤗 OVR-7B-RL](https://huggingface.co/Kangheng/OVR-7B-RL) |

### Performance

![performance-text](./assets/performance-text.png)
![performance-vision](./assets/performance-vision.png)

### Training Dynamics and Performance Evolution

### Model Deployment

```bash
vllm serve Kangheng/OVR-7B-ColdStart \
  --port 8000 \
  --host 0.0.0.0 \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.6
```
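Once the server above is running, it exposes vLLM's standard OpenAI-compatible API. The sketch below builds a chat-completions request for it; the prompt, `temperature`, and `max_tokens` values are illustrative assumptions, and the host/port match the serve command above.

```python
import json

# Endpoint exposed by the vLLM serve command above (assumed host/port).
VLLM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "Kangheng/OVR-7B-ColdStart") -> dict:
    """Build an OpenAI-compatible chat-completions payload for the vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,   # illustrative sampling settings, not tuned values
        "max_tokens": 2048,
    }

payload = build_chat_request("Solve: what is 17 * 24?")
print(json.dumps(payload, indent=2))

# To actually query the running server:
#   import urllib.request
#   req = urllib.request.Request(
#       VLLM_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

Any OpenAI-style client (e.g. the `openai` Python package pointed at `http://localhost:8000/v1`) can be used instead of raw HTTP.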