Kangheng committed on
Commit 817ea2e · verified · 1 Parent(s): 3afe5d9

Update README.md

Files changed (1)
  1. README.md +22 -12
README.md CHANGED
@@ -10,28 +10,27 @@ base_model:
  ![title](./assets/title.png)

  <div align="center">
- <a href="https://arxiv.org/abs/2506.12000" target="blank">
+ <a href="https://github.com/linkangheng/Open-Vision-Reasoner/tree/main/paper/Open-Vision-Reasoner.pdf" target="blank" style="margin-right: 10px;">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-OVR-red?logo=arxiv" height="20" />
- </a> <a href="https://huggingface.co/ovr" target="blank">
+ </a><a href="https://huggingface.co/collections/Kangheng/ovr-686646849f9b43daccbe2fe0" target="blank" style="margin-right: 10px;">
  <img alt="HF Model: OVR" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Model-OVR-fb8740?&logoColor=white" height="20" />
- </a> <a href="https://huggingface.co/datasets/ovr" target="blank">
- <img alt="Dataset: OVR" src="https://img.shields.io/badge/%F0%9F%97%84%EF%B8%8F%20_Dataset-OVR-48b9d0?&logoColor=white" height="20" />
- </a> <a href="https://huggingface.co/spaces/ovr/demo" target="blank">
- <img alt="Demo: OVR" src="https://img.shields.io/badge/%F0%9F%9A%80%20_Demo-OVR-9368AB?&logoColor=white" height="20" />
+ </a><a href="" target="blank" style="margin-right: 10px;">
+ <img alt="Dataset: OVR" src="https://img.shields.io/badge/%F0%9F%97%84%EF%B8%8F%20_Dataset(coming)-OVR-48b9d0?&logoColor=white" height="20" />
  </a>
  </div>

+ ### Overview
  ![preview](./assets/preview.png)

- **OVR-7B-ColdStart** is the foundational checkpoint from the **Open Vision Reasoner (OVR)** project, trained on over 2 million text-based reasoning samples to learn powerful cognitive behaviors that naturally transfer to visual tasks.
+ The remarkable reasoning capability of Large Language Models (LLMs) stems from cognitive behaviors that emerge when reinforcing against verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock **advanced visual reasoning**.

- This model demonstrates a key finding: **linguistic cognitive patterns learned from text can effectively generalize to multimodal reasoning**, serving as the foundation for the final visual reasoning model `OVR-7B-RL`.
+ We introduce a two-stage paradigm built on Qwen2.5-VL-7B: a massive **text-only cold-start fine-tuning**, followed by **multimodal reinforcement learning** (RL) spanning nearly 1,000 steps, surpassing all prior open-source efforts in scale. This pioneering work reveals three fundamental insights:

- ### Model Highlights
+ 1. Behavior transfer emerges surprisingly early in cold start, driven by linguistic mental imagery.
+ 2. Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up the effective patterns.
+ 3. Transfer strategically favors high-utility behaviors such as visual reflection.

- - **Base Model:** Built on `Qwen2.5-VL-7B`.
- - **Training:** Pure text cold-start fine-tuning to establish core cognitive reasoning patterns
- - **Transfer Learning:** Cognitive behaviors acquired from language reasoning successfully transfer to visual tasks.
+ Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including **95.3%** on MATH500, **51.8%** on MathVision, and **54.6%** on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.

  ### Model Card
@@ -44,6 +43,17 @@ This model demonstrates a key finding: **linguistic cognitive patterns learned f
  ![performance-text](./assets/performance-text.png)
  ![performance-vision](./assets/performance-vision.png)

+ ### Training Dynamics and Performance Evolution
+
+ <p align="center">
+   <img width="45%" src="assets/coldstart_dynamic.png">
+   <img width="45%" src="assets/rl_dynamics.png">
+ </p>
+
+ <p align="center">
+   <img width="100%" src="assets/performance.png">
+ </p>
+
  ### Model Deployment

  ```bash
 
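# The diff is truncated here, inside the README's deployment snippet, so the
# commands below are a hedged sketch rather than the committed content. Assumed:
# serving with vLLM, and the repo id Kangheng/OVR-7B-ColdStart (inferred from the
# collection link above, not confirmed by this commit).
pip install vllm

# Expose the Qwen2.5-VL-based checkpoint as an OpenAI-compatible endpoint.
vllm serve Kangheng/OVR-7B-ColdStart --port 8000 --trust-remote-code
```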