nielsr (HF Staff) committed on
Commit b1d67b9 · verified · 1 Parent(s): e8d4b46

Enhance OpenVLThinker-7B model card: Add paper info, abstract, and descriptive tags


This PR significantly improves the model card for OpenVLThinker-7B by:

- **Adding comprehensive tags to metadata**: Enhances discoverability on the Hub with relevant keywords like `reasoning`, `multimodal`, `vlm`, `math`, and `visual-question-answering`, extracted from the paper's abstract.
- **Introducing a prominent paper section**: Provides immediate access to the paper's title, direct Hugging Face paper page link, and the full abstract, offering users crucial context about the model.
- **Refining the citation**: Updates the BibTeX entry's URL to the official Hugging Face paper page and corrects the title, ensuring accurate and consistent referencing within the Hub. The citation block format is also explicitly set to `bibtex` for proper rendering.

These updates aim to make the model card more informative, discoverable, and aligned with Hugging Face Hub documentation best practices.
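As a quick sanity check for metadata edits like the tag additions above, the YAML front matter of a model card can be inspected with only the standard library. The `front_matter_tags` helper below is a hypothetical illustration, not part of this PR; it is a minimal sketch that assumes the front matter is delimited by `---` lines and that `tags:` uses a plain `- item` list, as in this README:

```python
def front_matter_tags(readme_text):
    """Return the `tags:` entries from a model card's YAML front matter."""
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return []  # no front matter block
    tags, in_tags = [], False
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter of the front matter
            break
        if line.rstrip() == "tags:":
            in_tags = True
            continue
        if in_tags:
            if line.lstrip().startswith("- "):
                tags.append(line.lstrip()[2:].strip())
            else:
                in_tags = False  # end of the tags list
    return tags


readme = """---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- reasoning
- multimodal
- vlm
- math
- visual-question-answering
---
# OpenVLThinker
"""

print(front_matter_tags(readme))
# → ['reasoning', 'multimodal', 'vlm', 'math', 'visual-question-answering']
```

For real model cards, a YAML parser (or `huggingface_hub`'s model card utilities) would be more robust than this line-based scan.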

Files changed (1)
  1. README.md (+18 −4)
README.md CHANGED
````diff
@@ -1,11 +1,25 @@
 ---
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
-license: apache-2.0
 library_name: transformers
+license: apache-2.0
 pipeline_tag: image-text-to-text
+tags:
+- reasoning
+- multimodal
+- vlm
+- math
+- visual-question-answering
 ---
 
+# OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles
+
+This model was presented in the paper [OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles](https://huggingface.co/papers/2503.17352).
+
+## Abstract
+
+We introduce OpenVLThinker, one of the first open-source large vision-language models (LVLMs) to exhibit sophisticated chain-of-thought reasoning, achieving notable performance gains on challenging visual reasoning tasks. While text-based reasoning models (e.g., Deepseek R1) show promising results in text-only tasks, distilling their reasoning into LVLMs via supervised fine-tuning (SFT) often results in performance degradation due to imprecise visual grounding. Conversely, purely reinforcement learning (RL)-based methods face a large search space, hindering the emergence of reflective behaviors in smaller models (e.g., 7B LVLMs). Surprisingly, alternating between SFT and RL ultimately results in significant performance improvements after a few iterations. Our analysis reveals that the base model rarely exhibits reasoning behaviors initially, but SFT effectively surfaces these latent actions and narrows the RL search space, accelerating the development of reasoning capabilities. Each subsequent RL stage further refines the model's reasoning skills, producing higher-quality SFT data for continued self-improvement. OpenVLThinker-7B consistently advances performance across six benchmarks demanding mathematical and general reasoning, notably improving MathVista by 3.8%, EMMA by 2.4%, and HallusionBench by 1.6%. Beyond demonstrating the synergy between SFT and RL for complex reasoning tasks, our findings provide early evidence towards achieving R1-style reasoning in multimodal contexts. The code, model and data are held at this https URL .
+
 ## Overview
 OpenVLThinker-7B is a vision-language reasoning model designed to handle multimodal tasks. It is especially tuned for visual mathematical problem-solving.
 
@@ -85,14 +99,14 @@ print(generated_text)
 ```
 
 ### Citation
-```text
+```bibtex
 @misc{deng2025openvlthinker,
-title={OpenVLThinker: An Early Exploration to Complex Vision-Language Reasoning via Iterative Self-Improvement},
+title={OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles},
 author={Yihe Deng and Hritik Bansal and Fan Yin and Nanyun Peng and Wei Wang and Kai-Wei Chang},
 year={2025},
 eprint={2503.17352},
 archivePrefix={arXiv},
 primaryClass={cs.CV},
-url={https://arxiv.org/abs/2503.17352},
+url={https://huggingface.co/papers/2503.17352},
 }
 ```
````