---
base_model:
  - Qwen/Qwen2-VL-2B-Instruct
datasets:
  - tanhuajie2001/Reason-RFT-CoT-Dataset
language:
  - en
license: apache-2.0
metrics:
  - accuracy
pipeline_tag: image-text-to-text
library_name: transformers
---

# Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models

This repository contains model checkpoints from the project "Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning", as presented in the paper [Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models](https://arxiv.org/abs/2503.20752).

  β­οΈ Project   β”‚   πŸŒŽ Github   β”‚   πŸ”₯ Dataset   β”‚   πŸ“‘ Paper   β”‚   πŸ’¬ WeChat

  πŸ€– RoboBrain: Exploring the Reason-RFT paradigm to enhance RoboBrain's embodied reasoning capabilities.

## ♣️ Model List

| Tasks | Reason-RFT-Zero-2B | Reason-RFT-Zero-7B | Reason-RFT-2B | Reason-RFT-7B |
| --- | --- | --- | --- | --- |
| Visual Counting | πŸ€— VC-GRPO-Zero-2B | πŸ€— VC-GRPO-Zero-7B | πŸ€— VC-GRPO-2B | πŸ€— VC-GRPO-7B |
| Structure Perception | πŸ€— SP-GRPO-Zero-2B | πŸ€— SP-GRPO-Zero-7B | πŸ€— SP-GRPO-2B | πŸ€— SP-GRPO-7B |
| Spatial Transformation | πŸ€— ST-GRPO-Zero-2B | πŸ€— ST-GRPO-Zero-7B | πŸ€— ST-GRPO-2B | πŸ€— ST-GRPO-7B |
| Embodied Tasks | πŸ€– Stay Tuned | πŸ€– Stay Tuned | πŸ€– Stay Tuned | πŸ€– Stay Tuned |

## πŸ”₯ Overview

Visual reasoning abilities play a crucial role in understanding complex multimodal data, advancing both domain-specific applications and artificial general intelligence (AGI). Existing methods improve VLM reasoning via Chain-of-Thought (CoT) supervised fine-tuning on meticulously annotated training data. However, this training paradigm may lead to overfitting and cognitive rigidity, restricting the model's ability to transfer visual reasoning skills across domains and limiting its real-world applicability. To address these limitations, we propose Reason-RFT, a novel reinforcement fine-tuning framework that significantly enhances generalization in visual reasoning tasks.

Reason-RFT introduces a two-phase training framework: (1) Supervised Fine-Tuning (SFT) with curated Chain-of-Thought (CoT) data activates the reasoning potential of Vision-Language Models (VLMs); (2) Group Relative Policy Optimization (GRPO)-based reinforcement learning then samples multiple reasoning-response pairs per query and optimizes the policy with group-relative rewards, further improving generalization (see the sketch below).

To evaluate Reason-RFT's visual reasoning capabilities, we reconstructed a comprehensive dataset spanning visual counting, structure perception, and spatial transformation, serving as a benchmark to systematically assess visual cognition, geometric understanding, and spatial generalization. Experimental results demonstrate Reason-RFT's three key advantages: (1) Performance Enhancement: achieving state-of-the-art results across multiple tasks, outperforming most mainstream open-source and proprietary models; (2) Generalization Superiority: consistently maintaining robust performance across diverse tasks and domains, outperforming alternative training paradigms; (3) Data Efficiency: excelling in few-shot learning scenarios while surpassing full-dataset SFT baselines. Reason-RFT introduces a novel paradigm in visual reasoning, significantly advancing multimodal research.
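To make the GRPO step above concrete, here is a minimal sketch of the group-relative advantage computation that GRPO uses in place of a learned value function. This is a generic illustration, not the project's actual training code; the function name and the `eps` constant are illustrative.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """GRPO-style advantages from per-response rewards.

    rewards: tensor of shape (num_prompts, group_size), one scalar reward per
    sampled reasoning-response pair. Each response's advantage is its reward
    standardized within its own group, so no critic/value model is needed.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts with 4 sampled responses each
# (e.g., 0/1 answer-correctness rewards).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

Responses that beat their own group's average get positive advantages and are reinforced; below-average responses are suppressed.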

πŸ—žοΈ News

## ⭐️ Usage

Please refer to the Reason-RFT GitHub repository for more details.
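Since the checkpoints listed above are fine-tunes of Qwen2-VL-2B/7B-Instruct, the standard Qwen2-VL inference API in πŸ€— Transformers should apply. Below is a minimal sketch; the `model_id` is shown with the base model as a placeholder (substitute the checkpoint you picked from the Model List), and the image URL and prompt are only examples.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Placeholder: substitute the Reason-RFT checkpoint repo id from the Model List.
model_id = "Qwen/Qwen2-VL-2B-Instruct"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Any RGB image works; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/puzzle.png", stream=True).raw)

messages = [
    {"role": "user",
     "content": [
         {"type": "image"},
         {"type": "text", "text": "How many blocks are in the image? Reason step by step."},
     ]}
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding the generated answer.
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```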

## πŸ“‘ Citation

If you find this project useful, please consider citing:

```bibtex
@article{tan2025reason,
  title={Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning},
  author={Tan, Huajie and Ji, Yuheng and Hao, Xiaoshuai and Lin, Minglan and Wang, Pengwei and Wang, Zhongyuan and Zhang, Shanghang},
  journal={arXiv preprint arXiv:2503.20752},
  year={2025}
}

@article{team2025robobrain,
  title={RoboBrain 2.0 Technical Report},
  author={Team, BAAI RoboBrain and Cao, Mingyu and Tan, Huajie and Ji, Yuheng and Lin, Minglan and Li, Zhiyu and Cao, Zhou and Wang, Pengwei and Zhou, Enshen and Han, Yi and others},
  journal={arXiv preprint arXiv:2507.02029},
  year={2025}
}

@article{ji2025robobrain,
  title={RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete},
  author={Ji, Yuheng and Tan, Huajie and Shi, Jiayu and Hao, Xiaoshuai and Zhang, Yuan and Zhang, Hengyuan and Wang, Pengwei and Zhao, Mengdi and Mu, Yao and An, Pengju and others},
  journal={arXiv preprint arXiv:2502.21257},
  year={2025}
}
```