Improve model card: Add pipeline tag, library name, update content
#1
by nielsr (HF Staff) - opened

README.md CHANGED
---
base_model:
- Qwen/Qwen2-VL-2B-Instruct
datasets:
- tanhuajie2001/Reason-RFT-CoT-Dataset
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-text-to-text
library_name: transformers
---

# Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models

This repository contains the model checkpoints from the project "[Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models](https://huggingface.co/papers/2503.20752)".

<div align="center">
<img src="https://github.com/tanhuajie/Reason-RFT/raw/main/assets/logo.png" width="500"/>
</div>

<p align="center">
<a href="https://tanhuajie.github.io/ReasonRFT/">Project</a> &nbsp;|&nbsp; <a href="https://github.com/tanhuajie/Reason-RFT">Github</a> &nbsp;|&nbsp; <a href="https://huggingface.co/datasets/tanhuajie2001/Reason-RFT-CoT-Dataset">Dataset</a> &nbsp;|&nbsp; <a href="https://arxiv.org/abs/2503.20752">ArXiv</a> &nbsp;|&nbsp; <a href="https://github.com/tanhuajie/Reason-RFT/raw/main/assets/wechat.png">WeChat</a>
</p>

| Spatial Transformation | [🤗ST-GRPO-Zero-2B](https://huggingface.co/tanhuajie2001/Reason-RFT-Zero-Spatial-Transformation-Qwen2-VL-2B) | [🤗ST-GRPO-Zero-7B](https://huggingface.co/tanhuajie2001/Reason-RFT-Zero-Spatial-Transformation-Qwen2-VL-7B) | [🤗ST-GRPO-2B](https://huggingface.co/tanhuajie2001/Reason-RFT-Spatial-Transformation-Qwen2-VL-2B) | [🤗ST-GRPO-7B](https://huggingface.co/tanhuajie2001/Reason-RFT-Spatial-Transformation-Qwen2-VL-7B) |
| ***Embodied Tasks*** | 🤗 *Stay Tuned* | 🤗 *Stay Tuned* | 🤗 *Stay Tuned* | 🤗 *Stay Tuned* |

## 🔥 Overview

Visual reasoning abilities play a crucial role in understanding complex multimodal data, advancing both domain-specific applications and artificial general intelligence (AGI). Existing methods improve VLM reasoning via Chain-of-Thought (CoT) supervised fine-tuning, using meticulously annotated training data to enhance visual reasoning capabilities. However, this training paradigm may lead to overfitting and cognitive rigidity, restricting the model's ability to transfer visual reasoning skills across domains and limiting its real-world applicability. To address these limitations, we propose **Reason-RFT**, a novel reinforcement fine-tuning framework that significantly enhances generalization capabilities in visual reasoning tasks. **Reason-RFT** introduces a two-phase training framework for visual reasoning: (1) Supervised Fine-Tuning (SFT) with curated Chain-of-Thought (CoT) data activates the reasoning potential of Vision-Language Models (VLMs), followed by (2) Group Relative Policy Optimization (GRPO)-based reinforcement learning that generates multiple reasoning-response pairs, significantly enhancing generalization in visual reasoning tasks. To evaluate **Reason-RFT**'s visual reasoning capabilities, we reconstructed a comprehensive dataset spanning visual counting, structure perception, and spatial transformation, serving as a benchmark to systematically assess visual cognition, geometric understanding, and spatial generalization. Experimental results demonstrate Reason-RFT's three key advantages: **(1) Performance Enhancement**: achieving state-of-the-art results across multiple tasks, outperforming most mainstream open-source and proprietary models;
**(2) Generalization Superiority**: consistently maintaining robust performance across diverse tasks and domains, outperforming alternative training paradigms;
**(3) Data Efficiency**: excelling in few-shot learning scenarios while surpassing full-dataset SFT baselines.
**Reason-RFT** introduces a novel paradigm in visual reasoning, significantly advancing multimodal research.
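
The group-relative signal behind the GRPO phase can be made concrete with a short sketch: for each image-question pair, several reasoning-response chains are sampled and scored, and each chain's advantage is its reward normalized against the group's mean and standard deviation. The snippet below illustrates only that normalization step; the reward values and group size are hypothetical placeholders rather than the authors' training code (see the GitHub repository for the actual implementation).

```python
import statistics
from typing import List

def group_relative_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """Normalize each sampled response's reward against its own group (the core GRPO signal)."""
    mean_r = statistics.fmean(rewards)
    std_r = statistics.pstdev(rewards)
    return [(r - mean_r) / (std_r + eps) for r in rewards]

# Hypothetical example: 4 reasoning-response chains sampled for one visual-counting question,
# each scored by a simple rule-based reward (1.0 for a correct final answer, 0.0 otherwise).
rewards = [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # correct chains get positive advantage, incorrect ones negative
```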

## News

- **`2025-09-18`**: 🔥🔥🔥 **Reason-RFT** has been accepted to NeurIPS 2025! See you in Mexico City and San Diego, USA!
- **`2025-06-06`**: 🤖 We're excited to announce the release of our more powerful [RoboBrain 2.0](https://github.com/FlagOpen/RoboBrain2.0), trained with Reason-RFT.
- **`2025-04-13`**: ✨ We released our [model zoo](https://github.com/tanhuajie/Reason-RFT?tab=readme-ov-file#--model-zoo) on Hugging Face.
- **`2025-04-04`**: 🤗 We released our [datasets](https://huggingface.co/datasets/tanhuajie2001/Reason-RFT-CoT-Dataset/) on Hugging Face for [General Visual Reasoning Tasks](#GeneralVisualTasks).
- **`2025-04-02`**: 🔥 We released code and scripts for training/evaluation on [General Visual Reasoning Tasks](#GeneralVisualTasks).
- **`2025-03-29`**: We released the [repository](https://github.com/tanhuajie/Reason-RFT/) and [roadmap](#RoadMap) for **Reason-RFT**.
- **`2025-03-26`**: We released the initial [arXiv paper](https://arxiv.org/abs/2503.20752/) for **Reason-RFT**.

## <a id="Method">Pipeline</a>

<div align="center">
<img src="https://github.com/tanhuajie/Reason-RFT/raw/main/assets/pipeline.png" />
</div>

## Usage

*Please refer to [Reason-RFT](https://github.com/tanhuajie/Reason-RFT) for more details.*
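
Since these checkpoints are fine-tuned from Qwen2-VL-Instruct models and tagged with `library_name: transformers`, a minimal inference sketch following the base model's standard Transformers usage might look like the snippet below. The checkpoint name, image URL, and prompt here are illustrative assumptions; the repository's own inference scripts remain the authoritative reference.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # separate helper package: pip install qwen-vl-utils

# Illustrative checkpoint from the model zoo above; swap in the variant you need.
model_id = "tanhuajie2001/Reason-RFT-Spatial-Transformation-Qwen2-VL-2B"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical visual-reasoning query (image URL and question are placeholders).
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/spatial_puzzle.png"},
        {"type": "text", "text": "After rotating the object 90 degrees clockwise, which option matches? Think step by step."},
    ],
}]

# Build the chat prompt, collect the vision inputs, and generate a reasoning-style answer.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```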

## <a id="EmbodiedVisualReasoningTasks">🤖 Embodied Visual Reasoning Tasks</a>

We apply Reason-RFT to train the more powerful RoboBrain 2.0. Please refer to the [RoboBrain 2.0 Github](https://github.com/FlagOpen/RoboBrain2.0) for more details. Here is a simplified comparison result (not the final version):

<div align="center">
<img src="https://github.com/tanhuajie/Reason-RFT/raw/main/assets/rb2_res.png" />
</div>

## Citation

If you find this project useful, please consider citing us.
```bib
  journal={arXiv preprint arXiv:2503.20752},
  year={2025}
}

@article{team2025robobrain,
  title={RoboBrain 2.0 Technical Report},
  author={Team, BAAI RoboBrain and Cao, Mingyu and Tan, Huajie and Ji, Yuheng and Lin, Minglan and Li, Zhiyu and Cao, Zhou and Wang, Pengwei and Zhou, Enshen and Han, Yi and others},
  journal={arXiv preprint arXiv:2507.02029},
  year={2025}
}

@article{ji2025robobrain,
  title={RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete},
  author={Ji, Yuheng and Tan, Huajie and Shi, Jiayu and Hao, Xiaoshuai and Zhang, Yuan and Zhang, Hengyuan and Wang, Pengwei and Zhao, Mengdi and Mu, Yao and An, Pengju and others},
  journal={arXiv preprint arXiv:2502.21257},
  year={2025}
}
```