Improve model card: Add pipeline tag, library name, and refine content

#1
Opened by nielsr (HF Staff)
Files changed (1)
  1. README.md (+23, −7)
````diff
--- a/README.md
+++ b/README.md
@@ -1,13 +1,15 @@
 ---
-license: apache-2.0
-language:
-- en
+base_model:
+- Qwen/Qwen2-VL-2B-Instruct
 datasets:
 - tanhuajie2001/Reason-RFT-CoT-Dataset
+language:
+- en
+license: apache-2.0
 metrics:
 - accuracy
-base_model:
-- Qwen/Qwen2-VL-2B-Instruct
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
 <div align="center">
@@ -15,7 +17,7 @@ base_model:
 </div>
 
 # 🤗 Reason-RFT CoT Dateset
-*The model checkpoints in our project "Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning"*.
+*This repository hosts the model checkpoints for the project "Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models", as presented in the paper [Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models](https://arxiv.org/abs/2503.20752).*
 
 
 <p align="center">
@@ -63,7 +65,7 @@ Experimental results demonstrate Reasoning-RFT's three key advantages: **(1) Per
 
 ## ⭐️ Usage
 
-*Please refer to [Reason-RFT](https://github.com/tanhuajie/Reason-RFT) for more details.*
+For detailed instructions on how to use the models, including inference code and setup, please refer to the [Reason-RFT GitHub repository](https://github.com/tanhuajie/Reason-RFT#--usage).
 
 ## 📑 Citation
 If you find this project useful, welcome to cite us.
@@ -74,4 +76,18 @@ If you find this project useful, welcome to cite us.
 journal={arXiv preprint arXiv:2503.20752},
 year={2025}
 }
+
+@article{team2025robobrain,
+title={Robobrain 2.0 technical report},
+author={Team, BAAI RoboBrain and Cao, Mingyu and Tan, Huajie and Ji, Yuheng and Lin, Minglan and Li, Zhiyu and Cao, Zhou and Wang, Pengwei and Zhou, Enshen and Han, Yi and others},
+journal={arXiv preprint arXiv:2507.02029},
+year={2025}
+}
+
+@article{ji2025robobrain,
+title={RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete},
+author={Ji, Yuheng and Tan, Huajie and Shi, Jiayu and Hao, Xiaoshuai and Zhang, Yuan and Zhang, Hengyuan and Wang, Pengwei and Zhao, Mengdi and Mu, Yao and An, Pengju and others},
+journal={arXiv preprint arXiv:2502.21257},
+year={2025}
+}
 ```
````
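The metadata keys this PR adds (`pipeline_tag`, `library_name`, and the reordered `base_model` list) live in the flat YAML front matter between the `---` markers. As a quick illustration of that layout, the sketch below reads the new front matter into a dict; note it is a hand-rolled parser covering only this flat key/list subset, not real YAML, and is meant only to show which fields the Hub can now pick up.

```python
# Minimal sketch: extract the flat front-matter fields this PR adds to README.md.
# Hand-rolled parsing for this flat key/list layout only (not a full YAML parser).

FRONT_MATTER = """\
---
base_model:
- Qwen/Qwen2-VL-2B-Instruct
datasets:
- tanhuajie2001/Reason-RFT-CoT-Dataset
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-text-to-text
library_name: transformers
---
"""

def parse_front_matter(text: str) -> dict:
    """Parse the flat YAML subset between the --- markers into a dict."""
    body = text.split("---")[1]          # content between the two markers
    data: dict = {}
    key = None
    for line in body.strip().splitlines():
        if line.startswith("- "):        # list item under the previous key
            data[key].append(line[2:].strip())
        else:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            data[key] = value if value else []   # empty value starts a list
    return data

meta = parse_front_matter(FRONT_MATTER)
print(meta["pipeline_tag"])   # image-text-to-text
print(meta["library_name"])   # transformers
```

With `pipeline_tag: image-text-to-text` and `library_name: transformers` present, the Hub can route the repo to the right task page and show the standard `transformers` usage widget, which is the point of the change.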