Datasets: Enhance dataset card: Add paper, code, task category, and detailed description
#1, opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,3 +1,53 @@

Removed (previous card contents):
---
license: apache-2.0
---
Added (new card contents):

---
license: apache-2.0
task_categories:
- image-text-to-text
tags:
- vlm
- vqa
- reinforcement-learning
- efficiency
- multimodal
---

# VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
| 14 |
+
|
| 15 |
+
This repository contains the training and validation datasets for the **VisionThink** project, which introduces a smart and efficient vision-language model (VLM) paradigm via reinforcement learning. The datasets are used for training and evaluating VisionThink models, which are designed for dynamically processing distinct samples with different visual resolutions, thereby achieving strong fine-grained visual understanding on OCR-related tasks while significantly saving visual tokens on simpler tasks.
|
| 16 |
+
|
| 17 |
+
For more details on the VisionThink project and its methodology, please refer to the paper:
|
| 18 |
+
[VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning](https://huggingface.co/papers/2507.13348)
|
| 19 |
+
|
| 20 |
+
The code for the VisionThink project, including training and evaluation scripts, can be found on GitHub:
|
| 21 |
+
[https://github.com/dvlab-research/VisionThink](https://github.com/dvlab-research/VisionThink)
|
## Highlights

1. VisionThink leverages reinforcement learning to **autonomously** learn whether to reduce visual tokens. Compared with traditional efficient-VLM approaches, it achieves significant improvements on **fine-grained** benchmarks, such as OCR-related tasks.

2. VisionThink improves performance on **General VQA** tasks while reducing visual tokens by **50%**, achieving **102%** of the original model's performance across nine benchmarks.

3. VisionThink achieves strong performance and efficiency by simply resizing input images to reduce visual tokens. We hope this inspires further research into **Efficient Reasoning Vision Language Models**.
+
## Data Preparation
|
| 29 |
+
|
| 30 |
+
The datasets used in VisionThink are hosted separately on Hugging Face. You can download them using the `huggingface-cli` tool:
|
| 31 |
+
|
| 32 |
+
#### Dataset for General VQA
|
| 33 |
+
```bash
|
| 34 |
+
huggingface-cli download --repo-type dataset --resume-download Senqiao/VisionThink-General-Train --local-dir datasets/VisionThink-General-Train
|
| 35 |
+
huggingface-cli download --repo-type dataset --resume-download Senqiao/VisionThink-General-Val --local-dir datasets/VisionThink-General-Val
|
| 36 |
+
```
|
| 37 |
+
|
| 38 |
+
#### Dataset for Efficient Reasoning VLM
|
| 39 |
+
```bash
|
| 40 |
+
huggingface-cli download --repo-type dataset --resume-download Senqiao/VisionThink-Smart-Train --local-dir datasets/VisionThink-Smart-Train
|
| 41 |
+
huggingface-cli download --repo-type dataset --resume-download Senqiao/VisionThink-Smart-Val --local-dir datasets/VisionThink-Smart-Val
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
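If you prefer the Python API, the same snapshots can be fetched with `huggingface_hub.snapshot_download`. This is a minimal sketch: the helper names `local_dir_for` and `download_all` are illustrative, not part of the VisionThink project.

```python
from pathlib import Path

# The four dataset repositories listed in this card.
REPOS = [
    "Senqiao/VisionThink-General-Train",
    "Senqiao/VisionThink-General-Val",
    "Senqiao/VisionThink-Smart-Train",
    "Senqiao/VisionThink-Smart-Val",
]


def local_dir_for(repo_id: str) -> Path:
    """Mirror the CLI layout: datasets/<repo name without the user namespace>."""
    return Path("datasets") / repo_id.split("/", 1)[1]


def download_all() -> None:
    """Download every VisionThink dataset snapshot (requires `pip install huggingface_hub`)."""
    from huggingface_hub import snapshot_download

    for repo_id in REPOS:
        snapshot_download(
            repo_id=repo_id,
            repo_type="dataset",  # these are dataset repos, not model repos
            local_dir=str(local_dir_for(repo_id)),
        )
```

Calling `download_all()` mirrors the four CLI commands above, placing each snapshot under `datasets/`.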
+
## Citation
|
| 45 |
+
If you find this project useful in your research, please consider citing:
|
| 46 |
+
```bibtex
|
| 47 |
+
@article{yang2025visionthink,
|
| 48 |
+
title={VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning},
|
| 49 |
+
author={Yang, Senqiao and Li, Junyi and Lai, Xin and Yu, Bei and Zhao, Hengshuang and Jia, Jiaya},
|
| 50 |
+
journal={arXiv preprint arXiv:2507.13348},
|
| 51 |
+
year={2025}
|
| 52 |
+
}
|
| 53 |
+
```
|