---
license: apache-2.0
task_categories:
- image-text-to-text
---
# GUI-Drag Dataset

[**Project Page**](https://osu-nlp-group.github.io/GUI-Drag) | [**Paper**](https://huggingface.co/papers/2601.06031) | [**GitHub**](https://github.com/OSU-NLP-Group/GUI-Drag)
GUI-Drag is a diverse dataset of 161K text-dragging examples synthesized through a scalable pipeline. It is designed to advance Graphical User Interface (GUI) grounding beyond simple clicking, focusing on an essential interaction: dragging the mouse to select and manipulate textual content.

The dataset is built on screenshots from UGround, Jedi, and additional public paper-style academic documents.
**NOTE**: Before using this dataset, make sure you understand how absolute coordinates interact with the [image processor](https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-utils/src/qwen_vl_utils/vision_process.py#L60) of [Qwen2.5-VL](https://arxiv.org/abs/2502.13923).

The annotations in this dataset were generated with the image processor's maximum token count set to 2700, i.e. `max_pixels = 2700 * 14 * 14 * 2 * 2`. The coordinates were resized into that reduced image space, so you must resize each image with the same `max_pixels` setting via the image processor for the coordinates to align. Apply the same setting in your training procedure as well; otherwise, performance will not be as expected.
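To make the alignment concrete, here is a minimal sketch of the smart-resize rounding used by the Qwen2.5-VL image processor, with `max_pixels = 2700 * 28 * 28` as for this dataset. This is not the official implementation (see the linked `vision_process.py` for that), and the helper `rescale_point` is a hypothetical name added for illustration.

```python
import math

# Sketch of Qwen2.5-VL-style "smart resize": dimensions are rounded to
# multiples of the patch-grid factor (14 * 2 = 28), then scaled down so the
# total pixel count stays within max_pixels.
FACTOR = 28
MAX_PIXELS = 2700 * 28 * 28   # max tokens 2700, as used for this dataset
MIN_PIXELS = 4 * 28 * 28

def smart_resize(height: int, width: int,
                 factor: int = FACTOR,
                 min_pixels: int = MIN_PIXELS,
                 max_pixels: int = MAX_PIXELS) -> tuple[int, int]:
    """Return (resized_height, resized_width), both multiples of `factor`."""
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        # Too many pixels: shrink while keeping aspect ratio, round down.
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        # Too few pixels: enlarge while keeping aspect ratio, round up.
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar

def rescale_point(x: float, y: float,
                  orig_w: int, orig_h: int) -> tuple[float, float]:
    """Map an original-image coordinate into the resized space the labels use."""
    new_h, new_w = smart_resize(orig_h, orig_w)
    return x * new_w / orig_w, y * new_h / orig_h
```

If you train or evaluate with a different `max_pixels`, the dataset's absolute coordinates will point at the wrong pixels, which is why the processor setting must match.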
## Citation

If you find our data, models, benchmark, or other resources useful, please consider citing:
```bibtex
@article{wu2025beyond,
  title={Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging},
  author={Wu, et al.},
  journal={arXiv preprint arXiv:2601.06031},
  year={2025}
}
```