Add paper link, code link, and task category
#1, opened by nielsr (HF Staff)

Files changed (1): README.md (+21 −2)
````diff
@@ -1,11 +1,30 @@
 ---
 license: apache-2.0
+task_categories:
+- image-text-to-text
 ---
 
-Our dataset is built upon Uground, Jedi, and additional public paper-style academic document screenshots.
-
-Project webpage: https://osu-nlp-group.github.io/GUI-Drag
+# GUI-Drag Dataset
+
+[**Project Page**](https://osu-nlp-group.github.io/GUI-Drag) | [**Paper**](https://huggingface.co/papers/2601.06031) | [**GitHub**](https://github.com/OSU-NLP-Group/GUI-Drag)
+
+GUI-Drag is a diverse dataset of 161K text-dragging examples synthesized through a scalable pipeline. It is designed to advance Graphical User Interface (GUI) grounding beyond simple clicking, focusing on the essential interaction of dragging the mouse to select and manipulate textual content.
+
+Our dataset is built upon Uground, Jedi, and additional public paper-style academic document screenshots.
 
 **NOTE**: Before you use this dataset, make sure you understand the logic of absolute coordinates and the [image processor](https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-utils/src/qwen_vl_utils/vision_process.py#L60) for [Qwen2.5-VL](https://arxiv.org/abs/2502.13923).
 This dataset was built with the image processor's maximum image tokens set to 2700 (i.e., max_pixels = 2700x14x14x2x2), so all coordinates were rescaled to the downsized images; you must resize your images the same way, with max_pixels = 2700x14x14x2x2, via the image processor so that images and coordinates align.
 Make sure you also follow this in your training procedure; otherwise, performance will not be as expected.
+
+## Citation
+
+If you find our data, model, benchmark, or the general resources useful, please consider citing:
+
+```bibtex
+@article{wu2025beyond,
+  title={Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging},
+  author={Wu, et al.},
+  journal={arXiv preprint arXiv:2601.06031},
+  year={2025}
+}
+```
````
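The image/coordinate alignment the NOTE warns about can be sketched as follows. This is a minimal re-implementation of the rounding logic in the linked `vision_process.py` (Qwen2.5-VL rounds each side to a multiple of 28 and caps total pixels at `max_pixels`), not a drop-in replacement for it — in practice you should use `qwen_vl_utils` itself. The `rescale_point` helper is a hypothetical name added here to show how a dataset coordinate maps between the original and resized image.

```python
import math

FACTOR = 28                            # 14-px patches with a 2x2 token merge
MAX_PIXELS = 2700 * 14 * 14 * 2 * 2    # the dataset's stated 2700-token budget
MIN_PIXELS = 4 * 28 * 28               # assumed small floor, for completeness

def smart_resize(height: int, width: int,
                 factor: int = FACTOR,
                 max_pixels: int = MAX_PIXELS,
                 min_pixels: int = MIN_PIXELS) -> tuple[int, int]:
    """Return (new_height, new_width): sides rounded to multiples of
    `factor`, scaled down if the area exceeds `max_pixels`."""
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    if h * w > max_pixels:
        # Shrink proportionally, flooring to factor multiples so the
        # area stays within budget.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

def rescale_point(x: float, y: float, orig_w: int, orig_h: int) -> tuple[float, float]:
    """Map an absolute coordinate on the original image to the resized
    image that the processor actually sees (hypothetical helper)."""
    new_h, new_w = smart_resize(orig_h, orig_w)
    return x * new_w / orig_w, y * new_h / orig_h
```

For example, a 1920x1080 screenshot fits the budget and is only snapped to multiples of 28 (1932x1092), so a drag endpoint at (960, 540) moves to (966.0, 546.0); if you skip this step, predicted coordinates and pixels will disagree, which is exactly the misalignment the NOTE cautions against.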