Add initial dataset card for GUI-AIMA dataset

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +65 -0
README.md ADDED
---
task_categories:
- image-text-to-text
language:
- en
tags:
- gui-grounding
- multimodal
---

# GUI-AIMA Dataset

This repository contains the dataset (likely the 85k screenshots) used to train GUI-AIMA, as presented in the paper [GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding](https://huggingface.co/papers/2511.00810).

GUI-AIMA is an attention-based, coordinate-free supervised fine-tuning framework for efficient GUI grounding. It maps natural-language instructions to actionable screen regions by aligning the intrinsic multimodal attention of Multimodal Large Language Models (MLLMs) with patch-wise grounding signals.

## Abstract

Graphical user interface (GUI) grounding is a key function of computer-use agents: it maps natural-language instructions to actionable screen regions. Existing approaches based on Multimodal Large Language Models (MLLMs) typically formulate it as a text-based coordinate generation task, yet directly generating precise coordinates from visual inputs remains challenging and computationally intensive. An intuitive way to implement GUI grounding is to first select visual patches relevant to the instructions and then determine the precise click location within those patches. Based on the observation that general MLLMs have some native grounding capability nested within their attentions, we propose GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. GUI-AIMA aligns the intrinsic multimodal attention of MLLMs with patch-wise grounding signals. These signals are calculated adaptively for diverse user instructions by multi-head aggregation on simplified query-visual attention matrices. Besides, its coordinate-free manner can easily integrate a plug-and-play zoom-in stage. GUI-AIMA-3B was trained with only 85k screenshots, demonstrating exceptional data efficiency and verifying that light training can trigger the native grounding capability of MLLMs. It achieves state-of-the-art performance among 3B models, attaining an average accuracy of 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G. Project page: https://github.com/sjz5202/GUI-AIMA
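
The patch-selection idea in the abstract can be illustrated with a toy example, not taken from the paper: tile a screenshot into fixed-size patches and treat the index of the patch containing a ground-truth click as the grounding target (the function name and the 14px patch size are illustrative assumptions; the paper's actual signals come from attention aggregation):

```python
def click_to_patch(x: int, y: int, width: int, height: int, patch: int = 14) -> int:
    """Map a pixel click (x, y) to the index of the patch that contains it,
    with patches numbered row-major over the image grid."""
    cols = (width + patch - 1) // patch  # patch columns, rounding up
    return (y // patch) * cols + (x // patch)

# A 28x28 image with 14px patches forms a 2x2 grid:
# patch 0 is top-left, 1 top-right, 2 bottom-left, 3 bottom-right.
```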

## Project Page and Code

The project page and code for GUI-AIMA, which utilizes this dataset, can be found on GitHub:
[https://github.com/sjz5202/GUI-AIMA](https://github.com/sjz5202/GUI-AIMA)

## Data Preparation

To prepare the necessary data for training models with the GUI-AIMA framework, follow these steps:

1. Download the GUI-Actor data from [here](https://huggingface.co/datasets/cckevinn/GUI-Actor-Data).
2. Download the UGround single-round dialogue JSON data from [here](https://huggingface.co/datasets/smz8599/UGround-single).
3. Download the GTA1 data without the web part from [here](https://huggingface.co/datasets/smz8599/GTA_data_no_web).

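The three downloads above can also be scripted with `huggingface_hub` — a sketch, not part of the official pipeline; the `local_root` directory layout is an assumption:

```python
# Fetch the three datasets listed in the steps above.
# Repo ids are taken from the links; the local directory layout is illustrative.
DATASETS = [
    "cckevinn/GUI-Actor-Data",
    "smz8599/UGround-single",
    "smz8599/GTA_data_no_web",
]

def download_all(local_root: str = "data") -> list[str]:
    # Lazy import so that inspecting DATASETS does not require huggingface_hub.
    from huggingface_hub import snapshot_download

    paths = []
    for repo_id in DATASETS:
        name = repo_id.split("/")[-1]
        path = snapshot_download(
            repo_id=repo_id,
            repo_type="dataset",
            local_dir=f"{local_root}/{name}",
        )
        paths.append(path)
    return paths
```

Calling `download_all()` mirrors the three manual download steps and returns the local snapshot paths.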
## Sample Usage

To set up the environment and explore the GUI-AIMA project, follow the installation steps provided in the GitHub repository:

### Installation

```bash
git clone https://github.com/sjz5202/GUI-AIMA
cd GUI-AIMA
conda create -n gui_aima python=3.10
conda activate gui_aima
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install -e .
```

A minimal single-sample inference example is available in `eval/example_inference.py` in the cloned repository.

## Citation

If you find this dataset or the associated work useful, please cite the paper:

```bibtex
@misc{zhou2025guiaimaaligningintrinsicmultimodal,
  title={GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding},
  author={Shijie Zhou and Viet Dac Lai and Hao Tan and Jihyung Kil and Wanrong Zhu and Changyou Chen and Ruiyi Zhang},
  year={2025},
  eprint={2511.00810},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.00810},
}
```