Add comprehensive model card for GUI-AIMA-3B
This PR adds a comprehensive model card for the GUI-AIMA-3B model.
It includes:
- Relevant metadata tags: `license`, `library_name`, and `pipeline_tag`.
- A clear description of the model from the paper's abstract.
- Links to the Hugging Face paper page and the GitHub repository.
- Installation instructions from the GitHub README.
- Main results and illustrative figures from the GitHub README.
- A reference to the sample usage available in the GitHub repository.
- Acknowledgements and citation information.
Please review and merge this PR to enhance the discoverability and usability of the model on the Hugging Face Hub.
README.md (added):
---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding

This repository hosts the **GUI-AIMA-3B** model, which is presented in the paper [GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding](https://huggingface.co/papers/2511.00810).

## Project Page and Code

The code and project details can be found on our GitHub repository: [https://github.com/sjz5202/GUI-AIMA](https://github.com/sjz5202/GUI-AIMA)

## Abstract

Graphical user interface (GUI) grounding is a key function of computer-use agents, which maps natural-language instructions to actionable screen regions. Existing approaches based on Multimodal Large Language Models (MLLMs) typically formulate it as a text-based coordinate generation task, yet directly generating precise coordinates from visual inputs remains challenging and computationally intensive. An intuitive way to implement GUI grounding is to first select visual patches relevant to the instructions and then determine the precise click location within those patches. Based on the observation that general MLLMs have some native grounding capability, nested within their attentions, we propose GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. GUI-AIMA aligns the intrinsic multimodal attention of MLLMs with patch-wise grounding signals. These signals are calculated adaptively for diverse user instructions by multi-head aggregation on simplified query-visual attention matrices. Besides, its coordinate-free manner can easily integrate a plug-and-play zoom-in stage. GUI-AIMA-3B was trained with only 85k screenshots, demonstrating exceptional data efficiency and verifying that light training can trigger the native grounding capability of MLLMs. It achieves state-of-the-art performance among 3B models, attaining an average accuracy of 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G.

## Model Overview

<div align="center">
<img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/comparison.png?raw=true" width="85%">
</div>

Figure 1. **GUI-AIMA** utilizes the inherent attention of MLLMs for patch-wise GUI grounding. It simplifies vanilla attention-based grounding, which requires proper aggregation of all query tokens' grounding vectors, by adding a learnable ANCHOR token as a context anchor for the query. Multi-head aggregation of the attention vectors between the ANCHOR token and the visual tokens is sufficient for grounding.

<div align="center">
<img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/main_fig.png?raw=true" width="85%">
</div>

Figure 2. **GUI-AIMA** proposes an effective multi-head weighting approach that measures the uniformity between the global query-visual attention pattern and each head's query-visual attention pattern.

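The head-weighting idea in Figure 2 can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: the array shapes, the use of cosine similarity as the agreement measure, and the softmax over heads are all assumptions made for the sketch.

```python
import numpy as np

# Toy setup (hypothetical shapes): H attention heads, each with an
# attention vector from the ANCHOR token over N visual patch tokens.
H, N = 8, 64
rng = np.random.default_rng(0)
attn = rng.random((H, N))
attn /= attn.sum(axis=1, keepdims=True)  # normalize each head's attention

# Global query-visual pattern: the head-averaged attention vector.
global_pattern = attn.mean(axis=0)

# Weight each head by how well its pattern agrees with the global one
# (cosine similarity here is an assumption; the paper's uniformity
# measure may differ).
sims = (attn @ global_pattern) / (
    np.linalg.norm(attn, axis=1) * np.linalg.norm(global_pattern)
)
weights = np.exp(sims) / np.exp(sims).sum()  # softmax over heads

# Aggregate heads into a single patch-wise grounding distribution
# and pick the most attended patch.
grounding = weights @ attn  # shape (N,)
best_patch = int(grounding.argmax())
```

Because each head's attention and the head weights are both normalized, the aggregated `grounding` vector remains a probability distribution over patches.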
## Main Results

There are two variants of GUI-AIMA: [GUI-AIMA-3B](https://huggingface.co/smz8599/GUI-AIMA-3B) and [GUI-AIMA-3B (soft)](https://huggingface.co/smz8599/GUI-AIMA-3B-kl), which differ slightly in their multi-head weighting.

One-step inference with GUI-AIMA achieves **47.1%** on ScreenSpot-Pro and **56.9%** on OSWorld-G. With two-step zoom-in inference, it reaches **58.6%** and **62.2%**, respectively.

We trained GUI-AIMA for one-step center-point prediction. However, **GUI-AIMA can run two-step inference without further fine-tuning**: (step 1) a first inference pass determines a rough grounding area; (step 2) that area is cropped and zoomed in for a second, more precise grounding pass. Two-step inference is especially helpful for GUI grounding on high-resolution screenshots, such as the samples in ScreenSpot-Pro and OSWorld-G.

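The two-step zoom-in amounts to simple coordinate bookkeeping: crop a window around the first-pass prediction, ground again on the zoomed crop, then map the refined point back to full-screen coordinates. A minimal sketch follows; the window size, zoom factor, and function names are illustrative, not the repository's API.

```python
def crop_window(cx, cy, img_w, img_h, win_w, win_h):
    """Top-left corner of a win_w x win_h crop centered on (cx, cy),
    clamped so the crop stays inside the screenshot."""
    left = min(max(cx - win_w // 2, 0), img_w - win_w)
    top = min(max(cy - win_h // 2, 0), img_h - win_h)
    return left, top

def to_full_coords(left, top, x_in_crop, y_in_crop, zoom):
    """Map a prediction made on the zoomed crop back to the original
    screenshot's pixel coordinates."""
    return left + x_in_crop / zoom, top + y_in_crop / zoom

# Step 1 predicted a rough center at (3800, 2100) on a 5120x2880 screenshot.
left, top = crop_window(3800, 2100, 5120, 2880, 1280, 720)
# Step 2 ran on the crop upscaled 2x and clicked at (700, 400) in crop pixels.
x, y = to_full_coords(left, top, 700, 400, zoom=2.0)
```

The clamping keeps the crop valid near screen edges, which matters for high-resolution screenshots where targets often sit in corners.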
<div align="left">
<img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/ss_pro.png?raw=true" width="100%">
</div>

<div align="left">
<img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/osworld-g.png?raw=true" width="80%">
</div>

<div align="left">
<img src="https://github.com/sjz5202/GUI-AIMA/raw/main/assets/images/ss_v2.png?raw=true" width="85%">
</div>

## Installation

1. Environment:
```bash
git clone https://github.com/sjz5202/GUI-AIMA
cd GUI-AIMA
conda create -n gui_aima python=3.10
conda activate gui_aima
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install -e .
```

## Sample Usage

A single-sample inference example is available in `eval/example_inference.py` within the [GitHub repository](https://github.com/sjz5202/GUI-AIMA).

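Because GUI-AIMA is coordinate-free, its raw output is a score map over visual patches rather than coordinate text. As a rough illustration of how such a map becomes a click point, the sketch below assumes a 28-pixel patch stride (Qwen2.5-VL merges 2x2 patches of 14 px into one visual token); the function and score map are hypothetical, and `eval/example_inference.py` shows the actual pipeline.

```python
import numpy as np

PATCH = 28  # assumed stride of one merged Qwen2.5-VL visual token, in pixels

def scores_to_click(scores, grid_w):
    """Convert a flat patch-wise score vector into a pixel click point
    at the center of the highest-scoring patch."""
    idx = int(np.argmax(scores))
    row, col = divmod(idx, grid_w)
    return (col + 0.5) * PATCH, (row + 0.5) * PATCH

# Hypothetical score map for a 1120x784 screenshot -> 40x28 patch grid.
grid_w, grid_h = 1120 // PATCH, 784 // PATCH
scores = np.zeros(grid_w * grid_h)
scores[3 * grid_w + 10] = 1.0  # peak at row 3, column 10
x, y = scores_to_click(scores, grid_w)
```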
## Acknowledgements

GUI-AIMA is built upon the following projects:
- [GUI-Actor](https://github.com/microsoft/GUI-Actor)
- [TAG](https://github.com/HeimingX/TAG.git)
- [GTA1](https://github.com/Yan98/GTA1)
- [Transformers](https://github.com/huggingface/transformers)
- [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL)
- [AGUVIS](https://github.com/xlang-ai/aguvis)
- [OS-Atlas](https://github.com/OS-Copilot/OS-Atlas)
- [SeeClick](https://github.com/njucckevin/SeeClick)

Thanks for their great work!

## Citation

If you find our work helpful or inspiring, please feel free to cite it.

```bibtex
@misc{zhou2025guiaimaaligningintrinsicmultimodal,
      title={GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding},
      author={Shijie Zhou and Viet Dac Lai and Hao Tan and Jihyung Kil and Wanrong Zhu and Changyou Chen and Ruiyi Zhang},
      year={2025},
      eprint={2511.00810},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.00810},
}
```