---
language:
- en
tags:
- computer_use
- agents
- grounding
- multimodal
- ui-vision
- GroundCUA
size_categories:
- "1M<n<10M"
license: mit
task_categories:
- image-to-text
---
<h1 align="center" style="font-size:42px; font-weight:700;">
GroundCUA: Grounding Computer Use Agents on Human Demonstrations
</h1>
<p align="center">
🌐 <a href="https://groundcua.github.io">Website</a> |
📄 <a href="https://arxiv.org/abs/2511.07332">Paper</a> |
🤗 <a href="https://huggingface.co/datasets/ServiceNow/GroundCUA">Dataset</a> |
🤗 <a href="https://huggingface.co/ServiceNow/GroundNext-7B-V0">Models</a>
</p>
<p align="center">
<img src="assets/groundcua-hq.png" width="100%" alt="GroundCUA Overview">
</p>
# GroundCUA Dataset
GroundCUA is a large and diverse dataset of real UI screenshots paired with structured annotations for building multimodal computer use agents. It covers **87 software platforms** across productivity tools, browsers, creative tools, communication apps, development environments, and system utilities. GroundCUA is designed for research on GUI grounding, UI perception, and vision-language-action models that interact with computers.
---
## Highlights
- **87 platforms** spanning Windows, macOS, Linux, and cross-platform apps
- **Annotated UI elements** with bounding boxes, text, and coarse semantic categories
- **SHA-256 file pairing** between screenshots and JSON annotations
- **Supports research on GUI grounding, multimodal agents, and UI understanding**
- **MIT license** for broad academic and open source use
---
## Dataset Structure
```
GroundCUA/
├── data/     # JSON annotation files
├── images/   # Screenshot images
└── README.md
```
### Directory Layout
Each platform appears as a directory name inside both `data/` and `images/`.
- `data/PlatformName/` contains annotation JSON files
- `images/PlatformName/` contains corresponding PNG screenshots
A screenshot and its annotation file share the same SHA-256 hash as their filename, which links the two.
---
## File Naming Convention
Each screenshot has a matching annotation file using the same hash:
- `data/PlatformName/[hash].json`
- `images/PlatformName/[hash].png`
This structure ensures:
- Unique identifiers for each screenshot
- Easy pairing between images and annotations
- Compatibility with pipelines that expect hash-based addressing
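The hash-based naming makes pairing straightforward. The sketch below (a minimal helper of our own, assuming the exact `data/`/`images/` layout described above) matches each annotation JSON to its screenshot by shared filename stem:

```python
from pathlib import Path

def pair_annotations(root: str) -> list[tuple[Path, Path]]:
    """Pair each annotation file with its screenshot via the shared hash stem.

    Assumes the GroundCUA layout:
      data/PlatformName/[hash].json  <->  images/PlatformName/[hash].png
    Pairs whose image is missing are silently skipped.
    """
    root_path = Path(root)
    pairs = []
    for ann in sorted((root_path / "data").glob("*/*.json")):
        # Same platform directory, same hash stem, .png extension.
        img = root_path / "images" / ann.parent.name / (ann.stem + ".png")
        if img.exists():
            pairs.append((ann, img))
    return pairs
```

Iterating over `pair_annotations("GroundCUA")` then yields `(annotation_path, image_path)` tuples ready for a dataloader.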
---
## Annotation Format
Each annotation file is a list of UI element entries describing visible elements in the screenshot.
```json
[
{
"image_path": "PlatformName/screenshot_hash.png",
"bbox": [x1, y1, x2, y2],
"text": "UI element text",
"category": "Element category",
"id": "unique-id"
}
]
```
### Field Descriptions
**image_path**
Relative path to the screenshot.
**bbox**
Bounding box coordinates `[x1, y1, x2, y2]` in pixel space.
**text**
Visible text or a short description of the element.
**category**
Coarse UI type label. Present only for some elements.
**id**
Unique identifier for the annotation entry.
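Since `category` is present only for some elements, a loader should treat it as optional. The sketch below (our own helper, not part of the dataset tooling) parses one annotation file into a normalised form, defaulting the optional field and deriving box dimensions:

```python
import json

def load_elements(annotation_json: str) -> list[dict]:
    """Parse the contents of one GroundCUA annotation file.

    Each entry keeps the documented fields; `category` defaults to None
    when absent, and box width/height are derived from [x1, y1, x2, y2].
    """
    parsed = []
    for el in json.loads(annotation_json):
        x1, y1, x2, y2 = el["bbox"]
        parsed.append({
            "image_path": el["image_path"],
            "bbox": (x1, y1, x2, y2),
            "width": x2 - x1,
            "height": y2 - y1,
            "text": el.get("text", ""),
            "category": el.get("category"),  # optional field
            "id": el["id"],
        })
    return parsed
```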
---
## UI Element Categories
Categories are approximate and not guaranteed for all elements. Examples include:
- **Button**
- **Menu**
- **Input Elements**
- **Navigation**
- **Sidebar**
- **Visual Elements**
- **Information Display**
- **Others**
These labels provide light structure for UI grounding tasks but do not form a full ontology.
---
## Example Use Cases
GroundCUA can be used for:
- Training computer use agents to perceive and understand UI layouts
- Building GUI grounding modules for VLA agents
- Pretraining screen parsing and UI element detectors
- Benchmarking OCR, layout analysis, and cross-platform UI parsing
- Developing models that map UI regions to natural language or actions
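For the agent-facing use cases, a grounded element is typically converted into an action target. One common convention (an assumption on our part, not something the dataset prescribes) is to click the centre of the predicted bounding box:

```python
def bbox_to_click(bbox: tuple[int, int, int, int]) -> tuple[int, int]:
    """Map a [x1, y1, x2, y2] box in pixel space to a click coordinate.

    Uses the box centre, a common heuristic for turning a grounded
    UI element into a pointer action.
    """
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) // 2, (y1 + y2) // 2)
```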
---
## Citation
If you use GroundCUA in your research, please cite our work:
```bibtex
@misc{feizi2025groundingcomputeruseagents,
title={Grounding Computer Use Agents on Human Demonstrations},
      author={Aarash Feizi and Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Kaixin Li and Rabiul Awal and Xing Han Lù and Johan Obando-Ceron and Juan A. Rodriguez and Nicolas Chapados and David Vazquez and Adriana Romero-Soriano and Reihaneh Rabbany and Perouz Taslakian and Christopher Pal and Spandana Gella and Sai Rajeswar},
year={2025},
eprint={2511.07332},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2511.07332},
}
```
## License
GroundCUA is released under the MIT License.
Users are responsible for ensuring compliance with all applicable laws and policies.