---
license: mit
tags:
- GUI
- CUA
- Agents
- action prediction
- multimodal
- computer-use
- video-demonstrations
- desktop-automation
language:
- en
size_categories:
- 10K<n<100K
---
<p align="center">
<img src="assets/cua-suite-logo.png" alt="CUA-Suite Logo" width="120"/>
</p>
<h1 align="center"><font size="7">VideoCUA</font></h1>
<p align="center">
<strong>The largest open, human-annotated video corpus for desktop computer use</strong><br>
Part of <a href="https://cua-suite.github.io/">CUA-Suite</a>: Massive Human-annotated Video Demonstrations for Computer-Use Agents
</p>
<p align="center">
<a href="https://arxiv.org/abs/2603.24440">Paper</a> •
<a href="https://cua-suite.github.io/">Project Page</a> •
<a href="https://github.com/ServiceNow/GroundCUA/tree/main/VideoCUA">GitHub</a> •
<a href="https://uivision.github.io/">UI-Vision</a> •
<a href="https://groundcua.github.io/">GroundCUA</a>
</p>
<p align="center">
<img src="assets/cua-suite-teaser.png" alt="CUA-Suite Teaser" width="100%"/>
</p>
## Overview
**VideoCUA** is the largest open expert video corpus for desktop computer use, comprising **~10K tasks**, **55 hours** of continuous 30 fps screen recordings, and **6 million frames** across **87 professional desktop applications** spanning 12 categories.
Unlike sparse screenshot datasets, VideoCUA preserves the full temporal dynamics of human interaction — every mouse movement, click, drag, scroll, and keystroke is logged with millisecond precision alongside continuous video. This enables research in action prediction, imitation learning, visual world models, and video-based reward modeling.
VideoCUA is part of [CUA-Suite](https://cua-suite.github.io/), a unified ecosystem that also includes:
- [**UI-Vision**](https://uivision.github.io/) — A rigorous desktop-centric benchmark evaluating element grounding, layout understanding, and action prediction.
- [**GroundCUA**](https://groundcua.github.io/) — A large-scale pixel-precise UI grounding dataset with 5M+ human-verified element annotations.
## Repository Structure
```
.
├── assets/
│   ├── cua-suite-logo.png
│   └── cua-suite-teaser.png
├── raw_data/                 # One zip per application (87 total)
│   ├── 7-Zip.zip
│   ├── Affine.zip
│   ├── Anki.zip
│   ├── ...
│   └── draw.io.zip
└── README.md
```
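Each application archive can be unpacked with Python's standard `zipfile` module. A minimal sketch; the archive name and output directory below are illustrative choices, not fixed by the dataset:

```python
import zipfile
from pathlib import Path

archive = Path("raw_data/7-Zip.zip")        # any of the 87 application zips
out_dir = Path("extracted") / archive.stem  # e.g. extracted/7-Zip

with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)

# Each task folder contains one action_log.json (layout described below);
# rglob keeps this robust to however the zip nests its contents.
task_logs = sorted(out_dir.rglob("action_log.json"))
print(len(task_logs), "tasks in", archive.name)
```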
## Data Format
Each application zip in `raw_data/` contains multiple task folders identified by numeric task IDs. Each task folder has the following structure:
```
<task_id>/
├── action_log.json             # Task metadata and timestamped actions
└── video/
    ├── video.mp4               # Continuous 30 fps screen recording (1920×1080)
    └── video_metadata.json     # Video properties (fps, duration, resolution, etc.)
```
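Both JSON files can be read directly. A minimal sketch, assuming a task has been extracted to the hypothetical path below; the keys used here appear in the `action_log.json` example that follows:

```python
import json
from pathlib import Path

task_dir = Path("extracted/7-Zip/45525")  # hypothetical extracted task path

with open(task_dir / "action_log.json") as f:
    log = json.load(f)
with open(task_dir / "video" / "video_metadata.json") as f:
    video_meta = json.load(f)

print(log["task_instruction"])                  # natural-language task description
print(len(log["action_log"]), "logged actions")
print(video_meta.get("fps"), "fps")             # exact metadata keys may differ
```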
### `action_log.json`
```json
{
  "task_id": 45525,
  "task_instruction": "Open test.7z present in archive it and see the contents",
  "platform": "7-Zip",
  "action_log": [
    {
      "action_type": "CLICK",
      "timestamp": 2.581,
      "action_params": {
        "x": 47,
        "y": 242,
        "numClicks": 2
      },
      "groundcua_id": "9a7daeed..."
    }
  ]
}
```
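Because actions carry millisecond-precision timestamps (in seconds) and the recording is continuous 30 fps, the frame shown at any action is simply `round(timestamp * fps)`. A sketch of that alignment using OpenCV (not a dataset dependency; the task path is again hypothetical):

```python
import json
from pathlib import Path

import cv2  # pip install opencv-python

task_dir = Path("extracted/7-Zip/45525")  # hypothetical extracted task path
log = json.loads((task_dir / "action_log.json").read_text())

cap = cv2.VideoCapture(str(task_dir / "video" / "video.mp4"))
fps = cap.get(cv2.CAP_PROP_FPS)  # 30 per the recording spec above

for action in log["action_log"]:
    frame_idx = round(action["timestamp"] * fps)  # seconds -> frame index
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)   # seek to that frame
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"{action['action_type']}_{frame_idx:06d}.png", frame)

cap.release()
```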
Each action entry includes a `groundcua_id` field: the unique identifier of the corresponding screenshot in the [GroundCUA](https://huggingface.co/datasets/ServiceNow/GroundCUA) repository. Looking up this ID retrieves the fully annotated screenshot, with pixel-precise bounding boxes, textual labels, and semantic categories for every visible UI element, and thereby links the video action trajectory to dense UI grounding annotations.
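A hedged sketch of that lookup, continuing from the `log` loaded earlier. The GroundCUA split and column names used here (`"train"`, `"id"`) are assumptions; consult the GroundCUA dataset card for its actual schema:

```python
from datasets import load_dataset  # pip install datasets

# Assumed split and ID column — verify against the GroundCUA card.
groundcua = load_dataset("ServiceNow/GroundCUA", split="train")
id_to_row = {gid: i for i, gid in enumerate(groundcua["id"])}

action = log["action_log"][0]                        # from the log loaded above
annotated = groundcua[id_to_row[action["groundcua_id"]]]
```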
## Citation
If you find VideoCUA or any of the other works in CUA-Suite useful for your research, please cite the corresponding papers:
```bibtex
@inproceedings{
jian2026cuasuite,
title={{CUA}-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents},
author={Xiangru Jian and Shravan Nayak and Kevin Qinghong Lin and Aarash Feizi and Kaixin Li and Patrice Bechard and Spandana Gella and Sai Rajeswar},
booktitle={ICLR 2026 Workshop on Lifelong Agents: Learning, Aligning, Evolving},
year={2026},
url={https://openreview.net/forum?id=IgTUGrZfMr}
}
@inproceedings{
feizi2026grounding,
title={Grounding Computer Use Agents on Human Demonstrations},
author={Aarash Feizi and Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Kaixin Li and Rabiul Awal and Xing Han L{\`u} and Johan Obando-Ceron and Juan A. Rodriguez and Nicolas Chapados and David Vazquez and Adriana Romero-Soriano and Reihaneh Rabbany and Perouz Taslakian and Christopher Pal and Spandana Gella and Sai Rajeswar},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=9WiPZy3Kro}
}
@inproceedings{
nayak2025uivision,
title={{UI}-Vision: A Desktop-centric {GUI} Benchmark for Visual Perception and Interaction},
author={Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Juan A. Rodriguez and Montek Kalsi and Nicolas Chapados and M. Tamer {\"O}zsu and Aishwarya Agrawal and David Vazquez and Christopher Pal and Perouz Taslakian and Spandana Gella and Sai Rajeswar},
booktitle={Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=5Rtj4mYH1C}
}
```
## License
This dataset is released under the [MIT License](LICENSE). |