---
license: mit
tags:
- GUI
- CUA
- Agents
- action prediction
- multimodal
- computer-use
- video-demonstrations
- desktop-automation
language:
- en
size_categories:
- 10K<n<100K
---

<p align="center">
  <img src="assets/cua-suite-logo.png" alt="CUA-Suite Logo" width="120"/>
</p>
|
|
<h1 align="center"><font size="7">VideoCUA</font></h1>
|
|
<p align="center">
  <strong>The largest open, human-annotated video corpus for desktop computer use</strong><br>
  Part of <a href="https://cua-suite.github.io/">CUA-Suite</a>: Massive Human-annotated Video Demonstrations for Computer-Use Agents
</p>
|
|
<p align="center">
  <a href="https://arxiv.org/abs/2603.24440">Paper</a> •
  <a href="https://cua-suite.github.io/">Project Page</a> •
  <a href="https://github.com/ServiceNow/GroundCUA/tree/main/VideoCUA">GitHub</a> •
  <a href="https://uivision.github.io/">UI-Vision</a> •
  <a href="https://groundcua.github.io/">GroundCUA</a>
</p>
|
|
<p align="center">
  <img src="assets/cua-suite-teaser.png" alt="CUA-Suite Teaser" width="100%"/>
</p>
|
|
## Overview
|
|
**VideoCUA** is the largest open expert video corpus for desktop computer use, comprising **~10K tasks**, **55 hours** of continuous 30 fps screen recordings, and **6 million frames** across **87 professional desktop applications** spanning 12 categories.
|
|
Unlike sparse screenshot datasets, VideoCUA preserves the full temporal dynamics of human interaction: every mouse movement, click, drag, scroll, and keystroke is logged with millisecond precision alongside continuous video. This enables research in action prediction, imitation learning, visual world models, and video-based reward modeling.
|
|
VideoCUA is part of [CUA-Suite](https://cua-suite.github.io/), a unified ecosystem that also includes:
|
|
- [**UI-Vision**](https://uivision.github.io/): a rigorous desktop-centric benchmark evaluating element grounding, layout understanding, and action prediction.
- [**GroundCUA**](https://groundcua.github.io/): a large-scale, pixel-precise UI grounding dataset with 5M+ human-verified element annotations.
|
|
## Repository Structure
|
|
```
.
├── assets/
│   ├── cua-suite-logo.png
│   └── cua-suite-teaser.png
├── raw_data/              # One zip per application (87 total)
│   ├── 7-Zip.zip
│   ├── Affine.zip
│   ├── Anki.zip
│   ├── ...
│   └── draw.io.zip
└── README.md
```
|
|
## Data Format
|
|
Each application zip in `raw_data/` contains multiple task folders identified by numeric task IDs. Each task folder has the following structure:
|
|
```
<task_id>/
├── action_log.json            # Task metadata and timestamped actions
└── video/
    ├── video.mp4              # Continuous 30 fps screen recording (1920×1080)
    └── video_metadata.json    # Video properties (fps, duration, resolution, etc.)
```
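For illustration, here is a minimal sketch of a loader for this layout. The `load_task` helper and the synthetic task folder it reads are hypothetical, not part of any official tooling; only the file names (`action_log.json`, `video/video_metadata.json`) come from the structure above.

```python
import json
import tempfile
from pathlib import Path


def load_task(task_dir: Path) -> dict:
    """Load one VideoCUA task folder: the action log plus video metadata."""
    with open(task_dir / "action_log.json") as f:
        task = json.load(f)
    with open(task_dir / "video" / "video_metadata.json") as f:
        task["video_metadata"] = json.load(f)
    return task


# Build a tiny synthetic task folder matching the layout above
# (in practice this comes from extracting an application zip in raw_data/).
root = Path(tempfile.mkdtemp()) / "45525"
(root / "video").mkdir(parents=True)
(root / "action_log.json").write_text(json.dumps({
    "task_id": 45525,
    "task_instruction": "demo",
    "platform": "7-Zip",
    "action_log": [{"action_type": "CLICK", "timestamp": 2.581,
                    "action_params": {"x": 47, "y": 242, "numClicks": 2}}],
}))
(root / "video" / "video_metadata.json").write_text(
    json.dumps({"fps": 30, "resolution": [1920, 1080]}))

task = load_task(root)
print(task["task_id"], task["video_metadata"]["fps"])  # 45525 30
```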
|
|
### `action_log.json`

```json
{
  "task_id": 45525,
  "task_instruction": "Open test.7z present in archive it and see the contents",
  "platform": "7-Zip",
  "action_log": [
    {
      "action_type": "CLICK",
      "timestamp": 2.581,
      "action_params": {
        "x": 47,
        "y": 242,
        "numClicks": 2
      },
      "groundcua_id": "9a7daeed..."
    }
  ]
}
```

Each action entry includes a `groundcua_id` field: the unique identifier of the corresponding screenshot in the [GroundCUA](https://huggingface.co/datasets/ServiceNow/GroundCUA) repository. Using this ID, you can look up the fully annotated screenshot (with pixel-precise bounding boxes, textual labels, and semantic categories for every visible UI element) in GroundCUA, linking the video action trajectory to dense UI grounding annotations.
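Because the recordings run at a fixed 30 fps and actions carry millisecond-resolution timestamps, an action can be aligned with its video frame by a simple multiplication. A sketch (the assumption that frame 0 corresponds to t = 0 should be checked against `video_metadata.json`):

```python
def frame_index(timestamp_s: float, fps: float = 30.0) -> int:
    """Map an action timestamp in seconds to the nearest frame of video.mp4.

    Assumes frame 0 corresponds to t = 0; verify against video_metadata.json.
    """
    return round(timestamp_s * fps)


# The example CLICK above fires at t = 2.581 s:
print(frame_index(2.581))  # 77
```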
|
|
## Citation

If you find VideoCUA or any other part of CUA-Suite useful for your research, please cite the following works:
|
|
```bibtex
@inproceedings{
jian2026cuasuite,
title={{CUA}-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents},
author={Xiangru Jian and Shravan Nayak and Kevin Qinghong Lin and Aarash Feizi and Kaixin Li and Patrice Bechard and Spandana Gella and Sai Rajeswar},
booktitle={ICLR 2026 Workshop on Lifelong Agents: Learning, Aligning, Evolving},
year={2026},
url={https://openreview.net/forum?id=IgTUGrZfMr}
}

@inproceedings{
feizi2026grounding,
title={Grounding Computer Use Agents on Human Demonstrations},
author={Aarash Feizi and Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Kaixin Li and Rabiul Awal and Xing Han L{\`u} and Johan Obando-Ceron and Juan A. Rodriguez and Nicolas Chapados and David Vazquez and Adriana Romero-Soriano and Reihaneh Rabbany and Perouz Taslakian and Christopher Pal and Spandana Gella and Sai Rajeswar},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=9WiPZy3Kro}
}

@inproceedings{
nayak2025uivision,
title={{UI}-Vision: A Desktop-centric {GUI} Benchmark for Visual Perception and Interaction},
author={Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Juan A. Rodriguez and Montek Kalsi and Nicolas Chapados and M. Tamer {\"O}zsu and Aishwarya Agrawal and David Vazquez and Christopher Pal and Perouz Taslakian and Spandana Gella and Sai Rajeswar},
booktitle={Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=5Rtj4mYH1C}
}
```
|
|
## License

This dataset is released under the [MIT License](LICENSE).