---
tags:
- Agents
- action prediction
- multimodal
- computer-use
- video-demonstrations
- desktop-automation
language:
- en
size_categories:
- 10K<n<100K
---

<p align="center">
  <img src="assets/cua-suite-logo.png" alt="CUA-Suite Logo" width="200"/>
</p>

<h1 align="center">VideoCUA</h1>

<p align="center">
  <strong>The largest open, human-annotated video corpus for desktop computer use</strong><br>
  Part of <a href="https://cua-suite.github.io/">CUA-Suite</a>: Massive Human-annotated Video Demonstrations for Computer-Use Agents
</p>

<p align="center">
  <a href="https://openreview.net/forum?id=IgTUGrZfMr">Paper</a> •
  <a href="https://cua-suite.github.io/">Project Page</a> •
  <a href="https://uivision.github.io/">UI-Vision</a> •
  <a href="https://groundcua.github.io/">GroundCUA</a>
</p>

<p align="center">
  <img src="assets/cua-suite-teaser.png" alt="CUA-Suite Teaser" width="100%"/>
</p>

## Overview

**VideoCUA** is the largest open expert video corpus for desktop computer use, comprising **~10K tasks**, **55 hours** of continuous 30 fps screen recordings, and **6 million frames** across **87 professional desktop applications** spanning 12 categories.

Unlike sparse screenshot datasets, VideoCUA preserves the full temporal dynamics of human interaction — every mouse movement, click, drag, scroll, and keystroke is logged with millisecond precision alongside continuous video. This enables research in action prediction, imitation learning, visual world models, and video-based reward modeling.

VideoCUA is part of [CUA-Suite](https://cua-suite.github.io/), a unified ecosystem that also includes:

- [**UI-Vision**](https://uivision.github.io/) — A rigorous desktop-centric benchmark evaluating element grounding, layout understanding, and action prediction.
- [**GroundCUA**](https://groundcua.github.io/) — A large-scale pixel-precise UI grounding dataset with 5M+ human-verified element annotations.

## Repository Structure

```
.
├── assets/
│   ├── cua-suite-logo.png
│   └── cua-suite-teaser.png
├── raw_data/            # One zip per application (87 total)
│   ├── 7-Zip.zip
│   ├── Affine.zip
│   ├── Anki.zip
│   ├── ...
│   └── draw.io.zip
└── README.md
```
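The per-application zips can be explored with Python's standard `zipfile` module. A minimal sketch, assuming a locally downloaded archive; `list_task_ids` and the tiny in-memory stand-in zip below are illustrative, not part of any official tooling:

```python
import io
import json
import zipfile

def list_task_ids(zip_path_or_file):
    """Return the numeric task IDs found at the top level of an application zip."""
    with zipfile.ZipFile(zip_path_or_file) as zf:
        top_level = {name.split("/")[0] for name in zf.namelist() if "/" in name}
        return sorted(int(t) for t in top_level if t.isdigit())

# For illustration only: build a tiny in-memory zip with the documented layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("45525/action_log.json", json.dumps({"task_id": 45525}))
    zf.writestr("45525/video/video.mp4", b"")

print(list_task_ids(buf))  # [45525]
```

In practice you would pass a path such as `raw_data/7-Zip.zip` after downloading it from this repository.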

## Data Format

Each application zip in `raw_data/` contains multiple task folders identified by numeric task IDs. Each task folder has the following structure:

```
<task_id>/
├── action_log.json          # Task metadata and timestamped actions
└── video/
    ├── video.mp4            # Continuous 30 fps screen recording (1920×1080)
    └── video_metadata.json  # Video properties (fps, duration, resolution, etc.)
```

### `action_log.json`

```json
{
  "task_id": 45525,
  "task_instruction": "Open test.7z present in archive it and see the contents",
  "platform": "7-Zip",
  "action_log": [
    {
      "action_type": "CLICK",
      "timestamp": 2.581,
      "action_params": {
        "x": 47,
        "y": 242,
        "numClicks": 2
      },
      "groundcua_id": "9a7daeed..."
    }
  ]
}
```
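Because the recordings run at a fixed 30 fps, each action's `timestamp` (in seconds) can be aligned to a frame index in `video.mp4`. A minimal sketch; the function name is illustrative, and in practice you would read the frame rate from `video_metadata.json` rather than hard-coding it:

```python
import json

def action_frame_indices(action_log, fps=30.0):
    """Map each logged action's timestamp (seconds) to its nearest frame index."""
    return [round(action["timestamp"] * fps) for action in action_log["action_log"]]

# Trimmed example mirroring the schema shown above.
log = json.loads("""{
  "task_id": 45525,
  "action_log": [
    {"action_type": "CLICK", "timestamp": 2.581,
     "action_params": {"x": 47, "y": 242, "numClicks": 2}}
  ]
}""")

print(action_frame_indices(log))  # [77]
```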

Each action entry includes a `groundcua_id` field — this is the unique identifier for the corresponding screenshot in the [GroundCUA](https://huggingface.co/datasets/ServiceNow/GroundCUA) repository. Using this ID, you can look up the fully annotated screenshot (with pixel-precise bounding boxes, textual labels, and semantic categories for every visible UI element) in GroundCUA, linking the video action trajectory to dense UI grounding annotations.
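One way to exploit this link is to group a trajectory's actions by their `groundcua_id` before fetching the matching GroundCUA annotations. A minimal sketch, with an illustrative helper name:

```python
def index_by_groundcua_id(action_log):
    """Group a task's actions by the GroundCUA screenshot they reference."""
    index = {}
    for action in action_log["action_log"]:
        gid = action.get("groundcua_id")
        if gid is not None:
            index.setdefault(gid, []).append(action)
    return index

# Trimmed example mirroring the schema shown above.
log = {
    "task_id": 45525,
    "action_log": [
        {"action_type": "CLICK", "timestamp": 2.581, "groundcua_id": "9a7daeed..."}
    ],
}

index = index_by_groundcua_id(log)
print(sorted(index))  # ['9a7daeed...']
```

Each key of the resulting dict can then be looked up directly in GroundCUA to pair the action with dense UI element annotations.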

## Citation

If you find VideoCUA or any of the other works in CUA-Suite useful for your research, please cite the corresponding papers:

```bibtex
@inproceedings{jian2026cuasuite,
  title={{CUA}-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents},
  author={Xiangru Jian and Shravan Nayak and Kevin Qinghong Lin and Aarash Feizi and Kaixin Li and Patrice Bechard and Spandana Gella and Sai Rajeswar},
  booktitle={ICLR 2026 Workshop on Lifelong Agents: Learning, Aligning, Evolving},
  year={2026},
  url={https://openreview.net/forum?id=IgTUGrZfMr}
}

@inproceedings{feizi2026grounding,
  title={Grounding Computer Use Agents on Human Demonstrations},
  author={Aarash Feizi and Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Kaixin Li and Rabiul Awal and Xing Han L{\`u} and Johan Obando-Ceron and Juan A. Rodriguez and Nicolas Chapados and David Vazquez and Adriana Romero-Soriano and Reihaneh Rabbany and Perouz Taslakian and Christopher Pal and Spandana Gella and Sai Rajeswar},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=9WiPZy3Kro}
}

@inproceedings{nayak2025uivision,
  title={{UI}-Vision: A Desktop-centric {GUI} Benchmark for Visual Perception and Interaction},
  author={Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Juan A. Rodriguez and Montek Kalsi and Nicolas Chapados and M. Tamer {\"O}zsu and Aishwarya Agrawal and David Vazquez and Christopher Pal and Perouz Taslakian and Spandana Gella and Sai Rajeswar},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=5Rtj4mYH1C}
}
```

## License

This dataset is released under the [MIT License](LICENSE).