---
license: cc-by-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
tags:
- Tecky
- UI
- VLM
---
|
|
# Tecky UI Automation Dataset

## 🧠 Tecky: Your True Virtual Employee

**Tecky** is an AI-powered virtual teammate that lives on your machine. It learns how you interact with applications and automates workflows across both public tools and internal software.

Tecky acts as a **context-aware agent**: it observes how users complete digital tasks and suggests or executes those tasks through a human-like interface.

---
|
|
|
|
|
## 📊 Dataset Overview

This dataset is the foundation for training **Vision-Language Models (VLMs)** to understand **graphical user interfaces (GUIs)** in desktop environments. It contains tens of thousands of labeled UI elements from real-world application screenshots.

> This dataset focuses solely on **UI element detection and understanding**.
> Workflow modeling and full action-sequence understanding will be released in a future dataset.

With this data, models can:

- Parse GUI screenshots
- Identify actionable elements such as buttons, input fields, and sliders
- Predict appropriate click targets from natural-language instructions
- Learn general UI semantics across diverse applications

---
|
|
|
|
|
## 🎯 Goal

The main objective is to train VLMs that:

- Understand the visual structure of desktop environments
- Predict user actions (such as click targets) from screenshots and instructions
- Learn generic UI semantics across applications and layouts
- Enable automation agents like Tecky to reason and act visually

The dataset is a unified, normalized collection drawn from open sources, focused on real desktop UIs.

---
|
|
|
|
|
## 📁 Dataset Structure

Each entry in the JSONL files includes:

- `system` prompt (task context for Tecky)
- `user` message with:
  - `image`: path to a screenshot
  - `text`: an instruction (e.g. “click the settings icon”)
- `assistant` response: a JSON object with:
  - `actions`: an array of predicted clicks
  - `status_update`: a short explanation of the action

**Image paths** are relative to the repository root and organized in folders such as `images_split/00000-09999/`.
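A hypothetical record illustrating this schema (the path, coordinates, and wording below are invented for illustration, not taken from the actual files):

```python
import json

# Hypothetical record matching the schema above; values are illustrative only.
record = {
    "system": "You are Tecky, a UI automation agent.",
    "user": {
        "image": "images_split/00000-09999/00042.png",
        "text": "click the settings icon",
    },
    "assistant": {
        "actions": [{"type": "click", "x": 1212, "y": 34}],
        "status_update": "Clicked the settings icon in the top-right corner.",
    },
}

line = json.dumps(record)   # one record per line in the JSONL file
parsed = json.loads(line)   # round-trips back to the same structure
print(parsed["assistant"]["status_update"])
```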
|
|
|
|
|
We provide:

- `train.jsonl`: the main training set
- `valid.jsonl`: a 5% validation sample drawn from all sources
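A minimal sketch for reading these files with the standard library (assuming the repository layout described above):

```python
import json

def load_jsonl(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Usage sketch:
# for record in load_jsonl("train.jsonl"):
#     image_path = record["user"]["image"]  # relative to the repo root
#     instruction = record["user"]["text"]
```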
|
|
|
|
|
---
|
|
|
|
|
## ⚖️ Licenses & Attribution

This dataset merges multiple public sources. Each is preserved under its original license and properly credited.

| Dataset | License | Attribution |
|---------|---------|-------------|
| **UI element Detect Computer Vision Project** by UIed | CC BY 4.0 | `@misc{ui-element-detect_dataset, title={UI element Detect Dataset}, type={Open Source Dataset}, author={UIed}, howpublished={\url{https://universe.roboflow.com/uied/ui-element-detect}}, journal={Roboflow Universe}, publisher={Roboflow}, year={2023}, month={oct}, note={visited on 2025-07-11}}` |
| **UI-Elements-Detection-Dataset** by YashJain | Apache 2.0 | [YashJain/UI-Elements-Detection-Dataset](https://huggingface.co/datasets/YashJain/UI-Elements-Detection-Dataset) |
| **ui_elemenz_dataset** by Maleke Chaker | MIT | [Original repo](https://www.kaggle.com/datasets/malekechaker/ui-elemenz-dataset) |
| **ScreenSpot** by RootsAutomation | Apache 2.0 | [rootsautomation/ScreenSpot](https://huggingface.co/datasets/rootsautomation/ScreenSpot) |
| **Annotated UI Element Dataset for Desktop Environments** | CC BY 4.0 | Martínez-Rojas, A., Rodríguez-Ruíz, A., González Enríquez, J., & Jiménez-Ramírez, A. (2024). Annotated UI Element Dataset for Desktop Environments [Data set]. Zenodo. [https://doi.org/10.5281/zenodo.10822752](https://doi.org/10.5281/zenodo.10822752) |

All use of this dataset must respect these licenses. This release is distributed under **CC BY 4.0** to ensure attribution to all original sources.

---
|
|
|
|
|
## 🧠 Intended Use

This dataset is intended for:

- Training VLMs and UI agents (like Tecky)
- Research on UI interaction understanding
- Fine-tuning language models to act visually
- Prototyping autonomous UX testing and interface parsing

It is not intended for facial recognition, surveillance, or non-UI classification tasks.

---
|
|
|
|
|
## 🙌 Contributing

Contributions are welcome!

- Fork the dataset repo
- Add your JSONL/image data
- Submit a pull request
- Or open an issue with feedback or questions

---
|
|
|
|
|
## 🧾 Citation

If you use this dataset, please cite the original sources listed above and acknowledge the **Tecky** Project.