---
license: cc-by-nc-4.0
tags:
- computer-vision
- mobile-ui
- ios
- multimodal
- layout-analysis
- vision-language
task_categories:
- object-detection
- image-classification
- other
language:
- en
pretty_name: iOS-1K-Mobile-UI-Dataset
---
# iOS-1K-Mobile-UI-Dataset
## Overview
**iOS-1K-Mobile-UI-Dataset** is a curated dataset of **1,000 real-world iOS mobile UI screens** collected from diverse application categories available on the Apple App Store.
Each screen is paired with **human-validated structured JSON ground truth annotations**, enabling research in UI understanding, layout analysis, and multimodal modeling.
The dataset includes:
- Simple layouts (e.g., login, onboarding screens)
- Visually dense interfaces (e.g., feeds, checkout flows)
- Structured UI element annotations
- Bounding boxes for UI components
- Element type labels
- Clickability attributes
- Text content for text-bearing elements
---
## Dataset Structure
```
ios-ui-dataset/
│
├── images/
│   ├── 0001.png
│   ├── 0002.png
│   └── ...
│
├── annotations/
│   ├── 0001.json
│   ├── 0002.json
│   └── ...
│
├── metadata.csv
├── README.md
└── dataset_infos.json
```
---
## Metadata Format
The `metadata.csv` file maps each screen image to its annotation file:
| screen_id | image_file | annotation_file |
|-----------|------------------|--------------------------|
| 0001 | images/0001.png | annotations/0001.json |
| 0002 | images/0002.png | annotations/0002.json |
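The mapping above can be read with Python's standard `csv` module. The sketch below assumes the three column names shown in the table; `load_metadata` is a hypothetical helper, not part of the dataset's tooling, and is demonstrated here on an inline sample rather than the actual file.

```python
import csv
import io

def load_metadata(csv_text):
    """Parse metadata.csv text into (screen_id, image_file, annotation_file) tuples."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["screen_id"], row["image_file"], row["annotation_file"])
        for row in reader
    ]

# Inline sample mirroring the first two rows of the table above.
sample = """screen_id,image_file,annotation_file
0001,images/0001.png,annotations/0001.json
0002,images/0002.png,annotations/0002.json
"""

rows = load_metadata(sample)
print(rows[0])  # ('0001', 'images/0001.png', 'annotations/0001.json')
```

To read the real file after downloading, pass `open("metadata.csv").read()` (or switch to `csv.DictReader(open(...))`) in place of the inline sample.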
---
## Example Screen
Below is a sample login screen from the Eleven Reader app, drawn from the dataset:
---
## Annotation Format
Each JSON annotation follows a structured schema:
```json
{
  "screen_id": "0001",
  "elements": [
    {
      "id": 1,
      "type": "button",
      "bbox": [x, y, width, height],
      "text": "Login",
      "clickable": true
    }
  ]
}
```
Each element includes:
- `type`: UI component category (e.g., button, text, image)
- `bbox`: bounding box as `[x, y, width, height]` in pixel coordinates
- `text`: visible text content (for text-bearing elements)
- `clickable`: Boolean flag indicating whether the element is interactive
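A minimal sketch of consuming this schema with the standard `json` module: the annotation below is an illustrative sample following the structure documented above (the second element and all `bbox` values are invented for the example, not taken from the dataset).

```python
import json

# Illustrative annotation matching the documented schema.
sample = json.loads("""
{
  "screen_id": "0001",
  "elements": [
    {"id": 1, "type": "button", "bbox": [40, 600, 300, 48],
     "text": "Login", "clickable": true},
    {"id": 2, "type": "text", "bbox": [40, 120, 300, 32],
     "text": "Welcome back", "clickable": false}
  ]
}
""")

# Keep only the interactive elements and compute their pixel areas
# (bbox is [x, y, width, height], so area = width * height).
clickable = [e for e in sample["elements"] if e["clickable"]]
areas = {e["id"]: e["bbox"][2] * e["bbox"][3] for e in clickable}
print(clickable[0]["text"], areas)  # Login {1: 14400}
```

The same loop works unchanged on the real files: replace `json.loads(...)` with `json.load(open("annotations/0001.json"))`.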
---
## Intended Use Cases
This dataset is designed for:
- Mobile UI understanding
- Layout parsing and structural analysis
- UI element detection
- Vision–language modeling
- Multimodal LLM grounding
- Autonomous UI agent research
---
## Download Instructions
You can download the dataset using the Hugging Face Hub:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="atharparvezce/iOS-1K-Mobile-UI-Dataset",
    repo_type="dataset",
    local_dir="./iOS-1K-Mobile-UI-Dataset",
)
```
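Once downloaded, image/annotation pairs can be recovered from the directory layout shown earlier by matching file stems (`0001.png` ↔ `0001.json`). `pair_screens` below is a hypothetical helper, shown here against a tiny throwaway directory so the sketch is self-contained; point `root` at the real `local_dir` in practice.

```python
import json
import tempfile
from pathlib import Path

def pair_screens(root):
    """Pair each image with the annotation sharing its numeric stem."""
    root = Path(root)
    pairs = []
    for img in sorted((root / "images").glob("*.png")):
        ann = root / "annotations" / (img.stem + ".json")
        if ann.exists():
            pairs.append((img, ann))
    return pairs

# Build a toy layout mimicking the dataset structure for demonstration.
tmp = Path(tempfile.mkdtemp())
(tmp / "images").mkdir()
(tmp / "annotations").mkdir()
(tmp / "images" / "0001.png").write_bytes(b"")
(tmp / "annotations" / "0001.json").write_text(
    json.dumps({"screen_id": "0001", "elements": []})
)

pairs = pair_screens(tmp)
print(pairs[0][0].name, pairs[0][1].name)  # 0001.png 0001.json
```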
---
## Limitations
- Covers only iOS platform interfaces
- Contains 1,000 screens in the current release
- Category distribution reflects App Store sampling
- UI copyrights remain with original application developers
---
## Future Work
We are actively working on extending the **iOS-1K-Mobile-UI-Dataset** with:
- Additional UI screens across more application categories
- Increased dataset scale beyond 1,000 screens
- More detailed attribute-level annotations
- Expanded layout complexity coverage
- Benchmark splits for training and evaluation
Our goal is to develop this into a larger benchmark for mobile UI understanding and multimodal research.
If you are interested in collaboration, contributing to the dataset, or using extended versions for research purposes, please feel free to reach out:
📧 **atharparvezce@gmail.com**
---
## License
This dataset is released under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
- Attribution required
- Non-commercial use only
The dataset is intended strictly for academic and research purposes.
No personal user data is included.
---
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{ios_1k_mobile_ui_dataset_2026,
  title     = {iOS-1K-Mobile-UI-Dataset: A Human-Validated iOS UI Benchmark},
  author    = {Athar Parvez},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/atharparvezce/iOS-1K-Mobile-UI-Dataset}
}
```