---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: image_name
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 34909987.036
    num_examples: 1177
  - name: test
    num_bytes: 9179938.0
    num_examples: 295
  download_size: 41934761
  dataset_size: 44089925.036
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
pretty_name: Screenspot5G_VQA
license: cc-by-4.0
task_categories:
- visual-question-answering
- image-text-to-text
language:
- en
multimodal:
- image
- text
size_categories:
- 1K<n<10K
---
# πŸ“± Screenspot5G VQA Dataset

## 🧾 Dataset Details

### πŸ“Œ Dataset Description
Screenspot5G_VQA is a visual question answering (VQA) dataset for mobile screenshot understanding.
It is designed to evaluate a model’s ability to reason over real smartphone screen content, including UI elements, icons, layout structure, and visible text.
All images were captured using a real 5G smartphone, ensuring realistic visual characteristics such as screen resolution, font rendering, and UI density.
- **Device:** OnePlus Nord CE 2 Lite 5G
- **Model Number:** CPH2381
- **Language:** English
- **License:** CC BY 4.0
### πŸ“Š Dataset Statistics

| Split | Samples | Size     |
|-------|---------|----------|
| Train | 1,177   | ~33.3 MB |
| Test  | 295     | ~8.7 MB  |
| **Total** | **1,472** | **~42 MB** |
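The approximate sizes above can be sanity-checked against the byte counts in the dataset metadata, reading "MB" as MiB (2^20 bytes); a quick check:

```python
# Byte counts taken from the dataset metadata (dataset_info.splits).
TRAIN_BYTES = 34_909_987.036
TEST_BYTES = 9_179_938.0
TOTAL_BYTES = TRAIN_BYTES + TEST_BYTES  # matches dataset_size: 44089925.036

MIB = 2 ** 20  # 1 MiB = 1,048,576 bytes

train_mb = TRAIN_BYTES / MIB   # ~33.3
test_mb = TEST_BYTES / MIB     # ~8.75
total_mb = TOTAL_BYTES / MIB   # ~42.0

print(f"train: {train_mb:.1f} MiB, test: {test_mb:.2f} MiB, total: {total_mb:.1f} MiB")
```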
## πŸ‘₯ Contributors

### πŸŽ“ Faculty

- πŸ‘¨β€πŸ« **Dr. B. Chandra Mohan**, Professor, Dept. of ECE, Bapatla Engineering College, Bapatla
- πŸ‘¨β€πŸ« **Sri K. Sri Harsha**, Assistant Professor, Dept. of ECE, Bapatla Engineering College, Bapatla
- πŸ‘¨β€πŸ« **Dr. P. Vinod Babu**, Associate Professor, Dept. of ECE, Bapatla Engineering College, Bapatla

### πŸŽ“ Students

- πŸ‘¨β€πŸŽ“ **Yarramsetty Sindhu**, Undergraduate Student, Dept. of ECE, Bapatla Engineering College, Bapatla
- πŸ‘¨β€πŸŽ“ **Vasipalli Prasanna**, Undergraduate Student, Dept. of ECE, Bapatla Engineering College, Bapatla
- πŸ‘¨β€πŸŽ“ **Pilli Harsha Vardhan**, Undergraduate Student, Dept. of ECE, Bapatla Engineering College, Bapatla
- πŸ‘¨β€πŸŽ“ **Thulava Vamsi**, Undergraduate Student, Dept. of ECE, Bapatla Engineering College, Bapatla
## 🎯 Uses

### βœ… Direct Use

This dataset is suitable for:

- πŸ“± Mobile UI understanding
- πŸ‘οΈ Screenshot-based VQA
- 🧠 Vision–Language Model (VLM) evaluation
- β™Ώ Accessibility and assistive technologies
- πŸ” UI element reasoning and screen comprehension
### 🚫 Out-of-Scope Use

- πŸ”’ **Privacy-invasive monitoring**
- ⚠️ **Real-time automated decision-making without human oversight**
- πŸ”  **OCR-only benchmarking** (the dataset emphasizes reasoning, not just text extraction)
## πŸ—‚οΈ Dataset Structure

Each sample in the dataset contains the following fields:
- πŸ†” **id** *(int64)*: unique sample identifier
- πŸ–ΌοΈ **image_name** *(string)*: filename of the captured screenshot
- ❓ **question** *(string)*: natural-language question referring to the screenshot
- βœ… **answer** *(string)*: ground-truth answer corresponding to the question
- πŸ“· **image** *(image)*: screenshot image used for visual reasoning
The dataset is provided with **πŸ§ͺ train/test splits** to support **reproducible evaluation**.
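As an illustration of the schema above, a single record can be pictured as a plain mapping; the values here are hypothetical and not taken from the dataset:

```python
# One hypothetical sample following the documented schema.
# The real `image` field holds decoded image data; a file path stands in here.
sample = {
    "id": 0,
    "image_name": "screenshot_0001.png",
    "question": "Which app icon appears in the top-left corner?",
    "answer": "Settings",
    "image": "screenshot_0001.png",  # placeholder for the actual image object
}

EXPECTED_FIELDS = {"id", "image_name", "question", "answer", "image"}

def validate(record: dict) -> bool:
    """Check that a record carries exactly the five documented fields."""
    return set(record) == EXPECTED_FIELDS

print(validate(sample))  # True
```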
πŸ—οΈ Dataset Creation
🎯 Curation Rationale
Modern Vision–Language Models (VLMs) often struggle with mobile screen understanding due to:
- 🧩 **Dense UI layouts**
- πŸ” **Small icons and fine-grained fonts**
- πŸ”€πŸ–ΌοΈ **Mixed visual–textual semantics**
**Screenspot5G_VQA** addresses this gap by using πŸ“± **real-device screenshots**, rather than synthetic UI renders, enabling more realistic evaluation of mobile screen understanding.
### πŸ“₯ Source Data

#### πŸ”§ Data Collection and Processing
- πŸ“± **Screenshots captured manually** from a physical smartphone
- ❓ **Questions designed to test:**
  - 🧩 **UI comprehension**
  - πŸ”€ **Text understanding**
  - πŸ‘οΈπŸ§  **Visual grounding and reasoning**
- πŸ€— **Dataset formatted for Hugging Face compatibility**
#### πŸ‘€ Who are the source data producers?
The dataset was created and annotated by the listed contributors using a personal mobile device.
No automated web scraping or third-party datasets were used.
### 🏷️ Annotations

#### ✍️ Annotation Process
- ✍️ **Questions and answers were manually authored**
- 🎯 **Each question targets a visible element or semantic property of the screen**
- πŸ”’ **Single-answer VQA format**
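For the single-answer format, a common scoring choice is normalized exact match. A minimal sketch follows; the normalization rules (lowercasing, punctuation stripping) are an assumption for illustration, not part of the dataset specification:

```python
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions matching the reference after normalization."""
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical model outputs vs. ground-truth answers.
preds = ["Settings.", "7:45 pm", "wifi"]
golds = ["Settings", "7:45 PM", "Bluetooth"]
print(exact_match_accuracy(preds, golds))  # 2/3
```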
#### πŸ‘₯ Who are the annotators?
The contributors listed above performed the annotation and validation.
### πŸ” Personal and Sensitive Information
The dataset does not intentionally include personal, private, or sensitive information.
Screens were curated to avoid identifiable personal data.
## βš–οΈ Bias, Risks, and Limitations
- πŸ“± **Screenshots are limited to a single smartphone model**
- πŸ€– **UI design reflects a specific Android ecosystem**
- 🌐 **Language coverage is limited to English**
### πŸ“Œ Recommendations
- πŸ”„ **Combine with datasets from other devices** to improve model generalization
- πŸ§ πŸ”€ **Use alongside OCR benchmarks** for comprehensive screen understanding evaluation
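The first recommendation amounts to pooling samples from several devices while keeping `id` values unique. A small sketch over plain records; the field names follow this dataset's schema, and the second source is hypothetical:

```python
def merge_sources(*sources):
    """Concatenate sample lists from multiple devices, reassigning unique ids."""
    merged = []
    for source in sources:
        for record in source:
            record = dict(record, id=len(merged))  # copy with a fresh id
            merged.append(record)
    return merged

# Hypothetical records from two different devices.
oneplus = [{"id": 0, "question": "Which tab is selected?", "answer": "Home"}]
other_device = [{"id": 0, "question": "What time is shown?", "answer": "7:45 PM"}]

pooled = merge_sources(oneplus, other_device)
print([r["id"] for r in pooled])  # [0, 1]
```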
## πŸ“š Citation
If you use this dataset, please cite:
> B. Chandra Mohan, K. Sri Harsha, P. Vinod Babu, Y. Sindhu, V. Prasanna, P. Harsha Vardhan, and T. Vamsi,
> β€œScreenspot5G_VQA: A Visual Question Answering Dataset for Mobile Screenshot Understanding,”
> Bapatla Engineering College, Dept. of Electronics and Communication Engineering, Bapatla, India, 2026.