---
license: cc-by-sa-4.0
language:
- zh
- en
tags:
- gui
- agents
pretty_name: FunUI
size_categories:
- 1K<n<10K
---
# FunUI Benchmark

## Introduction

FunUI is a bilingual benchmark designed to fill the gap left by the lack of comprehensive evaluation datasets for screen understanding. It covers four fundamental tasks and provides a holistic platform for assessing models' abilities in mobile UI comprehension.
## Key Features

### Bilingual
- Includes 2,150 Chinese screens and 9,347 English screens from Android devices.
- Annotated with about 14k Chinese samples and 18k English samples.
- The first benchmark that enables systematic evaluation of both Chinese and English screen understanding.
### Comprehensive

- Covers multiple dimensions of screen understanding:
  - UI Grounding (element localization)
  - UI Referring (element identification)
  - Screen Question Answering
  - Screen Summarization
- Ranges from spatial grounding and entity recognition to integrated analysis of screen content.
### Diverse
- Provides QA pairs involving 120+ icons and widgets.
- Includes complex reasoning questions related to element relations, attributes, arithmetic, and more.
- Poses greater challenges compared to commonly used OCR-related tasks.
## Tasks
### UI Grounding

- Models are required to localize the target UI element on the screen.

### UI Referring

- Given a target location in bounding-box format, models identify and describe the UI element it refers to.

### Screen Question Answering

- Models answer diverse questions about the screen content.

### Screen Summarization

- Models generate a summary of the observed screen.
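For the two localization-oriented tasks, benchmarks of this kind are conventionally scored with point-in-box accuracy (grounding) and intersection-over-union (referring). The card does not specify FunUI's exact scoring protocol, so the sketch below shows only these two standard metrics, with hypothetical `(x1, y1, x2, y2)` box coordinates:

```python
# Hedged sketch of common UI-grounding metrics; FunUI's official scoring
# protocol is not described in this card, so these are conventional choices,
# not the benchmark's confirmed implementation.

def point_in_box(point, box):
    """Return True if a predicted (x, y) click lands inside box = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A grounding prediction would count as correct when `point_in_box` is true for the ground-truth element, while a referring answer containing a box is typically accepted above an IoU threshold such as 0.5.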
## Applications
- Automated UI comprehension and interaction.
- Development of intelligent assistants and mobile automation.
- Benchmarking multimodal models for screen understanding.
## Citation

If you use the FunUI benchmark in your research, please cite the paper:
@article{202408.2137,
  title = {UI-Hawk: Unleashing the Screen Stream Understanding for GUI Agents},
  author = {Jiwen Zhang and Yaqi Yu and Minghui Liao and Wentao Li and Jihao Wu and Zhongyu Wei},
  doi = {10.20944/preprints202408.2137.v1},
  url = {https://doi.org/10.20944/preprints202408.2137.v1},
  year = {2024},
  month = {August},
  publisher = {Preprints},
  journal = {Preprints}
}