---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- mobile-ui
- gui-automation
- benchmark
- qa-testing
- tap-accuracy
---
# UI-TapBench

## Summary & Intention
UI-TapBench is an open-source benchmark created to evaluate the spatial precision of Large Multimodal Models (LMMs) in mobile environments.
As AI agents move toward "Actionable AI," the ability to translate a natural language instruction into exact screen coordinates is the most common point of failure. This dataset provides a standardized way to measure and improve how models handle dense UI layouts and list-based navigation, ensuring tap reliability in autonomous agents.
## About Drizz
Reimagining Mobile App Testing with Vision AI.
At Drizz, we're building the world's fastest AI-powered testing agent for mobile apps: no locators, no scripting, just plain English. Mobile teams today move fast, but testing tools haven't kept up. Drizz replaces brittle, locator-based frameworks with a vision-based AI engine that understands your app like a human.
With Drizz, teams achieve:
- 10x Faster Test Cycles
- 97%+ Test Accuracy
- Zero Flaky Tests via our vision-based engine
We are releasing UI-TapBench to help the community move toward a world where UI automation is as simple, reliable, and "human-like" as possible.
## Dataset Structure

Each entry in `metadata.jsonl` follows this schema:
| Key | Description |
|---|---|
| `id` | Unique identifier for the sample. |
| `image` | Relative path to the screenshot (e.g., `images/841.png`). |
| `task` | The natural language command (e.g., "Tap on second option"). |
| `bbox` | Ground truth coordinates: `[xmin, ymin, xmax, ymax]`. |
| `app_name` | The package name of the app being tested. |
| `function` | The targeted action type (default: `tap_call_llm`). |
### Example Entry

```json
{
  "id": 841,
  "image": "images/841.png",
  "task": "Tap on second option in the list.",
  "bbox": [42, 733, 1038, 901],
  "app_name": "com.duolingo",
  "function": "tap_call_llm"
}
```
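Entries can be loaded with a few lines of standard-library Python. This is a minimal sketch assuming the metadata file sits at the dataset root as `metadata.jsonl`; the `load_tapbench` helper name is ours and not part of the dataset.

```python
import json

def load_tapbench(metadata_path="metadata.jsonl"):
    """Load UI-TapBench entries from the JSON Lines metadata file.

    Each non-empty line is one JSON object following the schema above
    (id, image, task, bbox, app_name, function).
    """
    entries = []
    with open(metadata_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries
```

From here, each entry pairs a screenshot path (`entry["image"]`) with its instruction (`entry["task"]`) and ground-truth box (`entry["bbox"]`), ready to feed to a model under evaluation.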
## Benchmark Results
We evaluated leading Large Multimodal Models (LMMs) on UI-TapBench to measure tap accuracy, spatial precision, and reliability for mobile UI interactions.
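The natural scoring unit for this dataset is whether a model's predicted tap point lands inside the ground-truth `bbox`. The card does not spell out the exact scoring formula behind the table below, so the following is only an illustrative sketch of a point-in-box hit test; `tap_hit` and `tap_accuracy` are hypothetical helper names.

```python
def tap_hit(pred_x, pred_y, bbox):
    """True if the predicted tap point falls inside the ground-truth
    box, given as [xmin, ymin, xmax, ymax] in pixel coordinates."""
    xmin, ymin, xmax, ymax = bbox
    return xmin <= pred_x <= xmax and ymin <= pred_y <= ymax

def tap_accuracy(predictions, bboxes):
    """Fraction of predicted (x, y) tap points inside their boxes."""
    hits = sum(tap_hit(x, y, box) for (x, y), box in zip(predictions, bboxes))
    return hits / len(bboxes) if bboxes else 0.0
```

For example, against the sample entry's box `[42, 733, 1038, 901]`, a tap at `(500, 800)` counts as a hit while `(10, 10)` does not.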
### Competitor Comparison
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%) |
|---|---|---|---|---|
| **Drizz (ours)** | 94.51 | 96.22 | 98.16 | 97.18 |
| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
| gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
### Key Takeaway
The results show that while several models perform well on general UI grounding tasks, Drizz demonstrates the highest benchmark performance on UI-TapBench, achieving strong spatial precision and reliable tap execution even in dense mobile UI layouts.
## License
Released under the Apache 2.0 License.