---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- mobile-ui
- gui-automation
- benchmark
- qa-testing
- tap-accuracy
---
# UI-TapBench
## Summary & Intention
<span style="color:#4F46E5"><b>UI-TapBench</b></span> is an open-source benchmark created to evaluate the <span style="color:#4F46E5"><b>spatial precision</b></span> of Large Multimodal Models (LMMs) in mobile environments.
As AI agents move toward _"Actionable AI,"_ the ability to translate a natural language instruction into exact screen coordinates is the most common point of failure. This dataset provides a standardized way to measure and improve how models handle <span style="color:#4F46E5"><b>dense UI layouts</b></span> and <span style="color:#4F46E5"><b>list-based navigation</b></span>, ensuring <span style="color:#4F46E5"><b>tap reliability</b></span> in autonomous agents.
---
## About Drizz
> <span style="color:#4F46E5"><b>Reimagining Mobile App Testing with Vision AI.</b></span>
At <span style="color:#4F46E5"><b>[Drizz](https://drizz.dev)</b></span>, we're building the world's fastest AI-powered testing agent for mobile apps: no locators, no scripting, just plain English. Mobile teams today move fast, but testing tools haven't kept up. Drizz replaces brittle, locator-based frameworks with a <span style="color:#4F46E5"><b>vision-based AI engine</b></span> that understands your app like a human.
With Drizz, teams achieve:
- <span style="color:#4F46E5"><b>10x Faster Test Cycles</b></span>
- <span style="color:#4F46E5"><b>97%+ Test Accuracy</b></span>
- <span style="color:#4F46E5"><b>Zero Flaky Tests</b></span> via our vision-based engine
We are releasing <span style="color:#4F46E5"><b>UI-TapBench</b></span> to help the community move toward a world where UI automation is as simple, reliable, and _"human-like"_ as possible.
---
## Dataset Structure
Each entry in `metadata.jsonl` follows this schema:
| Key | Description |
| ------------ | -------------------------------------------------------------- |
| `id` | Unique identifier for the sample. |
| `image` | Relative path to the screenshot (e.g., `images/841.png`). |
| `task` | The natural language command (e.g., _"Tap on second option"_). |
| `bbox` | Ground truth coordinates: `[xmin, ymin, xmax, ymax]`. |
| `app_name` | The package name of the app being tested. |
| `function` | The targeted action type (default: `tap_call_llm`). |
### Example Entry
```json
{
"id": 841,
"image": "images/841.png",
"task": "Tap on second option in the list.",
"bbox": [42, 733, 1038, 901],
"app_name": "com.duolingo",
"function": "tap_call_llm"
}
```
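For reference, here is a minimal loading sketch in Python. It assumes a local checkout of the dataset with `metadata.jsonl` and the `images/` folder at the repo root; the `DATASET_ROOT` path is illustrative, not part of the dataset itself:

```python
import json
from pathlib import Path

# Illustrative path to a local checkout of the dataset repo; adjust as needed.
DATASET_ROOT = Path("UI-TapBench")

def load_samples(root: Path):
    """Yield one sample dict per line of metadata.jsonl."""
    with open(root / "metadata.jsonl", encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            # Resolve the relative image path (e.g., "images/841.png") against the root.
            sample["image_path"] = root / sample["image"]
            yield sample

for sample in load_samples(DATASET_ROOT):
    xmin, ymin, xmax, ymax = sample["bbox"]
    print(sample["id"], sample["task"], (xmin, ymin, xmax, ymax))
```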
---
## Benchmark Results
We evaluated leading Large Multimodal Models (LMMs) on <span style="color:#4F46E5"><b>UI-TapBench</b></span> to measure <span style="color:#4F46E5"><b>tap accuracy</b></span>, <span style="color:#4F46E5"><b>spatial precision</b></span>, and <span style="color:#4F46E5"><b>reliability</b></span> for mobile UI interactions.
### Competitor Comparison
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%) |
| :--- | ---: | ---: | ---: | ---: |
| <span style="color:#4F46E5"><b>Drizz (ours)</b></span> | <span style="color:#4F46E5"><b>94.51</b></span> | <span style="color:#4F46E5"><b>96.22</b></span> | <span style="color:#4F46E5"><b>98.16</b></span> | <span style="color:#4F46E5"><b>97.18</b></span> |
| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
| gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
### Key Takeaway
The results show that while several models perform well on general UI grounding tasks, <span style="color:#4F46E5"><b>Drizz</b></span> demonstrates the <span style="color:#4F46E5"><b>highest benchmark performance</b></span> on <span style="color:#4F46E5"><b>UI-TapBench</b></span>, achieving strong spatial precision and reliable tap execution even in <span style="color:#4F46E5"><b>dense mobile UI layouts</b></span>.
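The card reports accuracy, precision, recall, and F1, but does not spell out the scoring rule. A common convention for tap benchmarks, consistent with the `bbox` ground truth above, is to count a prediction correct when the model's predicted tap point falls inside the ground-truth box; the sketch below assumes exactly that convention and computes accuracy only:

```python
def tap_hit(pred_xy, bbox):
    """True if the predicted tap point falls inside [xmin, ymin, xmax, ymax]."""
    x, y = pred_xy
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

def tap_accuracy(predictions, ground_truth):
    """Fraction of samples whose predicted tap lands in the ground-truth bbox.

    predictions:  dict mapping sample id -> (x, y) predicted tap point
    ground_truth: dict mapping sample id -> [xmin, ymin, xmax, ymax]
    """
    hits = sum(tap_hit(predictions[i], box) for i, box in ground_truth.items())
    return hits / len(ground_truth)

# Example: a tap at (540, 800) hits the bbox from the sample entry above.
assert tap_hit((540, 800), [42, 733, 1038, 901])
```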
---
## License
Released under the <span style="color:#4F46E5"><b>Apache 2.0</b></span> License.