---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- mobile-ui
- gui-automation
- benchmark
- qa-testing
- tap-accuracy
---

# UI-TapBench

## 📌 Summary & Intention

<span style="color:#4F46E5"><b>UI-TapBench</b></span> is an open-source benchmark created to evaluate the <span style="color:#4F46E5"><b>spatial precision</b></span> of Large Multimodal Models (LMMs) in mobile environments.

As AI agents move toward _"Actionable AI,"_ the ability to translate a natural language instruction into exact screen coordinates is the most common point of failure. This dataset provides a standardized way to measure and improve how models handle <span style="color:#4F46E5"><b>dense UI layouts</b></span> and <span style="color:#4F46E5"><b>list-based navigation</b></span>, ensuring <span style="color:#4F46E5"><b>tap reliability</b></span> in autonomous agents.

---

## 🚀 About Drizz

> <span style="color:#4F46E5"><b>Reimagining Mobile App Testing with Vision AI.</b></span>

At <span style="color:#4F46E5"><b>[Drizz](https://drizz.dev)</b></span>, we're building the world's fastest AI-powered testing agent for mobile apps: no locators, no scripting, just plain English. Mobile teams today move fast, but testing tools haven't kept up. Drizz replaces brittle, locator-based frameworks with a <span style="color:#4F46E5"><b>vision-based AI engine</b></span> that understands your app like a human.

With Drizz, teams achieve:

- ⚡ <span style="color:#4F46E5"><b>10x Faster Test Cycles</b></span>
- 🎯 <span style="color:#4F46E5"><b>97%+ Test Accuracy</b></span>
- 🛡️ <span style="color:#4F46E5"><b>Zero Flaky Tests</b></span> via our vision-based engine

We are releasing <span style="color:#4F46E5"><b>UI-TapBench</b></span> to help the community move toward a world where UI automation is as simple, reliable, and _"human-like"_ as possible.

---

## 📂 Dataset Structure

Each entry in `metadata.jsonl` follows this schema:

| Key        | Description                                                    |
| ---------- | -------------------------------------------------------------- |
| `id`       | Unique identifier for the sample.                              |
| `image`    | Relative path to the screenshot (e.g., `images/841.png`).      |
| `task`     | The natural language command (e.g., _"Tap on second option"_). |
| `bbox`     | Ground-truth coordinates: `[xmin, ymin, xmax, ymax]`.          |
| `app_name` | The package name of the app being tested.                      |
| `function` | The targeted action type (default: `tap_call_llm`).            |

### Example Entry

```json
{
  "id": 841,
  "image": "images/841.png",
  "task": "Tap on second option in the list.",
  "bbox": [42, 733, 1038, 901],
  "app_name": "com.duolingo",
  "function": "tap_call_llm"
}
```
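A common way to score a sample like this is a point-in-box test: a prediction counts as a hit when the model's tap coordinates fall inside the ground-truth `bbox`. A minimal sketch of that rule (our own illustration, not the official scoring code):

```python
def is_hit(pred_x, pred_y, bbox):
    """Return True if the predicted tap point lands inside [xmin, ymin, xmax, ymax]."""
    xmin, ymin, xmax, ymax = bbox
    return xmin <= pred_x <= xmax and ymin <= pred_y <= ymax

# For the example entry above, a tap at the center of the target box succeeds:
bbox = [42, 733, 1038, 901]
cx, cy = (bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2
```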

---

## 📊 Benchmark Results

We evaluated <span style="color:#4F46E5"><b>UI-TapBench</b></span> across leading Large Multimodal Models (LMMs) to measure <span style="color:#4F46E5"><b>tap accuracy</b></span>, <span style="color:#4F46E5"><b>spatial precision</b></span>, and <span style="color:#4F46E5"><b>reliability</b></span> for mobile UI interactions.

### 📈 Competitor Comparison

| Model | Accuracy | Precision | Recall | F1 Score |
| ----- | -------: | --------: | -----: | -------: |
| 🏆 <span style="color:#4F46E5"><b>Drizz (ours)</b></span> | <span style="color:#4F46E5"><b>94.51</b></span> | <span style="color:#4F46E5"><b>96.22</b></span> | <span style="color:#4F46E5"><b>98.16</b></span> | <span style="color:#4F46E5"><b>97.18</b></span> |
| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
| gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
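The card does not spell out how precision and recall are defined for single-tap tasks. One plausible formulation, assuming a model may abstain when it cannot locate the target (a hit is a true positive, an off-target tap a false positive, an abstention a false negative), can be sketched as:

```python
def tap_metrics(outcomes):
    """Compute accuracy/precision/recall/F1 from per-sample tap outcomes.

    Each outcome is one of (definitions are our assumption, not the official ones):
      "hit"  - model tapped inside the ground-truth bbox (true positive)
      "miss" - model tapped outside it (false positive)
      "skip" - model abstained on a tappable target (false negative)
    """
    tp = outcomes.count("hit")
    fp = outcomes.count("miss")
    fn = outcomes.count("skip")
    total = len(outcomes)
    accuracy = tp / total if total else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```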

### 💡 Key Takeaway

The results show that while several models perform well on general UI-grounding tasks, <span style="color:#4F46E5"><b>Drizz</b></span> achieves the <span style="color:#4F46E5"><b>highest benchmark performance</b></span> on <span style="color:#4F46E5"><b>UI-TapBench</b></span>, combining strong spatial precision with reliable tap execution even in <span style="color:#4F46E5"><b>dense mobile UI layouts</b></span>.

---

## 📄 License

Released under the <span style="color:#4F46E5"><b>Apache 2.0</b></span> License.