---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- mobile-ui
- gui-automation
- benchmark
- QA testing
- tap-accuracy
---

# UI-TapBench

## 📌 Summary & Intention

<span style="color:#4F46E5"><b>UI-TapBench</b></span> is an open-source benchmark created to evaluate the <span style="color:#4F46E5"><b>spatial precision</b></span> of Large Multimodal Models (LMMs) in mobile environments.

As AI agents move toward _"Actionable AI,"_ the ability to translate a natural language instruction into exact screen coordinates is the most common point of failure. This dataset provides a standardized way to measure and improve how models handle <span style="color:#4F46E5"><b>dense UI layouts</b></span> and <span style="color:#4F46E5"><b>list-based navigation</b></span>, ensuring <span style="color:#4F46E5"><b>tap reliability</b></span> in autonomous agents.

---

## 🚀 About Drizz

> <span style="color:#4F46E5"><b>Reimagining Mobile App Testing with Vision AI.</b></span>

At <span style="color:#4F46E5"><b>[Drizz](https://drizz.dev)</b></span>, we're building the world's fastest AI-powered testing agent for mobile apps: no locators, no scripting, just plain English. Mobile teams today move fast, but testing tools haven't kept up. Drizz replaces brittle, locator-based frameworks with a <span style="color:#4F46E5"><b>vision-based AI engine</b></span> that understands your app like a human.

With Drizz, teams achieve:

- ⚡ <span style="color:#4F46E5"><b>10x Faster Test Cycles</b></span>
- 🎯 <span style="color:#4F46E5"><b>97%+ Test Accuracy</b></span>
- 🛡️ <span style="color:#4F46E5"><b>Zero Flaky Tests</b></span> via our vision-based engine

We are releasing <span style="color:#4F46E5"><b>UI-TapBench</b></span> to help the community move toward a world where UI automation is as simple, reliable, and _"human-like"_ as possible.

---

## 📊 Dataset Structure

Each entry in `metadata.jsonl` follows this schema:

| Key | Description |
| ---------- | -------------------------------------------------------------- |
| `id` | Unique identifier for the sample. |
| `image` | Relative path to the screenshot (e.g., `images/841.png`). |
| `task` | The natural language command (e.g., _"Tap on second option"_). |
| `bbox` | Ground truth coordinates: `[xmin, ymin, xmax, ymax]`. |
| `app_name` | The package name of the app being tested. |
| `function` | The targeted action type (default: `tap_call_llm`). |

### Example Entry

```json
{
  "id": 841,
  "image": "images/841.png",
  "task": "Tap on second option in the list.",
  "bbox": [42, 733, 1038, 901],
  "app_name": "com.duolingo",
  "function": "tap_call_llm"
}
```
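To make the schema concrete, here is a minimal sketch of loading and sanity-checking `metadata.jsonl` records; the `load_metadata` and `validate_entry` helpers are illustrative names, not part of the dataset release:

```python
import json

# The six keys defined in the schema table above.
REQUIRED_KEYS = {"id", "image", "task", "bbox", "app_name", "function"}


def validate_entry(entry: dict) -> bool:
    """Check one record against the schema: all keys present
    and bbox ordered as [xmin, ymin, xmax, ymax]."""
    if not REQUIRED_KEYS <= entry.keys():
        return False
    xmin, ymin, xmax, ymax = entry["bbox"]
    return xmin < xmax and ymin < ymax


def load_metadata(path: str) -> list[dict]:
    """Read the JSON-Lines file, one record per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# The example entry above passes the check.
example = {
    "id": 841,
    "image": "images/841.png",
    "task": "Tap on second option in the list.",
    "bbox": [42, 733, 1038, 901],
    "app_name": "com.duolingo",
    "function": "tap_call_llm",
}
assert validate_entry(example)
```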

---

## 🏆 Benchmark Results

We evaluated <span style="color:#4F46E5"><b>UI-TapBench</b></span> across leading Large Multimodal Models (LMMs) to measure <span style="color:#4F46E5"><b>tap accuracy</b></span>, <span style="color:#4F46E5"><b>spatial precision</b></span>, and <span style="color:#4F46E5"><b>reliability</b></span> for mobile UI interactions.

### 📈 Competitor Comparison

| Model | Accuracy | Precision | Recall | F1 Score |
| --- | ---: | ---: | ---: | ---: |
| 🥇 <span style="color:#4F46E5"><b>Drizz (ours)</b></span> | <span style="color:#4F46E5"><b>94.51</b></span> | <span style="color:#4F46E5"><b>96.22</b></span> | <span style="color:#4F46E5"><b>98.16</b></span> | <span style="color:#4F46E5"><b>97.18</b></span> |
| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
| gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
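One common scoring convention for tap benchmarks (an assumption here, since the exact metric definitions are not spelled out in this card) counts a prediction as correct when the predicted tap point lands inside the ground-truth `bbox`; a minimal sketch with illustrative helper names:

```python
def tap_hit(pred_xy, bbox):
    """A tap is a hit when the predicted (x, y) point lies inside
    the ground-truth box [xmin, ymin, xmax, ymax]."""
    x, y = pred_xy
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax


def tap_accuracy(preds, boxes):
    """Percentage of predicted taps that land inside their boxes."""
    hits = sum(tap_hit(p, b) for p, b in zip(preds, boxes))
    return 100.0 * hits / len(boxes)


# Toy check against the example bbox above: one hit, one miss.
score = tap_accuracy(
    [(540, 800), (10, 10)],
    [[42, 733, 1038, 901], [42, 733, 1038, 901]],
)
print(score)  # 50.0
```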

### 💡 Key Takeaway

The results show that while several models perform well on general UI grounding tasks, <span style="color:#4F46E5"><b>Drizz</b></span> demonstrates the <span style="color:#4F46E5"><b>highest benchmark performance</b></span> on <span style="color:#4F46E5"><b>UI-TapBench</b></span>, achieving strong spatial precision and reliable tap execution even in <span style="color:#4F46E5"><b>dense mobile UI layouts</b></span>.

---

## 📜 License

Released under the <span style="color:#4F46E5"><b>Apache 2.0</b></span> License.