techdrizzdev committed on Commit f8943db · verified · 1 Parent(s): cf7d81c
Files changed (1): README.md (+57 -39)
---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- mobile-ui
- gui-automation
- benchmark
- QA testing
- tap-accuracy
---

# UI-TapBench

## 📝 Summary & Intention

<span style="color:#4F46E5"><b>UI-TapBench</b></span> is an open-source benchmark created to evaluate the <span style="color:#4F46E5"><b>spatial precision</b></span> of Large Multimodal Models (LMMs) in mobile environments.

As AI agents move toward _"Actionable AI,"_ the ability to translate a natural language instruction into exact screen coordinates is the most common point of failure. This dataset provides a standardized way to measure and improve how models handle <span style="color:#4F46E5"><b>dense UI layouts</b></span> and <span style="color:#4F46E5"><b>list-based navigation</b></span>, ensuring <span style="color:#4F46E5"><b>tap reliability</b></span> in autonomous agents.

---
## 🚀 About Drizz

> <span style="color:#4F46E5"><b>Reimagining Mobile App Testing with Vision AI.</b></span>

At <span style="color:#4F46E5"><b>[Drizz](https://drizz.dev)</b></span>, we're building the world's fastest AI-powered testing agent for mobile apps: no locators, no scripting, just plain English. Mobile teams today move fast, but testing tools haven't kept up. Drizz replaces brittle, locator-based frameworks with a <span style="color:#4F46E5"><b>vision-based AI engine</b></span> that understands your app like a human.

With Drizz, teams achieve:

- ⚡ <span style="color:#4F46E5"><b>10x Faster Test Cycles</b></span>
- 🎯 <span style="color:#4F46E5"><b>97%+ Test Accuracy</b></span>
- 🛡️ <span style="color:#4F46E5"><b>Zero Flaky Tests</b></span> via our vision-based engine

We are releasing <span style="color:#4F46E5"><b>UI-TapBench</b></span> to help the community move toward a world where UI automation is as simple, reliable, and _"human-like"_ as possible.

---
## 📊 Dataset Structure

Each entry in `metadata.jsonl` follows this schema:

| Key        | Description                                                    |
| ---------- | -------------------------------------------------------------- |
| `id`       | Unique identifier for the sample.                              |
| `image`    | Relative path to the screenshot (e.g., `images/841.png`).      |
| `task`     | The natural language command (e.g., _"Tap on second option"_). |
| `bbox`     | Ground-truth coordinates: `[xmin, ymin, xmax, ymax]`.          |
| `app_name` | The package name of the app being tested.                      |
| `function` | The targeted action type (default: `tap_call_llm`).            |

### Example Entry

```json
{
  "id": 841,
  "image": "images/841.png",
  "task": "Tap on second option in the list.",
  "bbox": [42, 733, 1038, 901],
  "app_name": "com.duolingo",
  "function": "tap_call_llm"
}
```
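
Since the file is standard JSON Lines, entries can be read with nothing beyond the Python standard library. A minimal sketch (`load_tapbench` is an illustrative name, not part of the dataset; pass your local path to `metadata.jsonl`):

```python
import json

def load_tapbench(metadata_path):
    """Yield one sample dict per line of a UI-TapBench metadata.jsonl file."""
    with open(metadata_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```

Each yielded dict exposes the schema keys above, e.g. `sample["bbox"]` for the ground-truth box and `sample["image"]` for the screenshot path relative to the dataset root.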

---

## 📈 Benchmark Results

We evaluated <span style="color:#4F46E5"><b>UI-TapBench</b></span> across leading Large Multimodal Models (LMMs) to measure <span style="color:#4F46E5"><b>tap accuracy</b></span>, <span style="color:#4F46E5"><b>spatial precision</b></span>, and <span style="color:#4F46E5"><b>reliability</b></span> for mobile UI interactions.

### 🔍 Competitor Comparison

| Model | Accuracy | Precision | Recall | F1 Score |
| --- | ---: | ---: | ---: | ---: |
| 🏆 <span style="color:#4F46E5"><b>Drizz (ours)</b></span> | <span style="color:#4F46E5"><b>94.51</b></span> | <span style="color:#4F46E5"><b>96.22</b></span> | <span style="color:#4F46E5"><b>98.16</b></span> | <span style="color:#4F46E5"><b>97.18</b></span> |
| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
| gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |

### 💡 Key Takeaway

The results show that while several models perform well on general UI grounding tasks, <span style="color:#4F46E5"><b>Drizz</b></span> demonstrates the <span style="color:#4F46E5"><b>highest benchmark performance</b></span> on <span style="color:#4F46E5"><b>UI-TapBench</b></span>, achieving strong spatial precision and reliable tap execution even in <span style="color:#4F46E5"><b>dense mobile UI layouts</b></span>.
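
One common way to make a tap-accuracy metric concrete (an assumption for illustration, not necessarily the exact protocol behind the table above) is to score a prediction as a hit when the model's predicted tap point falls inside the ground-truth `bbox`:

```python
def tap_hit(pred_point, bbox):
    """True if a predicted (x, y) tap lands inside [xmin, ymin, xmax, ymax]."""
    x, y = pred_point
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

def tap_accuracy(pred_points, bboxes):
    """Fraction of predicted tap points that land inside their ground-truth box."""
    if not bboxes:
        return 0.0
    hits = sum(tap_hit(p, b) for p, b in zip(pred_points, bboxes))
    return hits / len(bboxes)
```

For the example entry above (`bbox = [42, 733, 1038, 901]`), a tap at the box center (540, 817) counts as a hit, while a tap at (10, 10) does not.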

---

## 📜 License

Released under the <span style="color:#4F46E5"><b>Apache 2.0</b></span> License.