Modalities: Image, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
marc-thibault-h committed · commit dac3315 · verified · 1 parent: 1739f5a

Update README.md

Files changed (1): README.md (+48 −22)
README.md CHANGED
@@ -30,23 +30,14 @@ configs:
 
- # Pixel-Navigator: A Multimodal Localization Benchmark for Web-Navigation Models
- We introduce Pixel-Navigator, a high-quality benchmark dataset for evaluating navigation and localization capabilities of multimodal models and agents in Web environments. Pixel-Navigator features 1,639 English-language web screenshots paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely-used screenspot benchmark.
 
  ## Design Goals and Use Case
- Pixel-Navigator is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks like online shopping and calendar management.
 
  On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' abilities to ground natural language instructions to specific interactive elements.
 
- ## Technical Details: High Quality Annotations and NLP Instructions
-
- A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural language instructions that simulate realistic navigation requests, requiring models to not only understand UI elements but also interpret contextual relationships between visual elements.
-
 
  ## Dataset Structure
 
@@ -67,34 +58,37 @@ The dataset includes several challenging scenarios:
  - Cases where OCR is insufficient because the visible text isn't the interactive element
  - Navigation requiring understanding of relative spatial relationships between information and interaction points
 
- ## Dataset Creation
 
  ### Curation Rationale
 
- Pixel-Navigator focuses on realism by capturing authentic interactions: actions taken by humans and agents.
- The records of Pixel Navigator are English-language, desktop-size screenshots of 100+ websites. Each record points to an element outlined by a rectangular bounding box and an intent corresponding to it. In particular, the dataset focuses on providing bounding boxes and intents that are not ambiguous, thus increasing the trustworthiness of the evaluation of a VLM on this data.
 
  ### Challenging Examples for UI Element Selection
 
  [comment]: # (Link to presentation with images https://docs.google.com/presentation/d/1NQGq75Ao_r-4GF8WCyK0BRPCdvkjzxIE2xP9ttV5UcM/edit#slide=id.g358e1dac3df_0_60)
 
  Our dataset includes examples that go beyond standard object detection or OCR, requiring genuine **UI understanding** and **instruction-based visual reasoning**. These examples highlight failure points in current models and test capabilities critical for real-world interaction with user interfaces, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.
 
- With this new benchmark, H Company aims to unlock new capabilities in VLMs, and stimulate the progress of web agents.
-
  ### Key Challenges Captured in the Benchmark
 
  - **UI Understanding**
  Tasks require comprehension of common UI conventions (e.g., icons, labels, layout). For instance, identifying the correct user settings button may involve recognizing a gear icon, or adding a specific product to a cart might require interpreting both imagery and adjacent labels. State-of-the-art models often fail at such tasks due to lack of contextual or semantic UI awareness.
 
  - **Instruction-Based Disambiguation**
- Some instructions describe objects based on spatial position, appearance, or intent (e.g., "middle of the page", "green button"). These tasks demand models to combine textual instruction with visual reasoning—something most models do not yet handle robustly.
 
  - **Calendar Navigation**
  Even frontier models struggle to interact with calendar widgets. Understanding which dates are available (e.g., not grayed out or marked unavailable) is a frequent failure case, demonstrating gaps in dynamic UI interpretation.
 
- - **Format and Locale Sensitivity**
- Instructions that rely on regional formats—like time (“18:45”) or date representations—test the model’s resilience to locale-specific variations. Models trained on culturally homogeneous data often perform poorly here.
 
  ### Example Tasks
 
@@ -116,8 +110,19 @@ With this new benchmark, H Company aims to unlock new capabilities in VLMs, and
 
  # Results of Popular Models
 
- *INSERT TABLE HERE
 
  ### Annotations
@@ -137,14 +142,35 @@ All labels were hand-written or hand-reviewed. Instructions were rewritten when
 
  ## Citation
 
  **BibTeX:**
  ```
  @dataset{hcompany2025uinavigate,
  author = {H Company Research Team},
- title = {Pixel-Navigator: A Benchmark Dataset for Web Navigation and Localization},
  year = {2025},
  publisher = {H Company},
  }
  ```
 
  ## Dataset Card Contact
 
 
+ # WebClick: A Multimodal Localization Benchmark for Web-Navigation Models
+ We introduce WebClick, a high-quality benchmark dataset for evaluating navigation and localization capabilities of multimodal models and agents in Web environments. WebClick features 1,639 English-language web screenshots from over 100 websites paired with precisely annotated natural-language instructions and pixel-level click targets, in the same format as the widely-used screenspot benchmark.
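
Since the card lists parquet as the format and the Hugging Face Datasets library among the supported loaders, a record can be pulled and inspected directly. The snippet below is a minimal sketch: the repository id `Hcompany/WebClick`, the split name, and the Screenspot-style column names (`image`, `instruction`, `bbox`) are assumptions based on the description above, not confirmed identifiers.

```python
# Minimal sketch of loading and inspecting one WebClick record.
# Assumptions (not confirmed by the card): repo id "Hcompany/WebClick", split "test",
# and Screenspot-style columns "image", "instruction", "bbox".
from datasets import load_dataset

ds = load_dataset("Hcompany/WebClick", split="test")

example = ds[0]
print(example["instruction"])             # natural-language click intent
print(example["bbox"])                    # target element bounding box, in pixels
example["image"].save("screenshot.png")   # screenshot as a PIL image
```
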
  ## Design Goals and Use Case
+ WebClick is designed to measure and advance the ability of AI systems to understand web interfaces, interpret user instructions, and take accurate actions within digital environments. The dataset contains three distinct groups of web screenshots that capture a range of real-world navigation scenarios, from agent-based web retrieval to human tasks like online shopping and calendar management.
 
  On a more technical level, this benchmark is intended for assessing multimodal models on their ability to navigate web interfaces, evaluating AI agents' understanding of UI elements and their functions, and testing models' abilities to ground natural language instructions to specific interactive elements.
 
  ## Dataset Structure
  - Cases where OCR is insufficient because the visible text isn't the interactive element
  - Navigation requiring understanding of relative spatial relationships between information and interaction points
 
+ ## Dataset Creation: High Quality Annotations and NLP Instructions
+
+ A key strength of this benchmark is its meticulous annotation: all bounding boxes correspond precisely to HTML element boundaries, ensuring rigorous evaluation of model performance. Each screenshot is paired with natural language instructions that simulate realistic navigation requests, requiring models to not only understand UI elements but also interpret contextual relationships between visual elements.
 
  ### Curation Rationale
 
+ WebClick focuses on realism by capturing authentic interactions: actions taken by humans and agents.
+ The records of WebClick are English-language, desktop-size screenshots of 100+ websites. Each record points to an element outlined by a rectangular bounding box and an intent corresponding to it. In particular, the dataset focuses on providing bounding boxes and intents that are not ambiguous, thus increasing the trustworthiness of the evaluation of a VLM on this data.
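
Because each record ties one unambiguous bounding box to one intent, a model prediction can be scored with the usual Screenspot-style criterion: the predicted click is correct if it falls inside the annotated box. A minimal sketch follows, assuming the box is stored as `(x1, y1, x2, y2)` in absolute pixel coordinates (the exact field layout is an assumption):

```python
from typing import Tuple

def click_is_correct(click_xy: Tuple[float, float],
                     bbox: Tuple[float, float, float, float]) -> bool:
    """Return True if the predicted click lands inside the annotated element box.

    Assumes bbox is (x1, y1, x2, y2) in absolute pixels; if a record stores
    (x, y, width, height) or normalized coordinates, convert before calling.
    """
    x, y = click_xy
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2
```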
 
  ### Challenging Examples for UI Element Selection
 
+ With this new benchmark, H Company aims to unlock new capabilities in VLMs and stimulate the progress of web agents.
+
  [comment]: # (Link to presentation with images https://docs.google.com/presentation/d/1NQGq75Ao_r-4GF8WCyK0BRPCdvkjzxIE2xP9ttV5UcM/edit#slide=id.g358e1dac3df_0_60)
 
  Our dataset includes examples that go beyond standard object detection or OCR, requiring genuine **UI understanding** and **instruction-based visual reasoning**. These examples highlight failure points in current models and test capabilities critical for real-world interaction with user interfaces, demonstrating H Company's commitment to creating targeted benchmarks around challenging areas.
 
  ### Key Challenges Captured in the Benchmark
 
  - **UI Understanding**
  Tasks require comprehension of common UI conventions (e.g., icons, labels, layout). For instance, identifying the correct user settings button may involve recognizing a gear icon, or adding a specific product to a cart might require interpreting both imagery and adjacent labels. State-of-the-art models often fail at such tasks due to lack of contextual or semantic UI awareness.
 
  - **Instruction-Based Disambiguation**
+ Some instructions describe objects based on spatial position, appearance, or intent (e.g., "middle of the page", "green button"). These tasks require combining textual instructions with visual reasoning, a challenge most models do not yet handle robustly.
 
  - **Calendar Navigation**
  Even frontier models struggle to interact with calendar widgets. Understanding which dates are available (e.g., not grayed out or marked unavailable) is a frequent failure case, demonstrating gaps in dynamic UI interpretation.
 
+ - **Format and Location Sensitivity**
+ Instructions that rely on regional formats, like time (“18:45”) or date representations, test the model’s resilience to location-specific variations. Models trained on culturally homogeneous data often perform poorly here.
 
  ### Example Tasks
 
  # Results of Popular Models
+ To put our benchmark into context, we evaluate a set of popular pre-trained models on WebClick alongside the widely used Screenspot [1] and ScreenspotV2 [2] benchmarks.
+ The models mostly score lower on WebClick than on either Screenspot benchmark, making it the more challenging task. We also find that WebClick provides a better signal of downstream performance for agentic applications of these models. A sketch of the click-accuracy scoring follows the table.
 
+ | **Model** | **WebClick (ours)** | Screenspot | Screenspot V2 |
+ |-----------------------------------|---------|--------|--------|
+ | osunlp/UGround-V1-2B [3]          | 71.69%  | 71.12% | 79.31% |
+ | osunlp/UGround-V1-7B [3]          | 82.37%  | 85.69% | 84.26% |
+ | Qwen/Qwen2.5-VL-3B-Instruct [4]   | 71.15%  | 82.78% | 84.34% |
+ | Qwen/Qwen2.5-VL-7B-Instruct [4]   | 75.28%  | 85.84% | 88.04% |
+ | ByteDance-Seed/UI-TARS-2B-SFT [5] | 64.23%  | 66.82% | 69.39% |
+ | ByteDance-Seed/UI-TARS-7B-DPO [5] | 80.67%  | 83.56% | 86.55% |
+ | H-1                               | 84.56%  | 85.53% | 87.25% |
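
In the Screenspot-style protocol the card references, a prediction counts as correct when the predicted click lands inside the annotated bounding box, so the percentages above read as click accuracies. The loop below sketches how such a number could be reproduced; `predict_click` is a placeholder for whatever grounding model is being evaluated, and the repo id, split, and column names are assumptions rather than the official evaluation script.

```python
# Sketch of a WebClick-style click-accuracy evaluation loop.
# Assumptions: repo id "Hcompany/WebClick", split "test", columns
# "image", "instruction", "bbox" with bbox = (x1, y1, x2, y2) in pixels.
from datasets import load_dataset

def predict_click(image, instruction):
    """Placeholder for the model under evaluation: return an (x, y) pixel
    coordinate for the element described by `instruction` in `image`."""
    raise NotImplementedError

ds = load_dataset("Hcompany/WebClick", split="test")

correct = 0
for record in ds:
    x, y = predict_click(record["image"], record["instruction"])
    x1, y1, x2, y2 = record["bbox"]
    correct += int(x1 <= x <= x2 and y1 <= y <= y2)

print(f"WebClick click accuracy: {correct / len(ds):.2%}")
```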
 
  ### Annotations
 
  ## Citation
 
+ [1] SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents. Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug. 2024.
+
+ [2] OS-ATLAS: A Foundation Action Model for Generalist GUI Agents. Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao. arXiv preprint arXiv:2410.23218 (2024).
+
+ [3] Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents. Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng Shu, Huan Sun, Yu Su. The Thirteenth International Conference on Learning Representations (2025).
+
+ [4] Qwen2.5-VL Technical Report. Qwen Team. arXiv preprint arXiv:2502.13923 (2025).
+
+ [5] UI-TARS: Pioneering Automated GUI Interaction with Native Agents. Yujia Qin, Yining Ye, Junjie Fang, Haoming Wang, Shihao Liang, Shizuo Tian, Junda Zhang, Jiahao Li, Yunxin Li, Shijue Huang, Wanjun Zhong, Kuanye Li, Jiale Yang, Yu Miao, Woyu Lin, Longxiang Liu, Xu Jiang, Qianli Ma, Jingyu Li, Xiaojun Xiao, Kai Cai, Chuang Li, Yaowei Zheng, Chaolin Jin, Chen Li, Xiao Zhou, Minchao Wang, Haoli Chen, Zhaojian Li, Haihua Yang, Haifeng Liu, Feng Lin, Tao Peng, Xin Liu, Guang Shi. arXiv preprint arXiv:2501.12326 (2025).
+
  **BibTeX:**
  ```
  @dataset{hcompany2025uinavigate,
  author = {H Company Research Team},
+ title = {WebClick: A Benchmark Dataset for Web Navigation and Localization},
  year = {2025},
  publisher = {H Company},
  }
+ [TECH REPORT ARXIV]
  ```
 
  ## Dataset Card Contact