Update README.md

README.md
path: test*
---

# Pixel-Navigator: a curated localization dataset focused on real-world Web navigation

Pixel-Navigator is a high-quality benchmark dataset for evaluating grounded navigation and localization capabilities of multimodal models and agents in Web environments. It features 1,639 precisely annotated English-language web screenshots paired with natural-language instructions and pixel-level click targets.
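The card describes each record as a screenshot paired with a natural-language instruction and a pixel-level click target derived from an annotated element. A minimal sketch of that pairing, with hypothetical field names (the card does not specify the exact schema), might look like:

```python
# Hypothetical record layout, assumed for illustration only;
# field names and coordinate conventions are not specified by the card.
example = {
    "image_size": (1280, 720),              # desktop-size screenshot (width, height)
    "instruction": "Open the date picker",  # natural-language intent
    "bbox": (432, 198, 540, 230),           # (x_min, y_min, x_max, y_max) in pixels
}

def click_target(bbox):
    """Derive a pixel-level click target as the bounding-box center."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

print(click_target(example["bbox"]))  # prints (486.0, 214.0)
```

Using the box center as the click point is one common convention; the dataset may instead provide an explicit target coordinate.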

## Dataset Details

### Curation Rationale

Pixel-Navigator focuses on realism by capturing authentic interactions: real actions taken by humans and real actions taken by agents.

The records of Pixel-Navigator are English-language, desktop-size screenshots of websites. Each record points to an element outlined by a rectangular bounding box, together with an intent corresponding to that element. In particular, the dataset focuses on providing bounding boxes and intents that are unambiguous, which makes evaluating a VLM on this data more trustworthy.

This focus on genuine interaction patterns makes our benchmark a demanding, realistic evaluation tool for agent development. The calendar segment specifically targets known failure points in current systems, reflecting H Company's commitment to building targeted benchmarks around challenging areas.

By identifying and focusing on these difficult cases, H Company aims to unlock new capabilities in VLMs and agents, driving progress in the field through carefully designed evaluation challenges.

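Since each record ties an unambiguous intent to a single bounding box, a natural scoring rule for such localization benchmarks is to count a prediction as correct when the model's click lands inside the target box. The sketch below assumes that rule; it is an illustration, not the card's official evaluation protocol:

```python
# Assumed scoring rule: a predicted click is a hit if it falls inside
# the annotated bounding box (x_min, y_min, x_max, y_max).
def click_hits_target(click_xy, bbox):
    """Return True if the predicted click lands inside the target box."""
    x, y = click_xy
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

def click_accuracy(predictions, targets):
    """Fraction of predicted clicks that fall inside their target boxes."""
    hits = sum(click_hits_target(p, b) for p, b in zip(predictions, targets))
    return hits / len(targets)
```

With this rule, pixel-perfect localization is not required; any click within the element's boundaries counts, which matches boxes drawn along HTML element boundaries.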
### Annotations

Annotations were created by UI experts with specialized knowledge of web interfaces. Each screenshot was paired with a natural-language instruction describing an intended action, and a bounding box precisely matching the HTML element's boundaries.

All labels were hand-written or hand-reviewed. Intents were rewritten where necessary so that they are unambiguous rather than mere visual descriptions. Screenshots were reviewed to avoid any personal information, with any identifiable data removed or anonymized.

## Citation

## Dataset Card Contact

research@hcompany.ai