harpreetsahota committed
Commit 1051756 · verified · 1 Parent(s): 872b0fc

Update README.md

Files changed (1): README.md (+93 −67)
README.md CHANGED
@@ -90,135 +90,161 @@ dataset = load_from_hub("Voxel51/ScreenSpot-Pro")
session = fo.launch_app(dataset)
```

## Dataset Details

### Dataset Description

ScreenSpot-Pro is a benchmark designed to evaluate the GUI grounding capabilities of multimodal large language models (MLLMs) in high-resolution professional environments. Unlike previous benchmarks that focus on general, easy tasks, ScreenSpot-Pro captures the unique challenges posed by professional software applications, including higher screen resolutions, smaller relative target sizes, and complex interfaces. The benchmark comprises 1,581 authentic high-resolution screenshots spanning 23 professional applications across 5 industries (Development, Creative, CAD, Scientific, and Office) and 3 operating systems, each annotated with a precise bounding box for the target UI element and a corresponding natural language instruction.

- **Curated by:** Kaixin Li (National University of Singapore), Ziyang Meng (East China Normal University), Hongzhan Lin, Ziyang Luo, Yuchen Tian, Jing Ma (Hong Kong Baptist University), Zhiyong Huang, Tat-Seng Chua (National University of Singapore)
- **Language(s) (NLP):** English only
- **License:** [More Information Needed]

### Dataset Sources

- **Repository:** https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding and https://huggingface.co/datasets/likaixin/ScreenSpot-Pro
- **Paper:** [ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use](https://arxiv.org/abs/2504.07981)

## Uses

### Direct Use

ScreenSpot-Pro is designed for evaluating and benchmarking the GUI grounding capabilities of multimodal models in professional high-resolution environments. It can be used to:

1. Assess the ability of models to locate specific UI elements based on natural language instructions
2. Benchmark the performance of GUI agents on professional software applications
3. Develop and evaluate methods for handling high-resolution inputs in multimodal systems
4. Research techniques for improving visual search and element identification in complex interfaces
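Grounding evaluation of this kind is commonly scored by whether a model's predicted click point lands inside the annotated target box. A minimal sketch of that check, assuming relative `[top-left-x, top-left-y, width, height]` boxes as stored in this dataset (the function names are illustrative, not part of any benchmark code):

```python
def is_hit(point, bbox):
    """Return True if a predicted click point lands inside the target box.

    point: (x, y) in relative [0, 1] screen coordinates
    bbox:  [top-left-x, top-left-y, width, height], also relative
    """
    px, py = point
    x, y, w, h = bbox
    return x <= px <= x + w and y <= py <= y + h


def grounding_accuracy(predictions, targets):
    """Fraction of predicted click points that fall inside their target boxes."""
    hits = sum(is_hit(p, t) for p, t in zip(predictions, targets))
    return hits / len(targets)
```

Given the 0.07% average target area reported below, this point-in-box criterion is far stricter here than on benchmarks with larger UI elements.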
### Out-of-Scope Use

The dataset is specifically designed for GUI grounding evaluation and should not be used for:

1. Training or evaluating full agent planning and execution tasks
2. Developing automated systems that might violate software licensing agreements
3. Creating tools that could enable unauthorized automation of proprietary software
4. Inferring user behaviors or patterns from the collected data

## Dataset Structure

The dataset consists of 1,581 instruction-image pairs across 23 applications. Each sample includes:

1. A high-resolution screenshot (resolutions vary; the most common is 2560×1440, at 32.4% of the data)
2. A natural language instruction describing the target UI element
3. A bounding box annotation specifying the target location
4. The type of the target element (text or icon)
5. The source application information

The dataset is categorized into 6 application types:

- Development and Programming (254 samples)
- Creative Software (306 samples)
- CAD and Engineering (261 samples)
- Scientific and Analytical (237 samples)
- Office Software (230 samples)
- Operating System Commons (196 samples)

Icons constitute 61.8% of the elements, with text elements comprising the remainder. On average, a target element occupies only 0.07% of the screenshot area.

## FiftyOne Dataset Structure

**Basic Info:** 1,581 desktop application screenshots with interaction annotations

**Core Fields:**

- `ui_id`: StringField - Unique identifier for the UI screen
- `instruction`: StringField - Natural language task description (only the English instruction is parsed)
- `application`: EmbeddedDocumentField(Classification) - Application name (e.g., "word")
- `group`: EmbeddedDocumentField(Classification) - Application category (e.g., "Office")
- `platform`: EmbeddedDocumentField(Classification) - Operating system (e.g., "macos")
- `action_detection`: EmbeddedDocumentField(Detection) - Target interaction element:
  - `label`: Element type (e.g., "text")
  - `bounding_box`: Relative bounding box coordinates in [0, 1], in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`

The dataset captures desktop application interfaces across various platforms, with natural language instructions and target interaction elements. It focuses on the specific UI element that must be interacted with to complete a task in desktop applications such as Microsoft Word, organized by application type and operating system.
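Because `bounding_box` is stored in relative coordinates, mapping an annotation back to pixels requires the screenshot's dimensions. A small sketch (the helper name is illustrative, not part of the FiftyOne API):

```python
def to_pixels(bbox, width, height):
    """Convert a relative [x, y, w, h] box to absolute pixel coordinates."""
    x, y, w, h = bbox
    return (round(x * width), round(y * height),
            round(w * width), round(h * height))


# Example: a small target on a 2560x1440 screenshot
# (the most common resolution in the dataset)
px_box = to_pixels([0.25, 0.10, 0.02, 0.015], 2560, 1440)
```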
## Dataset Creation

### Curation Rationale

ScreenSpot-Pro was created to address the limitations of existing GUI grounding benchmarks, which primarily focus on simple tasks and cropped screenshots. Professional applications introduce unique challenges for GUI perception models, including high-resolution displays, smaller target sizes, and complex environments that are not well represented in current benchmarks. The dataset aims to provide a more rigorous evaluation framework that reflects real-world professional computing scenarios.

### Source Data

#### Data Collection and Processing

The data collection prioritized authentic high-resolution screenshots from professional software usage:

1. Experts with at least five years of experience using the relevant applications were invited to record data
2. Participants performed their regular work routines to ensure task authenticity
3. A custom screen capture tool, accessible via a shortcut key, was developed to minimize workflow disruption
4. The tool allowed experts to take screenshots and label bounding boxes and instructions in real time
5. Screens with resolution greater than 1080p (1920×1080) were prioritized
6. Monitor scaling was disabled during capture
7. For dual-monitor setups, images were captured spanning both displays
8. UI elements were classified as either "text" or "icon" based on refined criteria

#### Who are the source data producers?

The source data producers are expert users with at least five years of experience using the relevant professional applications. They come from a range of professional domains, including software development, creative design, engineering, scientific research, and office productivity.

### Annotations

#### Annotation process

The annotation process was designed to ensure high quality and authenticity:

1. Experts used a custom screen capture tool that overlays the screenshot directly on their screen
2. They labeled bounding boxes by dragging, and provided instructions directly through the tool
3. This real-time annotation eliminated the need to recall context after the fact
4. Each instance was reviewed by at least two annotators to ensure correct instructions and target bounding boxes
5. Ambiguous instructions were resolved to guarantee exactly one target per instruction
6. Annotators precisely verified the interactable regions of GUI elements, excluding irrelevant areas

#### Who are the annotators?

The annotators are the same expert users who produced the source data: professionals with at least five years of experience using the relevant applications. This ensures that annotations reflect domain expertise and an understanding of professional software workflows.

#### Personal and Sensitive Information

The dataset consists of screenshots of professional software interfaces and does not inherently contain personal or sensitive information. However, the paper does not explicitly state whether potentially personal content visible in the screenshots (such as document text or filenames) was anonymized.

## Bias, Risks, and Limitations

The dataset has several limitations:

1. It focuses exclusively on GUI grounding and excludes agent planning and execution tasks
2. The extremely small relative size of targets (0.07% of screen area on average) presents a significant challenge
3. The benchmark may not fully capture the diversity of professional software configurations and customizations
4. The paper acknowledges legal considerations related to software licensing that limited certain aspects of data collection
5. The dataset's focus on high-resolution professional applications may not generalize to other GUI contexts

### Recommendations

When using this dataset, researchers should:

1. Be aware of the legal considerations regarding software licensing and automation
2. Consider the extreme challenge posed by small target sizes in high-resolution images
3. Recognize that performance on this benchmark may not directly translate to other GUI contexts
4. Be cautious about potential biases in task selection or application representation
5. Consider developing specialized methods for handling high-resolution inputs, as demonstrated by the authors' ScreenSeekeR approach
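One much simpler baseline for handling very high-resolution inputs (not the authors' ScreenSeekeR method) is to split a screenshot into overlapping tiles and run the model on each crop. A sketch of the tiling arithmetic, with illustrative tile and overlap sizes:

```python
def tile_grid(width, height, tile=1024, overlap=128):
    """Return (left, top, right, bottom) crops covering a screenshot.

    Tiles overlap so a target sitting on a tile border is still seen
    whole by at least one crop; tile/overlap sizes are illustrative.
    """
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes
```

For a 2560×1440 screenshot these defaults yield a 3×2 grid of crops, each small enough that a tiny target occupies a much larger fraction of the model's input.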
## Citation

**BibTeX:**

```bibtex
@misc{li2025screenspotpro,
      title={ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use},
      author={Kaixin Li and Ziyang Meng and Hongzhan Lin and Ziyang Luo and Yuchen Tian and Jing Ma and Zhiyong Huang and Tat-Seng Chua},
      year={2025},
      eprint={2504.07981},
      archivePrefix={arXiv},
}
```

**APA:**

Li, K., Meng, Z., Lin, H., Luo, Z., Tian, Y., Ma, J., Huang, Z., & Chua, T.-S. (2025). *ScreenSpot-Pro: GUI grounding for professional high-resolution computer use*. arXiv. https://arxiv.org/abs/2504.07981

## Dataset Card Contact

https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding