---

# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}

---
<div align="center">
  <img src="https://user-images.githubusercontent.com/74038190/212284115-f47cd8ff-2ffb-4b04-b5bf-4d1c14c0247f.gif"
       width="100%" />
</div>

<br>

<table>
  <tr>
    <td width="25%" align="center" valign="middle">
      <img src="https://raw.githubusercontent.com/Tajamul21/MedSPOT/main/Images/medspot2.jpeg" width="100%" style="border-radius: 12px;" />
    </td>
    <td width="55%" align="center">
  
  <h1 align="center">🧠 MedSPOT</h1>
        
  <h3 align="center"><i>A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI</i></h3>
        
  <h3 align="center">
    <a href="https://github.com/RozainMalik">Rozain Shakeel</a><sup>1</sup>, 
    <a href="https://www.linkedin.com/in/rxhman/">Abdul Rahman Mohammad Ali</a><sup>2</sup>, 
    <a href="https://www.linkedin.com/in/muneeb-ahmad-ganie">Muneeb Mushtaq</a><sup>1</sup>, 
    <a href="https://rozainmalik.github.io/MedSPOT_web/">Tausifa Jan Saleem</a><sup>3</sup>, 
    <b><a href="https://www.tajamulashraf.com/">Tajamul Ashraf</a><sup>1,4*</sup></b>
  </h3>
        
  <h4 align="center">
    <sup>*</sup> Corresponding author
  </h4>
        
  <h3 align="center">
    <sup>1</sup> Gaash Research Lab, National Institute of Technology Srinagar, India<br>
    <sup>2</sup> e&amp; Group, UAE<br>
    <sup>3</sup> Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), UAE<br>
    <sup>4</sup> King Abdullah University of Science and Technology (KAUST), Saudi Arabia
  </h3>
    </td>
  </tr>
</table>

## Dataset Summary
MedSPOT is a benchmark for evaluating Multimodal Large Language Models (MLLMs) on GUI grounding tasks in medical imaging software. Models must localize and interact with UI elements across 10 medical imaging applications, including 3D Slicer, DICOMscope, Weasis, and MITK.

## Dataset Details

### Dataset Description
**MedSPOT** is a workflow-aware sequential GUI grounding benchmark designed to evaluate Multimodal Large Language Models (MLLMs) on their ability to interact with real-world clinical imaging software. Unlike conventional grounding benchmarks that evaluate isolated predictions, MedSPOT models grounding as a **temporally dependent sequence of spatial decisions** within evolving interface states, reflecting the procedural dependency structure inherent in clinical workflows.

The benchmark spans **10 open-source medical imaging platforms** covering three primary interface categories: DICOM/PACS viewers, segmentation and research tools, and web-based viewers. The platforms include 3D Slicer, DICOMscope, Weasis, MITK, ITK-SNAP, RadiAnt, MicroDICOM, Orthanc, Ginkgo-CADx, and BlueLight DICOM Viewer, supporting diverse imaging modalities including CT, MRI, PET, X-ray, and Ultrasound.

The benchmark comprises **216 video tasks** and **597 annotated keyframes**, with an average of 2–3 interdependent steps per task, across seven functional categories: View/Display, Import/Load, Export/Save, Navigate/Zoom, Annotate/Measure, Tools/Adjust, and Settings/Configuration.

#### Annotation Protocol
GUI interaction workflows were recorded as real video sequences from each platform. Decision frames were extracted to capture causally consistent state transitions. Each frame was then **manually annotated using Label Studio**, producing step-level annotations of the form:

- **Screenshot** – the GUI frame at each interaction step
- **Natural language instruction** – describing the required action
- **Semantic target** – description of the target UI element
- **Bounding box** – normalized coordinates $(x, y, w, h) \in [0, 100]^4$
- **Action type** – click
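
For concreteness, a single annotated step might serialize along these lines. This is an invented example: the field names follow the schema described under Dataset Structure below, but the values and file path are hypothetical.

```python
# Hypothetical annotated step; field names mirror the schema in
# "Dataset Structure", all values are invented for illustration.
step = {
    "step_id": 1,
    "image_path": "Images/Weasis/task_012_step_1.png",  # made-up path
    "instruction": "Click the 'Import DICOM' button in the toolbar.",
    "actions": {
        "type": "click",
        "target": "Import DICOM toolbar button",
        "bbox": [12.5, 4.0, 6.2, 3.1],  # (x, y, w, h), percent of frame size
    },
}
```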

#### Evaluation Protocol
Evaluation follows a **strict sequential protocol**: if a model fails at any step, the task is terminated early. A task is considered complete only if all steps are predicted correctly in order. This transforms evaluation from independent step accuracy into a measure of causally consistent, workflow-aware grounding, penalizing early errors and emphasizing temporal consistency.

MedSPOT evaluates models across three metrics:
- **TCA** (Task Completion Accuracy) – fraction of fully completed tasks
- **SHR** (Step Hit Rate) – per-step grounding accuracy
- **S1A** (Step 1 Accuracy) – accuracy on the first step of each task
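
The protocol can be summarized with a small scoring sketch. The snippet below is our reading of the card, not the official evaluation code: it assumes a prediction is a click point in the same percentage coordinates as the ground-truth box, uses point-in-box as the hit criterion (the paper may define a different matcher, e.g. IoU), and credits no steps after the first miss.

```python
def point_in_bbox(px, py, bbox):
    """True if a predicted click (percent coords) lands inside a
    normalized [x, y, w, h] box, all values in [0, 100]."""
    x, y, w, h = bbox
    return x <= px <= x + w and y <= py <= y + h

def score_task(steps, clicks):
    """Strict sequential scoring: stop crediting at the first miss."""
    hits = 0
    for step, (px, py) in zip(steps, clicks):
        if not point_in_bbox(px, py, step["actions"]["bbox"]):
            break  # early termination: the task ends here
        hits += 1
    return hits

def benchmark_metrics(tasks, predictions):
    """tasks: list of step lists; predictions: matching list of click lists."""
    n_tasks = len(tasks)
    total_steps = sum(len(t) for t in tasks)
    hits_per_task = [score_task(t, p) for t, p in zip(tasks, predictions)]
    return {
        # TCA: tasks with every step grounded correctly, in order
        "TCA": sum(h == len(t) for h, t in zip(hits_per_task, tasks)) / n_tasks,
        # SHR: credited steps over all annotated steps
        "SHR": sum(hits_per_task) / total_steps,
        # S1A: tasks whose first step was grounded correctly
        "S1A": sum(h >= 1 for h in hits_per_task) / n_tasks,
    }
```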

### Dataset Sources

- **Repository:** [GitHub](https://github.com/Tajamul21/MedSPOT)
- **Dataset:** [HuggingFace](https://huggingface.co/datasets/Tajamul21/MedSPOT)
- **Paper:** [arXiv](https://arxiv.org/abs/2603.19993)
- **Website:** [Project page](https://rozainmalik.github.io/MedSPOT_web/)

## Uses

**MedSPOT** is designed as a research benchmark for evaluating the spatial grounding capabilities of multimodal large language models (MLLMs) in clinical GUI environments. The benchmark focuses on sequential GUI interactions, requiring models to correctly identify interface elements across evolving application states.

The dataset is intended strictly for research and evaluation purposes. Current model performance remains far below the reliability required for clinical deployment. For instance, the best-performing model in our evaluation achieves only **43.5% task completion accuracy (TCA)**, highlighting the substantial challenges that remain in developing reliable GUI-grounded reasoning systems.

### Direct Use

MedSPOT is a **benchmark dataset** intended strictly for **evaluation** of Multimodal Large Language Models (MLLMs) on GUI grounding tasks. Suitable use cases include:

- Benchmarking MLLMs on sequential GUI grounding in medical imaging software
- Evaluating cross-interface generalization across diverse clinical platforms and imaging modalities
- Studying workflow-aware and instruction-conditioned spatial localization in domain-specific environments
- Reproducing and comparing results from the MedSPOT paper

### Out-of-Scope Use

- **Training** – MedSPOT is a test-only benchmark and should not be used as training data
- **Clinical decision-making** – not intended for use in real clinical or diagnostic settings
- **Autonomous clinical agents** – should not be used to build unsupervised agents operating in real clinical environments

## Dataset Structure

The dataset is organized hierarchically by software platform:

```
MedSPOT-Bench/
├── Annotations/
│   ├── 3DSlicer_Annotation.json
│   ├── DICOMscope_Annotation.json
│   ├── Weasis_Annotation.json
│   └── ...
└── Images/
    ├── 3DSlicer/
    ├── DICOMscope/
    ├── Weasis/
    └── ...
```

Each annotation JSON contains a list of tasks, where each task is a temporally ordered sequence of steps. Every step includes:

| Field | Description |
|-------|-------------|
| `step_id` | Step index within the task |
| `image_path` | Path to the corresponding GUI screenshot |
| `instruction` | Natural language instruction for the step |
| `actions.type` | Action type (click) |
| `actions.target` | Semantic description of the target UI element |
| `actions.bbox` | Normalized bounding box `[x, y, w, h]` in percentage coordinates |

The dataset contains **216 tasks** and **597 annotated keyframes** across 10 medical imaging platforms, with no train/validation split: it is a **test-only benchmark**.
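
As a minimal loading sketch, assuming a local checkout laid out as above and that each annotation file decodes to a list of tasks, each a list of step objects (the actual top-level JSON may wrap tasks in extra metadata):

```python
import json
from pathlib import Path

root = Path("MedSPOT-Bench")  # adjust to your local copy

with open(root / "Annotations" / "Weasis_Annotation.json") as f:
    tasks = json.load(f)  # assumed: list of tasks, each a list of steps

for task in tasks:
    for step in task:
        bbox = step["actions"]["bbox"]      # [x, y, w, h], percent coords
        image = root / step["image_path"]   # GUI screenshot for this step
        print(step["step_id"], step["instruction"], bbox)
```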

## Dataset Creation

### Curation Rationale

Existing GUI grounding benchmarks focus on general-purpose desktop or web applications and evaluate models on isolated, independent instruction-frame pairs. However, clinical imaging software presents unique challenges: complex domain-specific interfaces, modality-specific visualizations, and multi-step workflows where each action depends on the outcome of the previous one.

MedSPOT was created to address this gap by providing a benchmark that evaluates models on **sequential, workflow-aware GUI grounding** in real medical imaging environments, where errors compound across steps and reliability is critical.

### Source Data

#### Data Collection and Processing

Data was collected by directly recording real GUI interaction workflows on 10 open-source medical imaging platforms. The recording process involved:

- Simulating realistic clinical workflows including DICOM import, image navigation, annotation, measurement, and export tasks
- Extracting decision frames from each recorded video, retaining only frames that correspond to meaningful interaction points
- Annotating each frame manually using Label Studio with bounding boxes, instructions, and semantic target descriptions
- Verifying each annotation for causal consistency across the full task sequence

No external data sources, web scraping, or automated data collection methods were used. All data was generated directly by the authors through controlled GUI interaction sessions.

### Annotations

#### Annotation Process

Annotations were created through a structured multi-stage pipeline:

1. **Video Recording** – Real GUI interaction workflows were recorded as video sequences across all 10 medical imaging platforms, simulating realistic clinical tasks such as loading DICOM studies, navigating image series, applying transformations, performing measurements, and exporting results.

2. **Frame Extraction** – From each video, a minimal ordered subset of decision frames was extracted, where each frame corresponds to a meaningful interaction decision point and preserves temporal order.

3. **Manual Annotation** – Each extracted frame was manually annotated using **Label Studio** with the following fields:
   - Natural language instruction describing the required action
   - Semantic description of the target UI element
   - Normalized bounding box $(x, y, w, h) \in [0, 100]^4$ around the target element
   - Action type

4. **Verification** – Each annotated step was verified for correctness and causal consistency, ensuring that the sequence of steps forms a valid and complete clinical workflow.

In total, the benchmark comprises **216 video tasks** and **597 annotated keyframes** across 10 platforms, with an average of 2–3 interdependent steps per task.

#### Annotators

The annotations were created manually by the authors of the paper. Annotation was performed using Label Studio (open-source annotation tool), with each step verified for correctness and causal consistency across the interaction workflow.

---

## Citation

If you use MedSPOT in your research, please cite our paper:

```bibtex
@misc{medspot,
      title={MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI}, 
      author={Rozain Shakeel and Abdul Rahman Mohammad Ali and Muneeb Mushtaq and Tausifa Jan Saleem and Tajamul Ashraf},
      year={2026},
      eprint={2603.19993},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.19993}, 
}
```