Update README.md
#4
by rozain - opened
README.md
CHANGED

# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}

# MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI

<div align="center">
<figure align="center"> <img src="https://raw.githubusercontent.com/Tajamul21/MedSPOT/main/Images/medspot2.jpeg" width="65%"> </figure>
</div>

## Dataset Summary

MedSPOT is a benchmark for evaluating Multimodal Large Language Models (MLLMs) on GUI grounding tasks in medical imaging software. It measures a model's ability to localize and interact with UI elements across 10 medical imaging applications, including 3D Slicer, DICOMscope, Weasis, MITK, and others.

## Dataset Details

### Dataset Description

**MedSPOT** is a workflow-aware sequential GUI grounding benchmark designed to evaluate Multimodal Large Language Models (MLLMs) on their ability to interact with real-world clinical imaging software. Unlike conventional grounding benchmarks that evaluate isolated predictions, MedSPOT models grounding as a **temporally dependent sequence of spatial decisions** within evolving interface states – reflecting the procedural dependency structure inherent in clinical workflows.

The benchmark spans **10 open-source medical imaging platforms** covering three primary interface categories: DICOM/PACS viewers, segmentation and research tools, and web-based viewers. The platforms include 3D Slicer, DICOMscope, Weasis, MITK, ITK-SNAP, RadiAnt, MicroDICOM, Orthanc, Ginkgo-CADx, and BlueLight DICOM Viewer – supporting diverse imaging modalities including CT, MRI, PET, X-ray, and Ultrasound.

MedSPOT evaluates models across three metrics, including:

- **SHR** (Step Hit Rate) – per-step grounding accuracy
- **S1A** (Step 1 Accuracy) – accuracy on the first step of each task
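
As one concrete reading of these definitions, here is a minimal scoring sketch. It assumes each task's results arrive as a list of per-step hit booleans; this format is illustrative only and is not the benchmark's official evaluation code.

```python
from typing import List

def step_hit_rate(tasks: List[List[bool]]) -> float:
    """SHR: fraction of all steps, pooled across tasks, grounded correctly."""
    hits = [hit for task in tasks for hit in task]
    return sum(hits) / len(hits)

def step1_accuracy(tasks: List[List[bool]]) -> float:
    """S1A: fraction of tasks whose first step was grounded correctly."""
    return sum(task[0] for task in tasks) / len(tasks)

# Hypothetical per-step results for three tasks of 2-3 steps each.
results = [[True, False], [True, True, True], [False, True]]
print(f"SHR = {step_hit_rate(results):.3f}")   # 4/7 = 0.571
print(f"S1A = {step1_accuracy(results):.3f}")  # 2/3 = 0.667
```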

### Dataset Sources

- **Repository:** [GitHub](https://github.com/Tajamul21/MedSPOT)
- **Dataset:** [Hugging Face](https://huggingface.co/datasets/Tajamul21/MedSPOT)
- **Paper:** [arXiv](https://arxiv.org/abs/2603.19993)
- **Website:** [Project page](https://rozainmalik.github.io/MedSPOT_web/)
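
The benchmark files can be fetched from the Hub repository listed above; a minimal sketch, assuming the standard `huggingface_hub` client is installed:

```python
# Minimal download sketch using the dataset repository id listed above.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Tajamul21/MedSPOT", repo_type="dataset")
print("MedSPOT downloaded to:", local_dir)
```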

## Uses

MedSPOT is a **benchmark dataset** intended strictly for **evaluation** of Multimodal Large Language Models (MLLMs). It should **not** be used for:

- **Training** – MedSPOT is a test-only benchmark and should not be used as training data
- **Clinical decision-making** – not intended for use in real clinical or diagnostic settings
- **Autonomous clinical agents** – should not be used to build unsupervised agents operating in real clinical environments

## Dataset Structure

The dataset is organized hierarchically by software platform:

```
MedSPOT-Bench/
├── Annotations/
```

Each annotation JSON contains a list of tasks, where each task is a temporally ordered sequence of steps.

The dataset contains **216 tasks** and **597 annotated keyframes** across 10 medical imaging platforms, with no train/validation split – it is a **test-only benchmark**.
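
To make the layout concrete, here is a hedged loading sketch. The file name and the per-step field names (`steps`, `instruction`, `bbox`) are illustrative assumptions, not the published schema; consult the GitHub repository for the exact annotation format.

```python
import json
from pathlib import Path

# Hypothetical walk over one platform's annotation file; the file name and
# field names are assumptions for illustration, not the published schema.
ann_file = Path("MedSPOT-Bench/Annotations/3DSlicer.json")
tasks = json.loads(ann_file.read_text())

for task in tasks:
    # Each task is a temporally ordered sequence of steps; a model must
    # ground each step in the interface state left by the previous one.
    for idx, step in enumerate(task["steps"], start=1):
        print(idx, step.get("instruction"), step.get("bbox"))
```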

## Dataset Creation

### Curation Rationale

Existing GUI grounding benchmarks focus on general-purpose desktop or web applications.

MedSPOT was created to address this gap by providing a benchmark that evaluates models on **sequential, workflow-aware GUI grounding** in real medical imaging environments, where errors compound across steps and reliability is critical.

### Source Data

#### Data Collection and Processing

Data was collected by directly recording real GUI interaction workflows on 10 open-source medical imaging platforms.

No external data sources, web scraping, or automated data collection methods were used. All data was generated directly by the authors through controlled GUI interaction sessions.

### Annotations

#### Annotation Process

Annotations were created through a structured multi-stage pipeline.

In total, the benchmark comprises **216 video tasks** and **597 annotated keyframes** across 10 platforms, with an average of 2–3 interdependent steps per task.

#### Annotators

The annotations were created manually by the authors of the paper using Label Studio, an open-source annotation tool. Each step was verified for correctness and causal consistency across the interaction workflow.

---

## Citation

If you use MedSPOT in your research, please cite our paper:

```bibtex
@misc{medspot,
  title={MedSPOT: A Workflow-Aware Sequential Grounding Benchmark for Clinical GUI},
  author={Rozain Shakeel and Abdul Rahman Mohammad Ali and Muneeb Mushtaq and Tausifa Jan Saleem and Tajamul Ashraf},
  year={2026},
  eprint={2603.19993},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.19993},
}
```