---
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
  - image-segmentation
tags:
  - surgical
  - medical
  - multimodal-llm
  - benchmark
---

# SurgMLLMBench: A Multimodal Large Language Model Benchmark Dataset for Surgical Scene Understanding

This dataset was presented in the paper [SurgMLLMBench: A Multimodal Large Language Model Benchmark Dataset for Surgical Scene Understanding](https://arxiv.org/abs/2511.21339).

## Dataset Overview

SurgMLLMBench is a multimodal benchmark designed for training and evaluating interactive multimodal large language models in surgical scene understanding. It integrates diverse surgical video datasets—including laparoscopic surgery, robot-assisted surgery, and micro-surgical training—into a unified framework with harmonized workflow labels, pixel-level instrument segmentation, and structured VQA annotations.

SurgMLLMBench integrates six surgical video datasets spanning multiple surgical domains: Cholec80, AutoLaparo, EndoVis2018, GraSP, MISAW, and MAVIS.

These datasets collectively cover over 112 hours of surgical video and more than 560K annotated frames, providing rich supervision across multiple domains and procedures.

## Unified Annotation Schema

Because existing surgical datasets differ widely in taxonomy, task definitions, frame rates, resolutions, and annotation formats, SurgMLLMBench applies a comprehensive standardization pipeline:

  1. Frame-level conversion: All videos are converted into frame-level images to unify temporal handling across datasets.

  2. Harmonized metadata schema: A COCO-style metadata structure is used to represent workflow and segmentation labels in a consistent format.

Each frame is stored with the following unified fields:

- `video_id`
- `frame_id`
- `stage`
- `phase`
- `step`
- `instrument_action`
- `segmentation`

Missing labels are kept as empty fields to maintain structural consistency across datasets.
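
For concreteness, here is a minimal sketch of one frame record under this schema, written as a Python dict. The field names follow the list above; the example values are hypothetical.

```python
# A minimal sketch of one frame record in the unified schema.
# Field names follow the README; the values are illustrative only.
frame_record = {
    "video_id": "cholec80_video01",      # source video identifier (hypothetical)
    "frame_id": 1500,                    # frame index within the video
    "stage": "",                         # empty: source dataset has no stage labels
    "phase": "CalotTriangleDissection",  # workflow phase label (hypothetical)
    "step": "",                          # empty: Cholec80 provides phase labels only
    "instrument_action": "",             # empty when no action labels are available
    "segmentation": [],                  # COCO-style masks, empty if absent
}
```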

## Multi-task Annotations

Across all datasets, SurgMLLMBench provides supervision for the following complementary tasks:

- Stage recognition
- Phase recognition
- Step recognition
- Instrument-centered action recognition
- Instrument segmentation (pixel-level)

These tasks capture global workflow, fine-grained procedural steps, and pixel-level spatial understanding, enabling multimodal LLMs to learn both semantic reasoning and visual grounding.

## VQA Prompt Generation

SurgMLLMBench augments the structured annotations above with template-based VQA annotations to support interactive, conversation-style models. These provide question–answer pairs that are tightly coupled with the underlying workflow labels. Each frame can be paired with one or more VQA samples, drawn from four template families (a generation sketch follows the list):

1. **Workflow queries**
   - Purpose: ask about stage, phase, and step for the current frame.
   - Example template: "Which stage, phase, and step are shown in this image?"
2. **Instrument count queries**
   - Purpose: ask how many tools are currently visible.
   - Example template: "How many surgical tools are visible in this image?"
3. **Instrument type queries**
   - Purpose: ask which instrument categories are present.
   - Example template: "Which instruments are present in this image?"
4. **Instrument action queries**
   - Purpose: ask about the functional action of one or more tools when action labels are available (e.g., in EndoVis2018, GraSP, MISAW).
   - Example template: "What action is the needle holder performing in this image?"
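
As referenced above, the following Python sketch shows how such template-based pairs could be generated from a unified frame record. The templates mirror the families listed here, but the helper itself is illustrative, not the release tooling.

```python
# A sketch of template-based VQA pair generation, assuming the unified
# frame record shown earlier; illustrative only.
def generate_vqa_pairs(record: dict) -> list[tuple[str, str]]:
    pairs = []
    # 1. Workflow query
    pairs.append((
        "Which stage, phase, and step are shown in this image?",
        f"stage: {record['stage'] or 'None'}, "
        f"phase: {record['phase'] or 'None'}, "
        f"step: {record['step'] or 'None'}",
    ))
    # 2. Instrument count query (assumes one mask per visible instrument)
    pairs.append((
        "How many surgical tools are visible in this image?",
        str(len(record["segmentation"])),
    ))
    # 4. Instrument action query, only when action labels exist
    if record["instrument_action"]:
        pairs.append((
            "What action is the instrument performing in this image?",
            record["instrument_action"],
        ))
    return pairs
```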

## Dataset Source Metadata Injection

Instead of treating dataset identity as a separate VQA task, SurgMLLMBench embeds this information directly at the beginning of each question for files whose names end in `with_metadata.json`. This ensures models receive explicit domain cues without needing a separate query category. Files without the `with_metadata` suffix contain pure template-driven VQA questions.

| Dataset | Metadata prefix |
|---|---|
| Cholec80 | This image is included in the cholec80 dataset. There is only phase information, so you answer step as None. |
| EndoVis2018 | This image is included in the endovis2018 dataset. |
| AutoLaparo (phase) | This image is included in the autolaparo dataset. There is only phase information, so you answer step as None. |
| AutoLaparo (tool) | This image is included in the autolaparo dataset. |
| GraSP | This image is included in the grasp dataset. |
| MISAW | This image is included in the misaw dataset. |
| MAVIS | This image is included in the mavis dataset. |

**Note:** AutoLaparo provides phase labels at 1 FPS, whereas tool annotations are derived from segmentation masks sampled at 25 FPS. The phase and tool test sets therefore use different frame indices, resulting in separate test `.jsonl` files for the two tasks.
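
The following Python sketch illustrates the prefix injection applied in `*_with_metadata.json` files. The prefix strings come from the table above; the helper itself is an assumption about how the injection could be implemented, not the release code.

```python
# Dataset-source prefixes, copied from the table above.
METADATA_PREFIX = {
    "cholec80": ("This image is included in the cholec80 dataset. "
                 "There is only phase information, so you answer step as None."),
    "endovis2018": "This image is included in the endovis2018 dataset.",
    "grasp": "This image is included in the grasp dataset.",
    "misaw": "This image is included in the misaw dataset.",
    "mavis": "This image is included in the mavis dataset.",
    # AutoLaparo uses the phase-style prefix for phase files and the
    # plain prefix for tool files (see the table above).
}

def inject_metadata(question: str, dataset: str) -> str:
    # Prepend the domain cue so the model receives an explicit source hint.
    return f"{METADATA_PREFIX[dataset]} {question}"

print(inject_metadata(
    "Which stage, phase, and step are shown in this image?", "endovis2018"))
```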

## File Naming Convention

Filenames follow a consistent structure:

| File type | Meaning |
|---|---|
| `*_phase_train.json` | Phase-related VQA training samples |
| `*_tool_train.json` | Tool-type VQA training samples |
| `*_segmentation_train.p` | Segmentation training data |
| `*_test_question.jsonl` | Test questions |
| `*_test_answer.jsonl` | Test answers |
| `*_with_metadata.json` | Questions include dataset-source prefixes |

This naming scheme allows users to understand task, split, and metadata status directly from the filename.
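
A small Python helper can recover these attributes directly from a filename. This is an illustrative sketch of the convention above, not part of the dataset tooling.

```python
# Infer dataset, task, split, and metadata status from a filename,
# following the naming convention described above (illustrative helper).
def parse_filename(name: str) -> dict:
    return {
        "dataset": name.split("_")[0],
        "task": ("segmentation" if "segmentation" in name
                 else "phase" if "_phase_" in name
                 else "tool" if "_tool_" in name
                 else "vqa"),
        "split": "test" if "_test" in name else "train",
        "with_metadata": "with_metadata" in name,
        "role": ("question" if name.endswith("_question.jsonl")
                 else "answer" if name.endswith("_answer.jsonl")
                 else "data"),
    }

print(parse_filename("autolaparo_phase_test_question.jsonl"))
# {'dataset': 'autolaparo', 'task': 'phase', 'split': 'test',
#  'with_metadata': False, 'role': 'question'}
```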

## Dataset Structure

The dataset is structured as follows:
```
SurgMLLMBench/
├── AutoLaparo/
│   ├── train/
│   │   ├── autolaparo_phase_train.json
│   │   ├── autolaparo_tool_train.json
│   │   ├── autolaparo_segmentation_train.p
│   │   └── ...
│   └── test/
│       ├── autolaparo_phase_test_question.jsonl
│       ├── autolaparo_phase_test_answer.jsonl
│       ├── autolaparo_segmentation_test.p
│       └── ...
├── Cholec80/
│   ├── train/
│   │   ├── cholec80_phase_train.json
│   │   ├── cholec80_tool_train.json
│   │   └── ...
│   └── test/
│       ├── cholec80_test_question.jsonl
│       ├── cholec80_test_answer.jsonl
│       └── ...
├── EndoVis2018/
│   ├── train/
│   │   ├── endovis2018_train.json
│   │   ├── endovis2018_train_with_metadata.json
│   │   └── endovis2018_segmentation_train.p
│   └── test/
│       ├── endovis2018_test_question.jsonl
│       ├── endovis2018_test_answer.jsonl
│       ├── endovis2018_segmentation_test.p
│       └── ...
├── GraSP/
│   ├── train/
│   │   ├── grasp_phase_train.json
│   │   ├── grasp_tool_train.json
│   │   ├── grasp_segmentation_train.p
│   │   └── ...
│   └── test/
│       ├── grasp_test_question.jsonl
│       ├── grasp_test_answer.jsonl
│       ├── grasp_segmentation_test.p
│       └── ...
├── MAVIS/
│   ├── train/
│   │   ├── mavis_phase_train.json
│   │   ├── mavis_tool_train.json
│   │   ├── mavis_segmentation_train.p
│   │   └── ...
│   └── test/
│       ├── mavis_test_question.jsonl
│       ├── mavis_test_answer.jsonl
│       ├── mavis_segmentation_test.p
│       └── ...
└── MISAW/
    ├── train/
    │   ├── misaw_phase_train.json
    │   ├── misaw_tool_train.json
    │   ├── misaw_segmentation_train.p
    │   └── ...
    └── test/
        ├── misaw_test_question.jsonl
        ├── misaw_test_answer.jsonl
        ├── misaw_segmentation_test.p
        └── ...
```

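Given this layout, a minimal loading sketch in Python might look as follows. The paths match the tree above; the internal structure of each file (JSON list, one JSON object per JSONL line, pickled masks) is an assumption based on the extensions.

```python
import json
import pickle

# Template-driven VQA training samples (.json)
with open("SurgMLLMBench/Cholec80/train/cholec80_phase_train.json") as f:
    train_samples = json.load(f)

# Test questions are stored as one JSON object per line (.jsonl)
with open("SurgMLLMBench/Cholec80/test/cholec80_test_question.jsonl") as f:
    questions = [json.loads(line) for line in f]

# Segmentation data ships as pickle files (.p); unpickle only trusted files
with open("SurgMLLMBench/MISAW/train/misaw_segmentation_train.p", "rb") as f:
    seg_train = pickle.load(f)

print(len(train_samples), len(questions))
```
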
## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{choi2025surgmllmbenchmultimodallargelanguage,
      title={SurgMLLMBench: A Multimodal Large Language Model Benchmark Dataset for Surgical Scene Understanding},
      author={Tae-Min Choi and Tae Kyeong Jeong and Garam Kim and Jaemin Lee and Yeongyoon Koh and In Cheul Choi and Jae-Ho Chung and Jong Woong Park and Juyoun Park},
      year={2025},
      eprint={2511.21339},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.21339},
}
```