---
task_categories:
  - image-text-to-text
license: cc-by-4.0
language:
  - en
library_name: datasets
tags:
  - medical
  - multimodal
  - in-context-learning
  - benchmark
  - vqa
dataset_info:
  features:
    - name: answer
      dtype: string
    - name: image_url
      dtype: string
    - name: original_order
      dtype: string
    - name: parquet_path
      dtype: string
    - name: question
      dtype: string
    - name: speciality
      dtype: string
    - name: flag_answer_format
      dtype: string
    - name: flag_image_type
      dtype: string
    - name: flag_cognitive_process
      dtype: string
    - name: flag_rarity
      dtype: string
    - name: flag_difficulty_llms
      dtype: string
    - name: image
      dtype: image
    - name: original_problem_id
      dtype: string
    - name: permutation_number
      dtype: string
    - name: problem_id
      dtype: string
    - name: order
      dtype: int64
  splits:
    - name: train
      num_bytes: 1228986309.804
      num_examples: 5994
  download_size: 154747960
  dataset_size: 1228986309.804
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning

SMMILE Logo

## Introduction

Multimodal in-context learning (ICL) remains underexplored despite significant potential for domains such as medicine. Clinicians routinely encounter diverse, specialized tasks requiring adaptation from limited examples, such as drawing insights from a few relevant prior cases or considering a constrained set of differential diagnoses. While multimodal large language models (MLLMs) have shown advances in medical visual question answering (VQA), their ability to learn multimodal tasks from context is largely unknown.

We introduce SMMILE, the first expert-driven multimodal ICL benchmark for medical tasks. Eleven medical experts curated problems, each including a multimodal query and multimodal in-context examples as task demonstrations. SMMILE encompasses 111 problems (517 question-image-answer triplets) covering 6 medical specialties and 13 imaging modalities. We further introduce SMMILE++, an augmented variant with 1038 permuted problems. A comprehensive evaluation of 15 MLLMs demonstrates that most models exhibit moderate to poor multimodal ICL ability in medical tasks. In open-ended evaluations, ICL contributes only an 8% average improvement over zero-shot on SMMILE and 9.4% on SMMILE++. We also observe a susceptibility to irrelevant in-context examples: even a single noisy or irrelevant example can degrade performance by up to 9.5%. Moreover, example ordering exhibits a recency bias: placing the most relevant example last can yield performance improvements of up to 71%. Our findings highlight critical limitations and biases in current MLLMs when learning multimodal medical tasks from context.

## Dataset Access

The SMMILE dataset is available on Hugging Face. You can load it using the `datasets` library:

```python
from datasets import load_dataset

smmile = load_dataset('smmile/SMMILE', token=YOUR_HF_TOKEN)
smmile_pp = load_dataset('smmile/SMMILE-plusplus', token=YOUR_HF_TOKEN)
```
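Per the schema above, each row is one question-image-answer triplet, and triplets belonging to the same problem share a `problem_id` and are sequenced by the integer `order` field. A minimal sketch of regrouping flat rows into ordered problems, using synthetic stand-in rows instead of the real split (treating the final position as the query is an assumption to verify against the dataset):

```python
from collections import defaultdict

# Synthetic stand-ins for rows of the 'train' split (real rows also carry
# image, speciality, and flag_* fields).
rows = [
    {"problem_id": "p1", "order": 2, "question": "Q3?", "answer": "A3"},
    {"problem_id": "p1", "order": 0, "question": "Q1?", "answer": "A1"},
    {"problem_id": "p1", "order": 1, "question": "Q2?", "answer": "A2"},
]

# Group triplets by problem, then sort each group by its `order` field.
problems = defaultdict(list)
for row in rows:
    problems[row["problem_id"]].append(row)
for triplets in problems.values():
    triplets.sort(key=lambda r: r["order"])

# Assumption: the last triplet is the query, the rest are demonstrations.
demos, query = problems["p1"][:-1], problems["p1"][-1]
```

The same regrouping applies to the loaded `train` split, since SMMILE++ stores each permuted problem as a separate `problem_id`.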

Note: you can also provide your Hugging Face token via an environment variable:

```shell
export HF_TOKEN=your_token_here
```
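In Python, the exported variable can be read back and passed to `load_dataset` explicitly; recent versions of `huggingface_hub` (and therefore `datasets`) also pick up `HF_TOKEN` automatically, so this step is optional. A minimal sketch:

```python
import os

# Read the token exported above; None if HF_TOKEN is not set.
hf_token = os.environ.get("HF_TOKEN")

# Could then be passed as load_dataset('smmile/SMMILE', token=hf_token).
```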

## Citation

If you find this dataset useful for your research, please cite the corresponding paper:

```bibtex
@article{rieff2025smmile,
  title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
  author={Rieff, Maximilian and Varma, Mayank and Rabow, Oliver and Adithan, Swetha and Kim, Jaehee and Chang, Kyeong and Lee, Han and Rohatgi, Nikhil and Bluethgen, Conrad and Muneer, Mohammed Shaheer and Delbrouck, Jean-Baptiste and Moor, Michael},
  journal={arXiv preprint arXiv:2506.21355},
  year={2025},
  url={https://arxiv.org/abs/2506.21355}
}
```