Improve dataset card: Add metadata, links, introduction, usage, and citation

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +62 -0
README.md CHANGED
@@ -1,4 +1,15 @@
 ---
+license: cc-by-4.0
+task_categories:
+- image-text-to-text
+language:
+- en
+tags:
+- medical
+- multimodal
+- in-context-learning
+- vqa
+- benchmark
 dataset_info:
   features:
   - name: question
 
@@ -39,3 +50,54 @@ configs:
 - split: train
   path: data/train-*
 ---
+
+# SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning
+
+[Paper](https://huggingface.co/papers/2506.21355) | [Project page](https://smmile-benchmark.github.io) | [Code](https://github.com/smmile-benchmark/SMMILE)
+
+<div align="center">
+  <img src="https://raw.githubusercontent.com/smmile-benchmark/smmile-benchmark.github.io/main/figures/logo_final.png" alt="SMMILE Logo" width="500"/>
+</div>
+
+## Introduction
+
+Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from only a few examples, for instance by drawing on a handful of relevant previous cases or differential diagnoses. While multimodal large language models (MLLMs) have shown impressive advances in medical visual question answering (VQA) and multi-turn chat, their ability to learn multimodal tasks from context is largely unknown.
+
+We introduce **SMMILE** (Stanford Multimodal Medical In-context Learning Evaluation), the first multimodal medical ICL benchmark. A team of clinical experts curated its ICL problems to scrutinize MLLMs' ability to learn multimodal tasks from context at inference time.
+
+## Dataset Access
+
+The SMMILE datasets are available on the Hugging Face Hub:
+
+```python
+from datasets import load_dataset
+
+smmile = load_dataset('smmile/SMMILE')
+smmile_pp = load_dataset('smmile/SMMILE-plusplus')
+```
+
+Note: accessing the dataset requires authentication, so set your Hugging Face token as an environment variable before loading:
+```bash
+export HF_TOKEN=your_token_here
+```
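+
+Each SMMILE problem supplies several multimodal in-context examples followed by a final query, so a model must infer the task from the examples alone. As a purely illustrative sketch (column names other than `question` are assumptions, not this card's schema), a text-only prompt for one problem could be assembled like this:
+
+```python
+def build_icl_prompt(support, query_question):
+    """Interleave (question, answer) support pairs, then append the query."""
+    # NOTE: the 'answer' key is an assumed field name for illustration only.
+    parts = [f"Q: {ex['question']}\nA: {ex['answer']}" for ex in support]
+    parts.append(f"Q: {query_question}\nA:")
+    return "\n\n".join(parts)
+```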
+
+## License
+
+This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
+
+## Citation
+
+If you find our dataset useful for your research, please cite the following paper:
+
+```bibtex
+@article{Rieff2025SMMILEAE,
+  title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
+  author={Max Rieff and Maithra Varma and Oliver Rabow and Sanjana Adithan and Joon Kim and Ken Chang and Haneol Lee and Naimish Rohatgi and Christian Bluethgen and Mustafa S. Muneer and Jean-Baptiste Delbrouck and Michael Moor},
+  journal={arXiv preprint arXiv:2506.21355},
+  year={2025},
+  url={https://api.semanticscholar.org/CorpusID:270381659}
+}
+```
+
+## Acknowledgments
+
+We thank the clinical experts who contributed to curating the benchmark dataset.