Improve dataset card for MedBLINK: Add metadata, project page, and abstract

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +34 -1
README.md CHANGED
@@ -1,4 +1,14 @@
 ---
+task_categories:
+- image-text-to-text
+language:
+- en
+tags:
+- medical
+- benchmark
+- multimodal
+- vision-language
+- vqa
 dataset_info:
   features:
   - name: task
@@ -24,4 +34,27 @@ configs:
     path: data/val-*
 ---
 
-Paper: https://arxiv.org/pdf/2508.02951
+# MedBLINK: Probing Basic Perception in Multimodal Language Models for Medicine
+
+## Paper
+[https://arxiv.org/pdf/2508.02951](https://arxiv.org/pdf/2508.02951)
+
+## Project Page
+[https://medblink-benchmark.github.io](https://medblink-benchmark.github.io)
+
+## Abstract
+Multimodal language models (MLMs) show promise for clinical decision support and diagnostic reasoning, raising the prospect of end-to-end automated medical image interpretation. However, clinicians are highly selective in adopting AI tools; a model that makes errors on seemingly simple perception tasks, such as determining image orientation or identifying whether a CT scan is contrast-enhanced, is unlikely to be adopted for clinical tasks. We introduce MedBLINK, a benchmark designed to probe these models for such perceptual abilities. MedBLINK spans eight clinically meaningful tasks across multiple imaging modalities and anatomical regions, totaling 1,429 multiple-choice questions over 1,605 images. We evaluate 19 state-of-the-art MLMs, including general-purpose (GPT-4o, Claude 3.5 Sonnet) and domain-specific (Med-Flamingo, LLaVA-Med, RadFM) models. While human annotators achieve 96.4% accuracy, the best-performing model reaches only 65%. These results show that current MLMs frequently fail at routine perceptual checks, suggesting the need to strengthen their visual grounding to support clinical adoption. Data is available on our project page.
+
+## Dataset Description
+MedBLINK is a benchmark dataset designed to evaluate the basic perceptual abilities of multimodal language models (MLMs) in a medical context. It targets perception tasks that are fundamental to the safe and effective adoption of AI tools in clinical settings.
+
+The dataset comprises 1,429 multiple-choice questions over 1,605 medical images, covering eight clinically meaningful tasks across various imaging modalities and anatomical regions. Example tasks include determining image orientation and identifying contrast enhancement in CT scans.
+
+Each entry in the validation split has the following features:
+- `task`: The clinical perception task category.
+- `question_id`: A unique identifier for the question.
+- `image`: The medical image associated with the question.
+- `question`: The multiple-choice question text.
+- `answer`: The correct answer to the question.
+
+This dataset serves as a resource for researchers and developers aiming to strengthen the visual grounding and perceptual capabilities of MLMs for medical applications.
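
For reference, here is a minimal sketch of how the validation split described in the updated card could be loaded and inspected with the Hugging Face `datasets` library. The repository id `your-org/MedBLINK` is a placeholder (the actual Hub id is not stated in this diff), and the field names follow the feature list above.

```python
from datasets import load_dataset

# Load the validation split of the MedBLINK benchmark.
# NOTE: "your-org/MedBLINK" is a placeholder; substitute the dataset's actual Hub id.
ds = load_dataset("your-org/MedBLINK", split="val")

# Each entry exposes the features listed in the card:
# task, question_id, image, question, answer.
example = ds[0]
print(example["task"])         # clinical perception task category
print(example["question_id"])  # unique question identifier
print(example["question"])     # multiple-choice question text
print(example["answer"])       # correct answer
print(example["image"].size)   # the image is decoded as a PIL Image
```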