VLM-SubtleBench is a benchmark designed to evaluate VLMs on **subtle comparative reasoning** — detecting fine-grained differences between highly similar image pairs that are easy for humans but challenging for state-of-the-art VLMs. Unlike prior benchmarks restricted to natural image datasets, VLM-SubtleBench spans diverse domains including industrial, aerial, and medical imagery.
## Benchmark Summary

| Statistic | Value |
|---|---|
| **Total QA pairs** | 12,923 |
| **Difference types** | 10 |
| **Image domains** | 6 (Natural, Industrial, Aerial, Synthetic, Medical) |
| **Data sources** | 14 |
| **Human captions** | 1,200 |
| **Splits** | test (11,688) / val (1,235) |
| **Task format** | Multiple-choice VQA + Image Difference Captioning |

> **Note**: Medical domain images (MIMIC-CXR, 362 pairs) are not included due to licensing restrictions, but their QA entries are included in `qa.json`. See [Medical Data](#medical-data-mimic-cxr) below for instructions on how to obtain the images.
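As a minimal sketch of working with the annotation file, the entries in `qa.json` could be tallied per split and image domain. The schema here is an assumption — field names such as `split` and `domain` may differ in the actual release:

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical qa.json layout: a flat list of QA entries, each tagged
# with a split and an image domain (field names are assumptions).
sample = [
    {"split": "test", "domain": "Natural", "question": "...", "answer": "A"},
    {"split": "test", "domain": "Medical", "question": "...", "answer": "B"},
    {"split": "val",  "domain": "Aerial",  "question": "...", "answer": "C"},
]
Path("qa.json").write_text(json.dumps(sample))

entries = json.loads(Path("qa.json").read_text())
by_split = Counter(e["split"] for e in entries)
by_domain = Counter(e["domain"] for e in entries)

# On the full benchmark these tallies should match the summary table
# (test: 11,688 / val: 1,235); here they count the toy sample above.
print(by_split)
print(by_domain)
```
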
## Medical Data (MIMIC-CXR)

The medical domain QA entries (362 attribute comparison pairs from MIMIC-CXR chest X-rays) are included in `qa.json`, but the corresponding images are not included due to [PhysioNet licensing requirements](https://physionet.org/content/mimic-cxr-jpg/2.1.0/).
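Because the MIMIC-CXR images must be obtained separately from PhysioNet, an evaluation script will likely need to skip entries whose image files are not present locally. A minimal sketch, assuming each entry references its image pair via hypothetical `image_a`/`image_b` path fields:

```python
from pathlib import Path

# Assumed entry layout: each QA pair points at two image files
# (the "image_a"/"image_b" field names are hypothetical).
entries = [
    {"id": 1, "image_a": "images/nat_001_a.jpg", "image_b": "images/nat_001_b.jpg"},
    {"id": 2, "image_a": "mimic/cxr_042_a.jpg",  "image_b": "mimic/cxr_042_b.jpg"},
]

root = Path(".")
Path("images").mkdir(exist_ok=True)
for p in ("images/nat_001_a.jpg", "images/nat_001_b.jpg"):
    Path(p).touch()  # stand-ins for downloaded benchmark images

def has_images(entry):
    """True if both images of the pair exist under the data root."""
    return (root / entry["image_a"]).exists() and (root / entry["image_b"]).exists()

usable = [e for e in entries if has_images(e)]
skipped = [e["id"] for e in entries if not has_images(e)]
print(f"evaluating {len(usable)} pairs, skipping {skipped}")
```

Once the MIMIC-CXR images are downloaded under the expected directory, the same check admits the 362 medical pairs without any code change.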