---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- vlm
- benchmark
- comparative-reasoning
- subtle-difference
- image-comparison
- multi-image
size_categories:
- 10K<n<100K
---

> **Note**: Medical domain images (MIMIC-CXR, 362 pairs) are not included due to licensing restrictions, but their QA entries are included in `qa.json`. See [Medical Data](#medical-data-mimic-cxr) below for instructions on how to obtain the images.

## Medical Data (MIMIC-CXR)

The medical domain QA entries (362 attribute-comparison pairs from MIMIC-CXR chest X-rays, 664 unique images) are included in `qa.json`, but the corresponding images are not included due to [PhysioNet licensing requirements](https://physionet.org/content/mimic-cxr-jpg/2.1.0/).

### Step 1: Obtain PhysioNet Credentialed Access

1. Create an account at [PhysioNet](https://physionet.org/)
2. Complete the required [CITI training course](https://physionet.org/about/citi-course/) for "Data or Specimens Only Research"
3. Go to [MIMIC-CXR-JPG v2.1.0](https://physionet.org/content/mimic-cxr-jpg/2.1.0/) and sign the data use agreement
4. Wait for your access to be approved

### Step 2: Download Images

We provide a script that automatically downloads only the 664 images required by `qa.json` and places them at the expected paths (`images/mimic/...`).
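The core logic of such a script can be sketched in a few lines of Python. This is an illustration only, not the actual `download_mimic.py`; in particular, the `"images"` key used to read image paths out of each QA entry is an assumption — check `qa.json` for the real field name.

```python
import subprocess
from pathlib import Path

# Base URL for credentialed MIMIC-CXR-JPG v2.1.0 file downloads on PhysioNet.
PHYSIONET_BASE = "https://physionet.org/files/mimic-cxr-jpg/2.1.0/files"


def mimic_paths_from_qa(qa_entries):
    """Collect the relative MIMIC image paths referenced by QA entries.

    Assumes each entry lists its image paths under an "images" key
    (hypothetical field name -- inspect qa.json for the actual schema).
    """
    paths = set()
    for entry in qa_entries:
        for p in entry.get("images", []):
            if p.startswith("images/mimic/"):
                paths.add(p)
    return sorted(paths)


def physionet_url(rel_path):
    """Map images/mimic/p15/.../{hash}.jpg to its PhysioNet file URL."""
    return f"{PHYSIONET_BASE}/{rel_path.removeprefix('images/mimic/')}"


def download(rel_path, user, password, root="."):
    """Fetch one image via wget, preserving the directory hierarchy.

    Skips files that already exist, so re-running is cheap and safe.
    """
    dest = Path(root) / rel_path
    if dest.exists():
        return
    dest.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["wget", "--user", user, "--password", password,
         physionet_url(rel_path), "-O", str(dest)],
        check=True,
    )
```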
```bash
python download_mimic.py --user <USERNAME> --password <PASSWORD>
```

The script:
- Parses `qa.json` to find all required MIMIC-CXR image paths
- Downloads each image from PhysioNet via `wget`
- Places them under `images/mimic/`, preserving the original directory hierarchy (e.g., `images/mimic/p15/p15592981/s55194630/{hash}.jpg`)
- Skips images that already exist, so it is safe to re-run

You can also download individual images manually:

```bash
wget --user <USERNAME> --password <PASSWORD> \
  https://physionet.org/files/mimic-cxr-jpg/2.1.0/files/p15/p15000170/s54385701/3ea0cd5d-b6ef4a9d-bd053deb-a611067c-284e4144.jpg \
  -O images/mimic/p15/p15000170/s54385701/3ea0cd5d-b6ef4a9d-bd053deb-a611067c-284e4144.jpg
```

## Download and Evaluation

### Download

```bash
# Using huggingface_hub
pip install huggingface_hub
python -c "from huggingface_hub import snapshot_download; snapshot_download('KRAFTON/VLM-SubtleBench', repo_type='dataset', local_dir='VLM-SubtleBench')"
```

Or clone directly with Git LFS:

```bash
git lfs install
git clone https://huggingface.co/datasets/KRAFTON/VLM-SubtleBench
```

### Evaluation

For evaluation code and instructions, please refer to the official GitHub repository: https://github.com/krafton-ai/VLM-SubtleBench

## Citation

```bibtex
@inproceedings{kim2026vlmsubtlebench,
  title={VLM-SubtleBench: How Far Are VLMs from Human-Level Subtle Comparative Reasoning?},
  author={Kim, Minkyu and Lee, Sangheon and Park, Dongmin},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026},
  url={https://arxiv.org/abs/2603.07888}
}
```