---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- vlm
- benchmark
- comparative-reasoning
- subtle-difference
- image-comparison
- multi-image
size_categories:
- 10K<n<100K
---

# VLM-SubtleBench

> **Note**: Medical domain images (MIMIC-CXR, 362 pairs) are not included due to licensing restrictions, but their QA entries are included in `qa.json`. See [Medical Data](#medical-data-mimic-cxr) below for instructions on how to obtain the images.

## Difference Types

VLM-SubtleBench covers 10 representative difference types, spanning from low-level visual variations to high-level semantic changes:

| Category | Description | Count |
|----------|-------------|-------|
| Attribute | Variations in object properties (color, size, shape) | 1,196 |
| State | Object condition changes (breakage, cracks, peeling) | 1,148 |
| Emotion | Comparative judgments of facial expression intensity | 1,108 |
| Temporal | Identifying which image depicts an earlier/later event | 1,117 |
| Spatial | Changes in arrangement or relative position | 1,235 |
| Existence | Whether an object has appeared or disappeared | 1,204 |
| Quantity | Changes in object count | 1,599 |
| Quality | Degradations such as blur, noise, or overexposure | 1,109 |
| Viewpoint | Camera perspective changes (pan, tilt, roll, orbit) | 1,734 |
| Action | Differences in human/object poses or activities | 1,111 |

## Image Domains and Sources

| Domain | Count | Sources |
|--------|-------|---------|
| Natural | 7,526 | CameraBench, ChangeIt, COCO, CREMA-D/RAVDESS/AFEW-VA/DAiSEE, MegaFruits, UCF-QNRF, VLM4D, YouTube-8M |
| Synthetic | 3,190 | Procedurally generated primitive scenes (circles, squares, triangles on white backgrounds) |
| Industrial | 1,202 | MVTec-AD, MVTec-LOCO |
| Aerial | 643 | LEVIR-MCI, UBC |
| Medical | 362* | MIMIC-CXR *(not included, see below)* |

## Dataset Structure

```
VLM-SubtleBench/
├── README.md
├── qa.json                  # All QA pairs with metadata (including MIMIC)
└── images/
    ├── camerabench/         # CameraBench — viewpoint
    ├── changeit/            # ChangeIt — state, quantity
    │   ├── state/
    │   └── quantity/
    ├── coco/                # COCO + Gemini edits — attribute, existence
    │   ├── val2017/
    │   ├── val2017_attribute_edit/
    │   ├── val2017_existence_edit/
    │   ├── train2017/
    │   ├── train2017_attribute_edit/
    │   └── train2017_existence_edit/
    ├── emotion_videos/      # CREMA-D/RAVDESS/AFEW-VA/DAiSEE — emotion
    ├── levir/               # LEVIR-MCI — existence, quantity
    │   ├── existence/
    │   └── quantity/
    ├── megafruits/          # MegaFruits — quantity
    ├── mvtec_ad/            # MVTec-AD — attribute, state
    ├── mvtec_loco/          # MVTec-LOCO — quantity
    ├── synthetic/           # Synthetic primitives — attribute, existence, quantity, spatial, viewpoint
    │   ├── attribute/
    │   ├── existence/
    │   ├── quantity/
    │   ├── spatial/
    │   └── viewpoint/
    ├── ubc/                 # UBC — quantity
    ├── ucf_qnrf/            # UCF-QNRF — quantity
    ├── vlm4d/               # VLM4D — spatial, temporal
    │   ├── spatial/
    │   └── temporal/
    └── yt8m/                # YouTube-8M — action, quality, temporal
        ├── action/
        ├── quality/
        └── temporal/
```

## QA Entry Format

Each entry in `qa.json` has the following structure:

```json
{
  "image_1": "images/camerabench/processed_frames/1018.1.7/frame_1.png",
  "image_2": "images/camerabench/processed_frames/1018.1.7/frame_2.png",
  "question": "In which direction does the camera move from the first image to the second image?",
  "answer": "backward",
  "distractors": ["forward"],
  "has_caption": false,
  "caption": null,
  "split": "test",
  "metadata": {
    "category": "viewpoint",
    "domain": "natural",
    "source": "camerabench",
    "source_id": "0",
    "raw_folder": "camera_pairs",
    "generation_info": {
      "movement_type": "dolly-out",
      "original_labels": ["minimal-shaking", "complex-motion", "regular-speed", "dolly-out", "lead-tracking"],
      "video_path": "videos_gif/1018.1.7.gif"
    }
  }
}
```

| Field | Description |
|-------|-------------|
| `image_1`, `image_2` | Relative paths to the image pair |
| `question` | The comparative question about the two images |
| `answer` | Correct answer |
| `distractors` | Incorrect answer choices |
| `has_caption` | Whether a human-written difference caption is available |
| `caption` | Human-written description of the difference between the images (null if unavailable) |
| `split` | `test` or `val` |
| `metadata.category` | One of the 10 difference types |
| `metadata.domain` | Image domain (natural, industrial, aerial, synthetic, medical) |
| `metadata.source` | Source dataset identifier |
| `metadata.source_id` | Original ID within the source dataset |
| `metadata.generation_info` | Source-specific metadata (varies by source, may be null) |

## Medical Data (MIMIC-CXR)

The medical domain QA entries (362 attribute comparison pairs from MIMIC-CXR chest X-rays) are included in `qa.json`, but the corresponding images are not included due to [PhysioNet licensing requirements](https://physionet.org/content/mimic-cxr-jpg/2.1.0/).

To obtain the medical images:

1. Obtain credentialed access to [MIMIC-CXR-JPG v2.1.0](https://physionet.org/content/mimic-cxr-jpg/2.1.0/) on PhysioNet
2. Download the required chest X-ray images
3. Place them under `images/mimic/` following the path structure referenced in `qa.json` (e.g., `images/mimic/p15/p15592981/s55194630/{hash}.jpg`)

The image paths preserve the original MIMIC-CXR directory hierarchy, so files can be copied directly from a standard MIMIC-CXR-JPG download.

## Loading and Evaluation

To be added.

## Citation

To be added.