---
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
  - image-to-text
language:
  - en
tags:
  - vlm
  - benchmark
  - comparative-reasoning
  - subtle-difference
  - image-comparison
  - multi-image
size_categories:
  - 10K<n<100K
---

# VLM-SubtleBench

**VLM-SubtleBench: How Far Are VLMs from Human-Level Subtle Comparative Reasoning?**

The ability to distinguish subtle differences between visually similar images is essential for diverse domains such as industrial anomaly detection, medical imaging, and aerial surveillance. While comparative reasoning benchmarks for vision-language models (VLMs) have recently emerged, they primarily focus on images with large, salient differences and fail to capture the nuanced reasoning required for real-world applications.

VLM-SubtleBench is a benchmark designed to evaluate VLMs on subtle comparative reasoning — detecting fine-grained differences between highly similar image pairs that are easy for humans but challenging for state-of-the-art VLMs. Unlike prior benchmarks restricted to natural image datasets, VLM-SubtleBench spans diverse domains including industrial, aerial, and medical imagery.

## Key Findings

- Proprietary VLMs still struggle with subtle visual comparison, leaving a large gap to human performance (best model: 77.8% vs. human: 95.5%).
- Simple prompting strategies such as chain-of-thought, grid layouts, and overlapping images yield only limited improvements.
- VLMs are highly sensitive to difficulty factors such as object size and count, with performance degrading sharply as scene complexity increases.
- Explicit reasoning helps: models with stronger inherent reasoning capabilities achieve higher accuracy across all difference types.

## Benchmark Summary

| Statistic | Value |
|---|---|
| Total QA pairs | 12,923 |
| Difference types | 10 |
| Image domains | 5 (Natural, Synthetic, Industrial, Aerial, Medical) |
| Data sources | 14 |
| Human captions | 1,200 |
| Splits | test (11,688) / val (1,235) |
| Task format | Multiple-choice VQA + Image Difference Captioning |

**Note:** Medical domain images (MIMIC-CXR, 362 pairs) are not included due to licensing restrictions, but their QA entries are included in `qa.json`. See *Medical Data (MIMIC-CXR)* below for instructions on how to obtain the images.

## Difference Types

VLM-SubtleBench covers 10 representative difference types, spanning from low-level visual variations to high-level semantic changes:

| Category | Description | Count |
|---|---|---|
| Attribute | Variations in object properties (color, size, shape) | 1,196 |
| State | Object condition changes (breakage, cracks, peeling) | 1,148 |
| Emotion | Comparative judgments of facial expression intensity | 1,108 |
| Temporal | Identifying which image depicts an earlier/later event | 1,117 |
| Spatial | Changes in arrangement or relative position | 1,235 |
| Existence | Whether an object has appeared or disappeared | 1,204 |
| Quantity | Changes in object count | 1,599 |
| Quality | Degradations such as blur, noise, or overexposure | 1,109 |
| Viewpoint | Camera perspective changes (pan, tilt, roll, orbit) | 1,734 |
| Action | Differences in human/object poses or activities | 1,111 |

## Image Domains and Sources

| Domain | Count | Sources |
|---|---|---|
| Natural | 7,526 | CameraBench, ChangeIt, COCO, CREMA-D/RAVDESS/AFEW-VA/DAiSEE, MegaFruits, UCF-QNRF, VLM4D, YouTube-8M |
| Synthetic | 3,190 | Procedurally generated primitive scenes (circles, squares, triangles on white backgrounds) |
| Industrial | 1,202 | MVTec-AD, MVTec-LOCO |
| Aerial | 643 | LEVIR-MCI, UBC |
| Medical | 362* | MIMIC-CXR (not included, see below) |

## Dataset Structure

```
VLM-SubtleBench/
├── README.md
├── qa.json                          # All QA pairs with metadata (including MIMIC)
└── images/
    ├── camerabench/                 # CameraBench — viewpoint
    ├── changeit/                    # ChangeIt — state, quantity
    │   ├── state/
    │   └── quantity/
    ├── coco/                        # COCO + Gemini edits — attribute, existence
    │   ├── val2017/
    │   ├── val2017_attribute_edit/
    │   ├── val2017_existence_edit/
    │   ├── train2017/
    │   ├── train2017_attribute_edit/
    │   └── train2017_existence_edit/
    ├── emotion_videos/              # CREMA-D/RAVDESS/AFEW-VA/DAiSEE — emotion
    ├── levir/                       # LEVIR-MCI — existence, quantity
    │   ├── existence/
    │   └── quantity/
    ├── megafruits/                  # MegaFruits — quantity
    ├── mvtec_ad/                    # MVTec-AD — attribute, state
    ├── mvtec_loco/                  # MVTec-LOCO — quantity
    ├── synthetic/                   # Synthetic primitives — attribute, existence, quantity, spatial, viewpoint
    │   ├── attribute/
    │   ├── existence/
    │   ├── quantity/
    │   ├── spatial/
    │   └── viewpoint/
    ├── ubc/                         # UBC — quantity
    ├── ucf_qnrf/                    # UCF-QNRF — quantity
    ├── vlm4d/                       # VLM4D — spatial, temporal
    │   ├── spatial/
    │   └── temporal/
    └── yt8m/                        # YouTube-8M — action, quality, temporal
        ├── action/
        ├── quality/
        └── temporal/
```

## QA Entry Format

Each entry in `qa.json` has the following structure:

```json
{
  "image_1": "images/camerabench/processed_frames/1018.1.7/frame_1.png",
  "image_2": "images/camerabench/processed_frames/1018.1.7/frame_2.png",
  "question": "In which direction does the camera move from the first image to the second image?",
  "answer": "backward",
  "distractors": ["forward"],
  "has_caption": false,
  "caption": null,
  "split": "test",
  "metadata": {
    "category": "viewpoint",
    "domain": "natural",
    "source": "camerabench",
    "source_id": "0",
    "raw_folder": "camera_pairs",
    "generation_info": {
      "movement_type": "dolly-out",
      "original_labels": ["minimal-shaking", "complex-motion", "regular-speed", "dolly-out", "lead-tracking"],
      "video_path": "videos_gif/1018.1.7.gif"
    }
  }
}
```
| Field | Description |
|---|---|
| `image_1`, `image_2` | Relative paths to the image pair |
| `question` | The comparative question about the two images |
| `answer` | Correct answer |
| `distractors` | Incorrect answer choices |
| `has_caption` | Whether a human-written difference caption is available |
| `caption` | Human-written description of the difference between the images (`null` if unavailable) |
| `split` | `test` or `val` |
| `metadata.category` | One of the 10 difference types |
| `metadata.domain` | Image domain (`natural`, `industrial`, `aerial`, `synthetic`, `medical`) |
| `metadata.source` | Source dataset identifier |
| `metadata.source_id` | Original ID within the source dataset |
| `metadata.generation_info` | Source-specific metadata (varies by source; may be `null`) |
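Each entry carries everything needed for the multiple-choice VQA task. A minimal sketch of turning an entry into a prompt and scoring a model's letter answer (the helper names, option formatting, and scoring convention are illustrative assumptions, not an official evaluation protocol):

```python
import random

def build_mcq(entry, seed=0):
    """Shuffle the correct answer in with its distractors and return
    (prompt_text, correct_option_letter)."""
    options = [entry["answer"]] + list(entry["distractors"])
    random.Random(seed).shuffle(options)
    letters = "ABCDEFGH"[: len(options)]
    body = "\n".join(f"({l}) {o}" for l, o in zip(letters, options))
    prompt = entry["question"] + "\n" + body
    return prompt, letters[options.index(entry["answer"])]

def is_correct(model_letter, correct_letter):
    """Compare a model's letter choice against the gold letter."""
    return model_letter.strip().upper() == correct_letter

# Example using the camerabench entry shown above (abridged):
entry = {
    "question": "In which direction does the camera move from the "
                "first image to the second image?",
    "answer": "backward",
    "distractors": ["forward"],
}
prompt, correct = build_mcq(entry)
```

Fixing the shuffle seed keeps option order reproducible across evaluation runs.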

## Medical Data (MIMIC-CXR)

The medical domain QA entries (362 attribute comparison pairs from MIMIC-CXR chest X-rays) are included in qa.json, but the corresponding images are not included due to PhysioNet licensing requirements.

To obtain the medical images:

1. Obtain credentialed access to MIMIC-CXR-JPG v2.1.0 on PhysioNet.
2. Download the required chest X-ray images.
3. Place them under `images/mimic/` following the path structure referenced in `qa.json` (e.g., `images/mimic/p15/p15592981/s55194630/{hash}.jpg`).

The image paths preserve the original MIMIC-CXR directory hierarchy, so files can be copied directly from a standard MIMIC-CXR-JPG download.
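Because the hierarchy is preserved, mapping a `qa.json` path to a file in a local download is a simple prefix swap. A sketch, assuming the standard MIMIC-CXR-JPG layout that stores images under a `files/` directory (the root path and file name below are placeholders, not real identifiers):

```python
from pathlib import Path

def mimic_source_path(qa_image_path, mimic_root):
    """Map an images/mimic/<pXX>/<patient>/<study>/<file>.jpg path from
    qa.json to the matching file in a MIMIC-CXR-JPG download, which keeps
    the same hierarchy under <root>/files/."""
    rel = Path(qa_image_path).relative_to("images/mimic")
    return Path(mimic_root) / "files" / rel

# Placeholder file name; real names are the hashes referenced in qa.json.
src = mimic_source_path(
    "images/mimic/p15/p15592981/s55194630/example.jpg",
    "/data/mimic-cxr-jpg-2.1.0",
)
```

Copying each resolved source path to its `images/mimic/...` destination completes the dataset locally.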

## Loading and Evaluation

To be added.

## Citation

To be added.