VLM-SubtleBench is a benchmark designed to evaluate VLMs on **subtle comparative reasoning** — detecting fine-grained differences between highly similar image pairs that are easy for humans but challenging for state-of-the-art VLMs. Unlike prior benchmarks restricted to natural image datasets, VLM-SubtleBench spans diverse domains including industrial, aerial, and medical imagery.

## Key Findings

- **Proprietary VLMs still struggle** with subtle visual comparison, leaving large gaps from human performance (best model: 77.8% vs. human: 95.5%).
- **Simple prompting strategies** such as chain-of-thought, grid layouts, and overlapping images yield only limited improvements.
- **VLMs are highly sensitive** to difficulty factors such as object size and count, with performance degrading sharply as scene complexity increases.
- **Explicit reasoning helps**: models with stronger inherent reasoning capabilities achieve higher accuracy across all difference types.

## Benchmark Summary

| | |
|---|---|
| **Total QA pairs** | 12,923 |
| **Difference types** | 10 |
| **Image domains** | 5 (Natural, Industrial, Aerial, Synthetic, Medical) |
| **Data sources** | 14 |
| **Human captions** | 1,200 |
| **Splits** | test (11,688) / val (1,235) |
| **Task format** | Multiple-choice VQA + Image Difference Captioning |

> **Note**: Medical domain images (MIMIC-CXR, 362 pairs) are not included due to licensing restrictions, but their QA entries remain in `qa.json`. See [Medical Data](#medical-data-mimic-cxr) below for instructions on how to obtain the images.

## Difference Types

VLM-SubtleBench covers 10 representative difference types, spanning from low-level visual variations to high-level semantic changes:

| Category | Description | Count |
|----------|-------------|-------|
| Attribute | Variations in object properties (color, size, shape) | 1,196 |
| State | Object condition changes (breakage, cracks, peeling) | 1,148 |
| Emotion | Comparative judgments of facial expression intensity | 1,108 |
| Temporal | Identifying which image depicts an earlier/later event | 1,117 |
| Spatial | Changes in arrangement or relative position | 1,235 |
| Existence | Whether an object has appeared or disappeared | 1,204 |
| Quantity | Changes in object count | 1,599 |
| Quality | Degradations such as blur, noise, or overexposure | 1,109 |
| Viewpoint | Camera perspective changes (pan, tilt, roll, orbit) | 1,734 |
| Action | Differences in human/object poses or activities | 1,111 |

## Image Domains and Sources

| Domain | Count | Sources |
|--------|-------|---------|
| Natural | 7,526 | CameraBench, ChangeIt, COCO, CREMA-D/RAVDESS/AFEW-VA/DAiSEE, MegaFruits, UCF-QNRF, VLM4D, YouTube-8M |
| Synthetic | 3,190 | Procedurally generated primitive scenes (circles, squares, triangles on white backgrounds) |
| Industrial | 1,202 | MVTec-AD, MVTec-LOCO |
| Aerial | 643 | LEVIR-MCI, UBC |
| Medical | 362* | MIMIC-CXR *(not included, see below)* |

## Dataset Structure

```
VLM-SubtleBench/
├── README.md
├── qa.json                 # All QA pairs with metadata (including MIMIC)
└── images/
    ├── camerabench/        # CameraBench — viewpoint
    ├── changeit/           # ChangeIt — state, quantity
    │   ├── state/
    │   └── quantity/
    ├── coco/               # COCO + Gemini edits — attribute, existence
    │   ├── val2017/
    │   ├── val2017_attribute_edit/
    │   ├── val2017_existence_edit/
    │   ├── train2017/
    │   ├── train2017_attribute_edit/
    │   └── train2017_existence_edit/
    ├── emotion_videos/     # CREMA-D/RAVDESS/AFEW-VA/DAiSEE — emotion
    ├── levir/              # LEVIR-MCI — existence, quantity
    │   ├── existence/
    │   └── quantity/
    ├── megafruits/         # MegaFruits — quantity
    ├── mvtec_ad/           # MVTec-AD — attribute, state
    ├── mvtec_loco/         # MVTec-LOCO — quantity
    ├── synthetic/          # Synthetic primitives — attribute, existence, quantity, spatial, viewpoint
    │   ├── attribute/
    │   ├── existence/
    │   ├── quantity/
    │   ├── spatial/
    │   └── viewpoint/
    ├── ubc/                # UBC — quantity
    ├── ucf_qnrf/           # UCF-QNRF — quantity
    ├── vlm4d/              # VLM4D — spatial, temporal
    │   ├── spatial/
    │   └── temporal/
    └── yt8m/               # YouTube-8M — action, quality, temporal
        ├── action/
        ├── quality/
        └── temporal/
```

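Image paths in `qa.json` are given relative to the dataset root, so resolving them only requires joining against wherever the repository was downloaded — a minimal sketch, where the root directory name and the helper function are illustrative assumptions:

```python
from pathlib import Path

DATASET_ROOT = Path("VLM-SubtleBench")  # adjust to your local download location

def resolve_images(entry):
    """Join the relative image_1/image_2 fields onto the dataset root."""
    return DATASET_ROOT / entry["image_1"], DATASET_ROOT / entry["image_2"]

# Minimal entry with only the fields used here.
entry = {
    "image_1": "images/camerabench/processed_frames/1018.1.7/frame_1.png",
    "image_2": "images/camerabench/processed_frames/1018.1.7/frame_2.png",
}
img1, img2 = resolve_images(entry)
```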
## QA Entry Format

Each entry in `qa.json`:

```json
{
  "image_1": "images/camerabench/processed_frames/1018.1.7/frame_1.png",
  "image_2": "images/camerabench/processed_frames/1018.1.7/frame_2.png",
  "question": "In which direction does the camera move from the first image to the second image?",
  "answer": "backward",
  "distractors": ["forward"],
  "has_caption": false,
  "caption": null,
  "split": "test",
  "metadata": {
    "category": "viewpoint",
    "domain": "natural",
    "source": "camerabench",
    "source_id": "0",
    "raw_folder": "camera_pairs",
    "generation_info": {
      "movement_type": "dolly-out",
      "original_labels": ["minimal-shaking", "complex-motion", "regular-speed", "dolly-out", "lead-tracking"],
      "video_path": "videos_gif/1018.1.7.gif"
    }
  }
}
```

| Field | Description |
|-------|-------------|
| `image_1`, `image_2` | Relative paths to the image pair |
| `question` | The comparative question about the two images |
| `answer` | Correct answer |
| `distractors` | Incorrect answer choices |
| `has_caption` | Whether a human-written difference caption is available |
| `caption` | Human-written description of the difference between the images (`null` if unavailable) |
| `split` | `test` or `val` |
| `metadata.category` | One of the 10 difference types |
| `metadata.domain` | Image domain (natural, industrial, aerial, synthetic, medical) |
| `metadata.source` | Source dataset identifier |
| `metadata.source_id` | Original ID within the source dataset |
| `metadata.generation_info` | Source-specific metadata (varies by source; may be `null`) |

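An entry's `answer` and `distractors` fields can be combined into a shuffled multiple-choice question — a minimal sketch, where the helper name and the fixed shuffling seed are illustrative, not part of the dataset:

```python
import random

def to_multiple_choice(entry, rng=None):
    """Build a shuffled choice list and track the correct answer's index."""
    rng = rng or random.Random(0)  # fixed seed for reproducible choice order
    choices = [entry["answer"], *entry["distractors"]]
    rng.shuffle(choices)
    return entry["question"], choices, choices.index(entry["answer"])

# Minimal entry mirroring the documented schema (unused fields omitted).
entry = {
    "question": "In which direction does the camera move from the first image to the second image?",
    "answer": "backward",
    "distractors": ["forward"],
}
question, choices, correct_idx = to_multiple_choice(entry)
assert choices[correct_idx] == "backward"
```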
## Medical Data (MIMIC-CXR)
The medical domain QA entries (362 attribute comparison pairs from MIMIC-CXR chest X-rays) are included in `qa.json`, but the corresponding images are not included due to [PhysioNet licensing requirements](https://physionet.org/content/mimic-cxr-jpg/2.1.0/). To obtain the images, complete PhysioNet's credentialing process and sign the data use agreement, then download MIMIC-CXR-JPG from the linked page.
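
Until the MIMIC-CXR images are obtained, the medical entries can be skipped using the documented `metadata.domain` field — a sketch assuming `qa.json` parses to a list of entry objects (its top-level layout is not specified here):

```python
import json

def drop_medical(entries):
    """Keep only entries whose images ship with the dataset (non-medical domains)."""
    return [e for e in entries if e["metadata"]["domain"] != "medical"]

# Demo on two minimal entries; with the real file, use:
#   entries = json.load(open("qa.json"))
sample = [
    {"question": "q1", "metadata": {"domain": "natural"}},
    {"question": "q2", "metadata": {"domain": "medical"}},
]
usable = drop_medical(sample)
assert len(usable) == 1
```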