---
license: mit
size_categories:
- 100M<n<1B
task_categories:
- image-text-to-text
- object-detection
language:
- en
tags:
- art
---
# AmbiBench Metadata

This dataset provides the metadata used in the AmbiBench benchmark.
Each row pairs an ambiguous image or video sample with a question–answer pair, derived from the benchmark design.
## Columns

### `question`
- The natural-language question posed to the model.
### `answer`
- The reference answer to the question.
- May include:
  - A single object name (`"snake"`),
  - A semicolon-separated pair for bistable/multi-scene samples (`"duck; rabbit"`),
  - Or multiple objects/scenes (`"bear, deer, wolf, bird"`).
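The three `answer` formats above can be separated with a small helper. This is a sketch; the function name `parse_answer` is illustrative and not part of the dataset:

```python
def parse_answer(answer: str) -> list[str]:
    """Split a reference answer into individual object/scene names.

    Semicolons separate bistable/multi-scene interpretations
    ("duck; rabbit"); commas separate multiple objects
    ("bear, deer, wolf, bird"); a single name has no separator.
    """
    sep = ";" if ";" in answer else ","
    return [part.strip() for part in answer.split(sep) if part.strip()]


print(parse_answer("snake"))                   # ['snake']
print(parse_answer("duck; rabbit"))            # ['duck', 'rabbit']
print(parse_answer("bear, deer, wolf, bird"))  # ['bear', 'deer', 'wolf', 'bird']
```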
### `file_name`
- Cleaned filename of the associated image or video.
- Any inline bounding box tags in the original filename (e.g. `2120_netimg_hidden_face_illusion_[1152,807,1392,1263].jpg`) are removed, and the coordinates are stored separately in `bbox`.
- ⚠️ Special case: for `motion_video` samples, the stored file names end with `.png` as placeholders. For actual usage, replace `.png` with `.mp4` to access the corresponding video file.
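The placeholder swap described above can be done with `pathlib`, for example. A minimal sketch; the helper name and the boolean flag are assumptions, not part of the dataset:

```python
from pathlib import Path


def resolve_media_path(file_name: str, is_motion_video: bool) -> str:
    """Return the actual media file for a metadata row.

    motion_video samples store a .png placeholder that must be
    swapped for the corresponding .mp4 video file.
    """
    if is_motion_video and file_name.endswith(".png"):
        return str(Path(file_name).with_suffix(".mp4"))
    return file_name


print(resolve_media_path("clip_0042.png", is_motion_video=True))                       # clip_0042.mp4
print(resolve_media_path("2120_netimg_hidden_face_illusion.jpg", is_motion_video=False))
```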
### `bbox`
- A list of bounding box coordinates `[x1, y1, x2, y2]` if required by the question.
- Empty list (`[]`) if no bounding box is provided.
### `type`
- Combined label = ambiguity category (from the AmbiBench taxonomy) + task type.
- Categories include: Camouflaged, Bistable, Hybrid, Color, Multi-view, Geometric, Motion, Multi-scene, Mixed.
- Task types include: open-ended, localization (bbox), region.
- Example values: `"camouflaged_open"`, `"mix_bbox"`, `"mix_highlighted"`
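If the category and task are needed separately, the combined label can be split at its last underscore. This sketch assumes the task suffix is a single underscore-free token (as in the example values `open`, `bbox`, `highlighted`); labels with other shapes may need different handling:

```python
def split_type(type_label: str) -> tuple[str, str]:
    """Split a combined type label into (category, task).

    Assumption: the task suffix contains no underscore, so
    splitting at the last underscore recovers both parts.
    """
    category, _, task = type_label.rpartition("_")
    return category, task


print(split_type("camouflaged_open"))  # ('camouflaged', 'open')
print(split_type("mix_bbox"))          # ('mix', 'bbox')
```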
## Notes
- All entries include consistent fields: `question`, `answer`, `file_name`, `bbox`, `type`.
- `motion_video` entries must be linked to the corresponding `.mp4` files (replace the `.png` extension).
- This metadata is suitable for training and evaluating vision–language models on ambiguous image understanding.