Update README.md

language:
- en
tags:
- art
---

# AmbiBench Metadata

This dataset provides the **metadata** used in the AmbiBench benchmark.
Each row corresponds to an ambiguous image or video sample paired with a question–answer pair, derived from the benchmark design.

---

## Columns

### `question`

- The natural-language question posed to the model.
- Questions are aligned with AmbiBench’s four task types:
  - **Open-ended**: asks for all possible interpretations (e.g., “What hidden objects can you see in the image?”).
  - **Multiple-choice**: asks the model to select all valid interpretations from a set of candidates.
  - **Ambiguous localization**: asks for bounding boxes around the ambiguous regions.
  - **Local region description**: asks for multiple interpretations of a highlighted region.

### `answer`

- The reference answer to the `question`.
- May include:
  - a single object name (`"snake"`),
  - a semicolon-separated pair for bistable/multi-scene samples (`"duck; rabbit"`),
  - or multiple objects/scenes (`"bear, deer, wolf, bird"`).

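For illustration, the answer formats above can be normalized into a list of interpretations with a small helper. This is a sketch: the dataset card shows semicolons and commas as separators in its examples, but the exact delimiter convention is inferred, not specified.

```python
import re


def parse_answer(answer: str) -> list[str]:
    """Split a reference answer into individual interpretations.

    Handles a single name ("snake"), a semicolon-separated pair
    ("duck; rabbit"), and comma-separated lists ("bear, deer, wolf, bird").
    """
    return [part.strip() for part in re.split(r"[;,]", answer) if part.strip()]


print(parse_answer("duck; rabbit"))            # ['duck', 'rabbit']
print(parse_answer("bear, deer, wolf, bird"))  # ['bear', 'deer', 'wolf', 'bird']
```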
### `file_name`

- Cleaned filename of the associated image or video.
- Any inline bounding-box tags in the original filename (e.g.
  `2120_netimg_hidden_face_illusion_[1152,807,1392,1263].jpg`)
  are removed, and the coordinates are stored separately in `bbox`.
- ⚠️ **Special case, `motion_video` samples**: the stored file names end in `.png` as placeholders.
  To access the corresponding video file, replace `.png` with `.mp4`.

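The `.png` → `.mp4` substitution for `motion_video` samples can be done mechanically. A minimal sketch; the function name and the substring check against the `type` field are my own convention, not part of the dataset:

```python
def resolve_media_path(file_name: str, sample_type: str) -> str:
    """Return the on-disk media filename for a sample.

    motion_video samples store a '.png' placeholder in `file_name`;
    the actual asset is the matching '.mp4' file.
    (Assumption: motion samples can be detected by "motion" in `type`.)
    """
    if "motion" in sample_type and file_name.endswith(".png"):
        return file_name[: -len(".png")] + ".mp4"
    return file_name


print(resolve_media_path("0421_motion_clip.png", "motion_open"))     # 0421_motion_clip.mp4
print(resolve_media_path("0042_camo_bird.jpg", "camouflaged_open"))  # 0042_camo_bird.jpg
```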
### `bbox`

- A list of bounding-box coordinates `[x1, y1, x2, y2]` if required by the question.
- An empty list (`[]`) if no bounding box is provided.

### `type`

- Combined label: **ambiguity category** (from the AmbiBench taxonomy) + **task type**.
- Categories include:
  *Camouflaged, Bistable, Hybrid, Color, Multi-view, Geometric, Motion, Multi-scene, Mixed*.
- Task types include: *open-ended, localization (bbox), region*.
- Example values:
  - `"camouflaged_open"`
  - `"mix_bbox"`
  - `"mix_highlighted"`

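The combined label can be split back into its two parts. A sketch, assuming the task suffix is always the final underscore-separated token; that holds for the example values above but is unverified for any multi-token category names:

```python
def split_type(type_label: str) -> tuple[str, str]:
    """Split a combined type label into (ambiguity_category, task_type),
    e.g. 'camouflaged_open' -> ('camouflaged', 'open')."""
    category, _, task = type_label.rpartition("_")
    return category, task


print(split_type("camouflaged_open"))  # ('camouflaged', 'open')
print(split_type("mix_bbox"))          # ('mix', 'bbox')
```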
---
## Notes

- All entries include consistent fields: `question`, `answer`, `file_name`, `bbox`, `type`.
- `motion_video` entries must be linked to the corresponding `.mp4` files (replace the `.png` extension).
- This metadata is suitable for training and evaluating vision–language models on ambiguous image understanding.

---