---
size_categories:
- 1K<n<10K
---

# Grounded Visual Spatial Reasoning

## Dataset Summary

This dataset extends the [Visual Spatial Reasoning (VSR)](https://arxiv.org/pdf/2205.00363) dataset with **visual grounding annotations**: each caption is annotated with **COCO-category object mentions**, their **positions**, and **bounding boxes** in the image.
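For quick inspection, an instance can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hugging Face Hub; the repository ID below is a placeholder, not the real one:

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub dataset name.
ds = load_dataset("your-username/grounded-visual-spatial-reasoning", split="train")

sample = ds[0]
print(sample["caption"])            # caption with two COCO-category object mentions
print(sample["relation"])           # e.g., "on"
print(sample["ref_exp"]["labels"])  # e.g., ["cat", "table"]
```

The fields printed above are described in the next section.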

## Data Instance

Each sample instance has the following structure:

| Field                     | Type                | Description                                        |
|---------------------------|---------------------|----------------------------------------------------|
| `image_id`                | `int`               | COCO image ID                                      |
| `image_file`              | `string`            | COCO-style image filename                          |
| `image_link`              | `string`            | Direct COCO image URL                              |
| `width`                   | `int`               | Image width in pixels                              |
| `height`                  | `int`               | Image height in pixels                             |
| `caption`                 | `string`            | Caption with two COCO-category object mentions     |
| `label`                   | `int`               | Optional class label (e.g., binary task flag)      |
| `relation`                | `string`            | Spatial relation (e.g., "on", "under")             |
| `ref_exp.labels`          | `list[string]`      | Object labels from COCO categories                 |
| `ref_exp.label_positions` | `list[list[int]]`   | (start, end) position of each label in the caption |
| `ref_exp.bboxes`          | `list[list[float]]` | Bounding boxes `[x, y, w, h]` in pixels            |
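
To see how these fields fit together, here is a minimal sketch that recovers each object mention from the caption via its `(start, end)` span and pairs it with its bounding box. The instance below is hypothetical, and the sketch assumes `label_positions` holds character offsets into `caption` and that `labels`, `label_positions`, and `bboxes` are aligned index-wise:

```python
# Hypothetical instance following the schema above.
sample = {
    "caption": "The cat is under the table.",
    "relation": "under",
    "ref_exp": {
        "labels": ["cat", "table"],
        "label_positions": [[4, 7], [21, 26]],  # (start, end) character offsets
        "bboxes": [
            [12.0, 40.5, 80.0, 60.0],   # [x, y, w, h] in pixels
            [0.0, 90.0, 200.0, 110.0],
        ],
    },
}

ref = sample["ref_exp"]
for label, (start, end), bbox in zip(ref["labels"], ref["label_positions"], ref["bboxes"]):
    mention = sample["caption"][start:end]
    assert mention == label, (mention, label)  # span text should match the label
    print(f"{mention!r} spans chars [{start}, {end}) -> bbox {bbox}")
```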
## Download Images

To download the images, follow the instructions in the [VSR official GitHub repo](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data).
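
Alternatively, since each instance carries a direct `image_link`, a single image can be fetched on the fly. A minimal sketch using `requests` and `Pillow` (both assumed installed); the URL is a placeholder in the COCO style:

```python
import io

import requests
from PIL import Image

# Placeholder URL in the style of the `image_link` field.
image_link = "http://images.cocodataset.org/val2017/000000000139.jpg"

resp = requests.get(image_link, timeout=30)
resp.raise_for_status()
img = Image.open(io.BytesIO(resp.content))
print(img.size)  # should match the instance's (width, height)
```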

## Citation

If you use this dataset, please cite the original **Visual Spatial Reasoning** paper:

```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={Transactions of the Association for Computational Linguistics},
  year={2023}
}
```