Commit: Add files using upload-large-folder tool

Files changed:
- README.md +160 -0
- data/images/000000001355.jpg +3 -0
- data/images/000000003236.jpg +3 -0
- data/images/000000020489.jpg +3 -0
- data/images/000000022974.jpg +3 -0
- data/images/000000039811.jpg +3 -0
- data/images/000000074080.jpg +3 -0
- data/images/000000077417.jpg +3 -0
- data/images/000000077784.jpg +3 -0
- data/images/000000080691.jpg +3 -0
- data/images/000000100812.jpg +3 -0
- data/images/000000113205.jpg +3 -0
- data/images/000000129108.jpg +3 -0
- data/images/000000153368.jpg +3 -0
- data/images/000000160531.jpg +3 -0
- data/images/000000183260.jpg +3 -0
- data/images/000000195351.jpg +3 -0
- data/images/000000205960.jpg +3 -0
- data/images/000000211850.jpg +3 -0
- data/images/000000216531.jpg +3 -0
- data/images/000000224049.jpg +3 -0
- data/images/000000294957.jpg +3 -0
- data/images/000000296581.jpg +3 -0
- data/images/000000300070.jpg +3 -0
- data/images/000000301282.jpg +3 -0
- data/images/000000308265.jpg +3 -0
- data/images/000000313047.jpg +3 -0
- data/images/000000327413.jpg +3 -0
- data/images/000000334220.jpg +3 -0
- data/images/000000368409.jpg +3 -0
- data/images/000000369860.jpg +3 -0
- data/images/000000373945.jpg +3 -0
- data/images/000000379172.jpg +3 -0
- data/images/000000416356.jpg +3 -0
- data/images/000000418812.jpg +3 -0
- data/images/000000460266.jpg +3 -0
- data/images/000000465265.jpg +3 -0
- data/images/000000505188.jpg +3 -0
- data/images/000000516906.jpg +3 -0
- data/images/000000519432.jpg +3 -0
- data/images/000000525700.jpg +3 -0
- data/images/000000527283.jpg +3 -0
- data/images/000000527649.jpg +3 -0
- data/images/000000530030.jpg +3 -0
- data/images/000000532286.jpg +3 -0
- data/images/000000540840.jpg +3 -0
- data/images/b1bbfee4-8499-4c50-9de1-ade61d05d481_011570.jpeg +3 -0
- data/images/bae603bd-dc3d-46a4-90b7-9c1e2bcd3970_015099.jpeg +3 -0
- data/multihop_test_4500.json +0 -0
- data/multihop_train_6791.json +0 -0
README.md (ADDED)
---
license: apache-2.0
task_categories:
- visual-question-answering
- image-text-to-text
language:
- en
tags:
- spatial-reasoning
- multi-hop
- grounding
- vision-language
- benchmark
- VQA
- bounding-box
pretty_name: MultihopSpatial
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/multihop_train_6791.json
  - split: test
    path: data/multihop_test_4500.json
---

# MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Models

<p align="center">
  <img src="teaser_2.png" width="100%" alt="MultihopSpatial Benchmark Overview">
</p>

<p align="center">
  <a href="https://youngwanlee.github.io/multihopspatial_private"><b>Project Page</b></a> |
  <a href="https://arxiv.org/abs/2603.18892"><b>Paper</b></a>
</p>

## Overview

**MultihopSpatial** is a benchmark designed to evaluate whether vision-language models (VLMs) can perform robust **multi-hop compositional spatial reasoning**. Unlike existing benchmarks, which assess only single-step spatial relations, MultihopSpatial pairs queries requiring **1 to 3 reasoning hops** with **visual grounding evaluation**, exposing a critical blind spot: models that achieve high multiple-choice accuracy often fail to correctly localize the objects they reason about.

All 4,500 benchmark QA pairs and bounding boxes are **strictly annotated by ten trained human experts**, with high inter-rater agreement (Krippendorff's α = 0.90).

## Key Features

- **Multi-hop Composition**: Tests 1-hop, 2-hop, and 3-hop sequential spatial reasoning, mirroring real-world embodied-AI needs.
- **Grounded Evaluation**: Addresses the "lucky guess" problem: models must both select the correct answer AND localize it via bounding box (Acc@50IoU; see the sketch after this list).
- **Perspective-taking**: Includes both ego-centric and exo-centric viewpoints.
- **Three Spatial Categories**: Attribute (ATT), Position (POS), and Relation (REL), composable into multi-hop questions.
- **Training Data**: MultihopSpatial-Train (6,791 samples) supports post-training via reinforcement learning (e.g., GRPO).
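Concretely, a grounded prediction is scored on both the choice and the box. Here is a minimal Python sketch of that check, assuming the dataset's `[x, y, width, height]` pixel-box format and reading Acc@50IoU as "answer correct AND IoU ≥ 0.5"; the function names are illustrative, and the official evaluation script may differ in details.

```python
def iou_xywh(box_a, box_b):
    """Intersection-over-Union for two [x, y, width, height] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def grounded_correct(pred_choice, gold_choice, pred_box, gold_box, thr=0.5):
    """A prediction counts only if the choice matches AND the box overlaps enough."""
    return pred_choice == gold_choice and iou_xywh(pred_box, gold_box) >= thr
```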
## Dataset Statistics

### MultihopSpatial

|           | **Ego-centric** | **Exo-centric** | **Total** |
|-----------|:---:|:---:|:---:|
| **1-hop** | 750 | 750 | 1,500 |
| **2-hop** | 750 | 750 | 1,500 |
| **3-hop** | 750 | 750 | 1,500 |
| **Total** | 2,250 | 2,250 | **4,500** |

### Spatial Reasoning Compositions

| **Hop** | **Categories** |
|---|---|
| 1-hop | ATT, POS, REL |
| 2-hop | ATT+POS, ATT+REL, POS+REL |
| 3-hop | ATT+POS+REL |
## Data Fields

| Field | Type | Description |
|---|---|---|
| `id` | `int` | Unique sample identifier |
| `image_path` | `string` | Image filename (e.g., `000000303219.jpg` or `01ce4fd6-..._002114.jpeg`) |
| `image_resolution` | `string` | Image resolution in `WxH` format |
| `view` | `string` | Viewpoint type: `"ego"` (ego-centric) or `"exo"` (exo-centric) |
| `hop` | `string` | Reasoning complexity: `"1hop"`, `"2hop"`, or `"3hop"` |
| `question` | `string` | The spatial reasoning question in plain text, with multiple-choice options |
| `question_tag` | `string` | The same question with spatial-reasoning type tags (`<ATT>`, `<POS>`, `<REL>`) annotated inline |
| `answer` | `string` | The correct answer choice (e.g., `"(c) frame of the reed picture"`) |
| `bbox` | `list[float]` | Bounding box `[x, y, width, height]` of the answer object, in pixel coordinates |

### `question` vs. `question_tag`

- **`question`**: Clean natural-language question, e.g.,
  > *"From the perspective of the woman holding the remote control, which object is on her right?"*

- **`question_tag`**: The same question with tags marking which type of spatial reasoning each part requires, e.g.,
  > *"From the perspective of the woman holding the remote control, which object is **\<POS\>on her right\</POS\>**?"*

Tags: `<ATT>...</ATT>` (Attribute), `<POS>...</POS>` (Position), `<REL>...</REL>` (Relation).
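When you want to break a question down by reasoning type, the inline tags are straightforward to parse. A minimal sketch (the helper name is ours, and it assumes the tags are well formed and non-nested, as in the examples above):

```python
import re

# Matches an opening <ATT>/<POS>/<REL> tag, its span text, and the matching close tag.
TAG_RE = re.compile(r"<(ATT|POS|REL)>(.*?)</\1>", re.DOTALL)

def extract_reasoning_spans(question_tag):
    """Return (tag, span_text) pairs in order of appearance."""
    return TAG_RE.findall(question_tag)

spans = extract_reasoning_spans(
    "From the perspective of the woman holding the remote control, "
    "which object is <POS>on her right</POS>?"
)
print(spans)  # [('POS', 'on her right')]
```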
## Data Structure

```
MultihopSpatial/
├── README.md
├── teaser_2.png
├── data/
│   ├── multihop_test_4500.json
│   ├── multihop_train_6791.json
│   └── images/
│       ├── 000000303219.jpg
│       ├── 000000022612.jpg
│       ├── 01ce4fd6-197a-4792-8778-775b03780369_002114.jpeg
│       └── ...
```
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("YOUR_HF_REPO/MultihopSpatial")

# Access splits
test_data = dataset["test"]
train_data = dataset["train"]

# Example
sample = test_data[0]
print(sample["question"])
# "From the perspective of the woman holding the remote control, which object is on her right? ..."
print(sample["answer"])
# "(c) frame of the reed picture"
print(sample["bbox"])
# [52.86, 38.7, 70.95, 97.83]
print(sample["hop"])
# "1hop"
```
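Because `image_path` stores only a filename, you join it with the `data/images/` directory yourself. Below is a minimal sketch for inspecting a sample with its ground-truth box drawn; it assumes a local checkout of this repository, Pillow installed, and the `test_data` split from the example above (`DATA_ROOT` and `show_sample` are illustrative names):

```python
import os
from PIL import Image, ImageDraw

DATA_ROOT = "data/images"  # adjust to where you cloned this repository

def show_sample(sample, out_path="sample_with_bbox.png"):
    """Draw the ground-truth bbox on a sample's image and save it."""
    image = Image.open(os.path.join(DATA_ROOT, sample["image_path"])).convert("RGB")
    x, y, w, h = sample["bbox"]  # [x, y, width, height] in pixel coordinates
    ImageDraw.Draw(image).rectangle([x, y, x + w, y + h], outline="red", width=3)
    print(sample["question"])
    print("Answer:", sample["answer"])
    image.save(out_path)

show_sample(test_data[0])
```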
## Image Sources & License

| Component | License | Source |
|---|---|---|
| **VQA Annotations** (questions, answers, bounding boxes) | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) | MultihopSpatial (this work) |
| **COCO Images** | [COCO Terms of Use](https://cocodataset.org/#termsofuse) | [MS-COCO](https://cocodataset.org/) |
| **PACO-Ego4D Images** | [Ego4D License](https://ego4ddataset.com/ego4d-data/license/) | [PACO](https://github.com/facebookresearch/paco) / [Ego4D](https://ego4ddataset.com/) |

> The images retain their original licenses. Our VQA annotations (questions, answers, bounding boxes, and metadata) are released under the Apache 2.0 License.

## Citation

```bibtex
@article{lee2025multihopspatial,
  title={MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Models},
  author={Lee, Youngwan and Jang, Soojin and Cho, Yoorhim and Lee, Seunghwan and Lee, Yong-Ju and Hwang, Sung Ju},
  journal={arXiv preprint arXiv:2603.18892},
  year={2025}
}
```

## Contact

For questions or issues, please visit the [Project Page](https://youngwanlee.github.io/multihopspatial_private) or open an issue in this repository.
(All image files listed above were added as Git LFS pointers. The diffs for data/multihop_test_4500.json and data/multihop_train_6791.json are too large to render; see the raw files.)