Commit: Upload folder using huggingface_hub

Files added: README.md (+68), arkitscenes.zip, scannet.zip, scannetpp.zip, test-00000-of-00001.parquet

README.md
---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Video
- Text
size_categories:
- 1K<n<10K
---

<a href="https://arxiv.org/abs/2412.14171" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-thinking--in--space-red?logo=arxiv" height="20" />
</a>
<a href="https://vision-x-nyu.github.io/thinking-in-space.github.io/" target="_blank">
  <img alt="Website" src="https://img.shields.io/badge/🌎_Website-thinking--in--space-blue.svg" height="20" />
</a>
<a href="https://github.com/vision-x-nyu/thinking-in-space" target="_blank" style="display: inline-block; margin-right: 10px;">
  <img alt="GitHub Code" src="https://img.shields.io/badge/Code-thinking--in--space-white?&logo=github&logoColor=white" />
</a>

# Visual Spatial Intelligence Benchmark (VSI-Bench)

This repository contains the Visual Spatial Intelligence Benchmark (VSI-Bench), introduced in [Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces](https://arxiv.org/abs/2412.14171).

## Files

The `test-00000-of-00001.parquet` file contains the complete dataset annotations and pre-loaded images, ready for processing with HF Datasets. It can be loaded with the following code:

```python
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
```
Additionally, we provide the source videos as `*.zip` archives (`arkitscenes.zip`, `scannet.zip`, and `scannetpp.zip`).

## Dataset Description

VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. It comprises over 5,000 question-answer pairs derived from 288 real videos, sourced from the validation sets of the public indoor 3D scene reconstruction datasets `ScanNet`, `ScanNet++`, and `ARKitScenes`. The videos cover diverse environments, including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories), across multiple geographic regions. By repurposing these existing 3D reconstruction and understanding datasets, VSI-Bench benefits from accurate object-level annotations, which are used in question generation and could support future studies exploring the connection between MLLMs and 3D reconstruction.

The dataset contains the following fields:

| Field Name | Description |
| :--------- | :---------- |
| `idx` | Global index of the entry in the dataset |
| `dataset` | Video source: `scannet`, `arkitscenes`, or `scannetpp` |
| `scene_name` | Scene (video) name for each question-answer pair |
| `question_type` | The type of task for the question |
| `question` | Question asked about the video |
| `options` | Choices for the question (multiple-choice questions only) |
| `ground_truth` | Ground-truth answer for the question |

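As an illustration of how these fields fit together, the sketch below formats an entry as a text prompt, branching on whether `options` is present. The example rows are hypothetical; only the field names come from the table above.

```python
def build_prompt(row: dict) -> str:
    """Format a VSI-Bench entry as a question prompt."""
    if row.get("options"):  # multiple-choice question
        lettered = "\n".join(
            f"{chr(65 + i)}. {opt}" for i, opt in enumerate(row["options"])
        )
        return f"{row['question']}\n{lettered}\nAnswer with the option letter."
    # numerical-answer question: no options are provided
    return f"{row['question']}\nAnswer with a single number."

# Hypothetical rows mirroring the documented schema.
mcq_row = {
    "question": "Which of these objects is closest to the sofa?",
    "options": ["table", "lamp", "chair", "rug"],
}
num_row = {"question": "How many chairs are in the room?", "options": None}
```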
## Evaluation

VSI-Bench evaluates performance using two metrics: for multiple-choice questions, we use `Accuracy`, calculated based on exact matches; for numerical-answer questions, we introduce a new metric, `MRA (Mean Relative Accuracy)`, to assess how closely model predictions align with ground-truth values.

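The linked implementation below is authoritative; as a rough sketch of the idea behind MRA, a numerical prediction is scored by the fraction of confidence thresholds at which its relative error stays within the allowed band (the specific threshold set used here is an assumption):

```python
def mean_relative_accuracy(pred: float, gt: float) -> float:
    """Sketch of MRA for one numerical answer: the average, over
    confidence thresholds theta, of 1[|pred - gt| / |gt| < 1 - theta]."""
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.50, ..., 0.95 (assumed)
    rel_err = abs(pred - gt) / abs(gt)
    return sum(rel_err < 1 - t for t in thresholds) / len(thresholds)
```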
We provide an out-of-the-box evaluation of VSI-Bench in our [GitHub repository](https://github.com/vision-x-nyu/thinking-in-space), including the [metrics implementation](https://github.com/vision-x-nyu/thinking-in-space/blob/main/lmms_eval/tasks/vsibench/utils.py#L109C1-L155C36) used in our framework. For further details, please refer to our paper and GitHub repository.

## Citation

```bibtex
@article{yang2024think,
  title={{Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces}},
  author={Yang, Jihan and Yang, Shusheng and Gupta, Anjali and Han, Rilyn and Fei-Fei, Li and Xie, Saining},
  year={2024},
  journal={arXiv preprint arXiv:2412.14171},
}
```
arkitscenes.zip (ADDED, Git LFS pointer)

    version https://git-lfs.github.com/spec/v1
    oid sha256:005232fa20ccfa287255ca96c4d0c0c0863c24bdc1a40a89165b75f509bf4907
    size 1812227830

scannet.zip (ADDED, Git LFS pointer)

    version https://git-lfs.github.com/spec/v1
    oid sha256:787b0c061bde5c1f5e076012c1239340fdb1330787c644977c7cad5cdbe1d548
    size 2885230719

scannetpp.zip (ADDED, Git LFS pointer)

    version https://git-lfs.github.com/spec/v1
    oid sha256:164b2314107e070c7d8a652897404904adf36a8868c2293be04382727d9a19be
    size 1030992424

test-00000-of-00001.parquet (ADDED, Git LFS pointer)

    version https://git-lfs.github.com/spec/v1
    oid sha256:64eb8a4ff3c705038d2c489fb97345c19e33f0a297f440a168e6940e76d329ca
    size 160845