---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- image-text-to-text
---
<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>
<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://huggingface.co/papers/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>
# Dataset Card for All-Angles Bench
## Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The dataset presents a comprehensive benchmark consisting of over 2,100 human-annotated multi-view question-answer (QA) pairs, spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```
We provide the image files for the EgoHumans scenes. For the Ego-Exo4D scenes, licensing restrictions prevent us from redistributing the images: first sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/, then download the dataset and use the preprocessing scripts provided in our GitHub repository to extract the corresponding images.
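As a rough check after extraction, the sketch below (a minimal example, assuming the scene folders sit under a local directory, here called `DATA_ROOT`, an assumed name rather than part of the official tooling) reports which benchmark entries still lack images on disk:
```python
import os
from datasets import load_dataset

DATA_ROOT = "All-Angles-Bench"  # hypothetical local directory holding the scene folders

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

missing = []
for split in dataset:                      # iterate over whatever splits are present
    for example in dataset[split]:
        paths = [os.path.join(DATA_ROOT, p) for p in example["image_path"]]
        if not all(os.path.exists(p) for p in paths):
            missing.append((example["sourced_dataset"], example["index"]))

print(f"{len(missing)} entries still need their images extracted")
```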
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each entry in the JSON data contains the following key-value pairs:
| Key | Type | Description |
|------------------|------------|-----------------------------------------------------------------------------|
| `index` | Integer | Unique identifier for the data entry (e.g. `1221`) |
| `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) |
| `category` | String | Task category (e.g. `"counting"`) |
| `pair_idx` | String | Index of a corresponding paired question (if applicable) |
| `image_path` | List | Array of input image paths |
| `question` | String | Natural language query about the scene |
| `A`/`B`/`C`       | String     | Multiple-choice options                                                       |
| `answer` | String | Correct option label (e.g. `"B"`) |
| `sourced_dataset`| String | Source dataset name (e.g. `"EgoHumans"`) |
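As an illustration of these fields, the sketch below assembles one entry into a multiple-choice prompt; the prompt template is an assumption for demonstration purposes, not the evaluation protocol used in the paper:
```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

split = next(iter(dataset))   # first available split (name not assumed)
example = dataset[split][0]

# Build an illustrative multiple-choice prompt from one entry.
prompt = (
    f"{example['question']}\n"
    f"A. {example['A']}\n"
    f"B. {example['B']}\n"
    f"C. {example['C']}\n"
    "Answer with the option letter."
)
images = example["image_path"]   # list of multi-view image paths for the scene
label = example["answer"]        # correct option label, e.g. "B"
print(prompt, images, label, sep="\n")
```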
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@article{yeh2025seeing,
title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
journal={arXiv preprint arXiv:2504.15280},
year={2025}
}
```
## Acknowledgements
Our benchmark and code repository build on the following works:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d), and
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
We thank the authors for their wonderful work and data.