|
|
--- |
|
|
language: |
|
|
- en |
|
|
license: mit |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
task_categories: |
|
|
- image-text-to-text |
|
|
--- |
|
|
|
|
|
<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1> |
|
|
|
|
|
|
|
|
<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> |
|
|
<a href='https://huggingface.co/papers/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a> |
|
|
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a> |
|
|
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a> |
|
|
|
|
|
# Dataset Card for All-Angles Bench |
|
|
|
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
All-Angles Bench is a benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.
|
|
|
|
|
|
|
|
## Dataset Sources |
|
|
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
|
|
|
|
- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset |
|
|
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding |
|
|
|
|
|
|
|
|
## Usage |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
dataset = load_dataset("ch-chenyu/All-Angles-Bench") |
|
|
``` |
|
|
|
|
|
We provide the image files for the EgoHumans portion of the benchmark. Due to licensing restrictions, the Ego-Exo4D images are not included: you must first sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/, then download the dataset and run the preprocessing scripts provided in our GitHub repository to extract the corresponding images.
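Until the Ego-Exo4D images are extracted, you may want to evaluate on the EgoHumans subset only. A minimal sketch of selecting it via the `sourced_dataset` field (the toy records below are illustrative; on the real loaded object the same selection can be done with `dataset.filter(...)`):

```python
# Toy records mimicking the benchmark schema; real entries come from
# load_dataset("ch-chenyu/All-Angles-Bench"), where the equivalent is
# dataset.filter(lambda row: row["sourced_dataset"] == "EgoHumans").
entries = [
    {"index": 1, "sourced_dataset": "EgoHumans"},
    {"index": 2, "sourced_dataset": "Ego-Exo4D"},
    {"index": 3, "sourced_dataset": "EgoHumans"},
]

# Keep only entries whose images ship with this dataset card.
egohumans_only = [e for e in entries if e["sourced_dataset"] == "EgoHumans"]
print(len(egohumans_only))  # → 2
```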
|
|
|
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
|
|
|
|
|
The JSON data contains the following key-value pairs: |
|
|
|
|
|
| Key | Type | Description | |
|
|
|------------------|------------|-----------------------------------------------------------------------------| |
|
|
| `index` | Integer | Unique identifier for the data entry (e.g. `1221`) | |
|
|
| `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) | |
|
|
| `category` | String | Task category (e.g. `"counting"`) | |
|
|
| `pair_idx` | String | Index of a corresponding paired question (if applicable) | |
|
|
| `image_path` | List | Array of input image paths | |
|
|
| `question` | String | Natural language query about the scene | |
|
|
| `A`/`B`/`C` | String | Multiple choice options | |
|
|
| `answer` | String | Correct option label (e.g. `"B"`) | |
|
|
| `sourced_dataset`| String | Source dataset name (e.g. `"EgoHumans"`) | |
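As a sketch of how these fields fit together, the snippet below formats one record into a multiple-choice text prompt. The sample record is illustrative (not an actual benchmark entry); field names follow the schema table above, and the exact prompt wording is an assumption, not the official evaluation prompt.

```python
# Illustrative record following the schema above (not real benchmark data).
sample = {
    "index": 1221,
    "folder": "05_volleyball",
    "category": "counting",
    "image_path": ["05_volleyball/cam01.jpg", "05_volleyball/cam02.jpg"],
    "question": "How many players are visible across all views?",
    "A": "4", "B": "6", "C": "8",
    "answer": "B",
    "sourced_dataset": "EgoHumans",
}

def build_prompt(entry):
    """Turn one QA record into a multiple-choice text prompt for an MLLM."""
    options = "\n".join(f"{k}. {entry[k]}" for k in ("A", "B", "C"))
    return f"{entry['question']}\n{options}\nAnswer with A, B, or C."

print(build_prompt(sample))
```

The images listed in `image_path` would be passed to the model alongside this text, and the model's choice compared against `answer`.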
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
|
|
```bibtex |
|
|
@article{yeh2025seeing, |
|
|
title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs}, |
|
|
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
|
|
journal={arXiv preprint arXiv:2504.15280}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
## Acknowledgements |
|
|
Our benchmark and code repository build on the following works:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d), and
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
We thank the authors for their wonderful work and data.