---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---

<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>

<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/pdf/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>

# Dataset Card for All-Angles Bench

## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset presents a comprehensive benchmark consisting of over 2,100 human-annotated multi-view question-answer (QA) pairs, spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.
## Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will need to first sign the license agreement in the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/. After signing the license, you can download the dataset and then use the preprocessing scripts provided in our GitHub repository to extract the corresponding images.

## Prepare Full Benchmark Data on Local Machine

1. **Set up Git LFS and clone the benchmark:**
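The clone step above can be sketched as follows. This is a minimal sketch, assuming Git LFS is already installed on your system; the repository URL is inferred from the dataset ID `ch-chenyu/All-Angles-Bench` used in the Usage section.

```shell
# Enable Git LFS support for your user account
# (assumes the git-lfs package is already installed)
git lfs install

# Clone the benchmark from the Hugging Face Hub
# (URL inferred from the dataset ID ch-chenyu/All-Angles-Bench)
git clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```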
To convert the benchmark JSON annotations into TSV format, run the `json2tsv_pair.py` script:

```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data
```

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The JSON data contains the following key-value pairs:

| Key | Type | Description |
|------------------|------------|-----------------------------------------------------------------------------|
| `index` | Integer | Unique identifier for the data entry (e.g. `1221`) |
| `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) |
| `category` | String | Task category (e.g. `"counting"`) |
| `pair_idx` | String | Index of a corresponding paired question (if applicable) |
| `image_path` | List | Array of input image paths |
| `question` | String | Natural language query about the scene |
| `A`/`B`/`C` | String | Multiple choice options |
| `answer` | String | Correct option label (e.g. `"B"`) |
| `sourced_dataset`| String | Source dataset name (e.g. `"EgoHumans"`) |
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```

## Acknowledgements

You may refer to the related work that serves as foundations for our framework and code repository:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d),
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
Thanks for their wonderful work and data.