---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---

<h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>


<a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> 
<a href='https://arxiv.org/pdf/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
<a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
<a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>

# Dataset Card for All-Angles Bench


## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
The dataset presents a comprehensive benchmark consisting of over 2,100 human-annotated multi-view question-answer (QA) pairs, spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.


## Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
- **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding


## Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```
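
As a quick check, you can print the available splits and inspect a single entry. This is a minimal sketch; the split name comes from `datasets` at load time, and the field names are those documented in the Dataset Structure section below.

```python
# Minimal sketch: load the benchmark and inspect one entry.
# Assumes the fields documented in the Dataset Structure section
# (question, A/B/C, answer, sourced_dataset, ...).
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
print(dataset)                      # available splits and row counts

split = list(dataset.keys())[0]     # e.g. "train"
sample = dataset[split][0]
print(sample["question"])
print(sample["A"], sample["B"], sample["C"])
print(sample["answer"], sample["sourced_dataset"])
```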


## Prepare Full Benchmark Data on Local Machine

1. **Set up Git LFS and clone the benchmark:**
```bash
$ conda install -c conda-forge git-lfs
$ git lfs install

$ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```


2. **Download the Ego-Exo4D dataset and extract frames for the benchmark scenes:**

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will first need to sign the license agreement on the official Ego-Exo4D website at https://ego4ddataset.com/egoexo-license/. After signing the license, you can download the dataset (`downscaled_takes/448`) and then use the preprocessing script below to extract the corresponding images.

```bash
$ pip install ego4d --upgrade
$ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448

$ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
```

3. **Transform JSON metadata into benchmark TSV format:**

To convert the metadata from JSON format into a structured TSV format compatible with benchmark evaluation scripts in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), run:
```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json
```
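
To sanity-check the conversion, you can load the generated TSV with pandas and inspect its columns. The output path below is an assumption for illustration; use the file actually written by `json2tsv_pair.py`.

```python
# Minimal sketch: verify the converted TSV file.
# NOTE: "All-Angles-Bench/data.tsv" is a hypothetical output path.
import pandas as pd

df = pd.read_csv("All-Angles-Bench/data.tsv", sep="\t")
print(df.shape)               # number of QA pairs and columns
print(df.columns.tolist())    # column names used by the evaluation scripts
print(df.head())
```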


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->


Each entry in the JSON data contains the following key-value pairs:

| Key              | Type       | Description                                                                 |
|------------------|------------|-----------------------------------------------------------------------------|
| `index`          | Integer    | Unique identifier for the data entry (e.g. `1221`)                          |
| `folder`         | String     | Directory name where the scene is stored (e.g. `"05_volleyball"`)           |
| `category`       | String     | Task category (e.g. `"counting"`)                                           |
| `pair_idx`       | String     | Index of a corresponding paired question (if applicable)                    |
| `image_path`     | List       | Array of input image paths                                                  |
| `question`       | String     | Natural language query about the scene                                      |
| `A`/`B`/`C`      | String     | Multiple choice options                                                     |
| `answer`         | String     | Correct option label (e.g. `"B"`)                                           |
| `sourced_dataset`| String     | Source dataset name (e.g. `"EgoHumans"`)                                    |
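
For illustration, a single entry has roughly the following shape. All values below are hypothetical; the image paths, question, and options are not taken from the benchmark.

```python
# Hypothetical entry matching the schema above (values are illustrative only).
example_entry = {
    "index": 1221,
    "folder": "05_volleyball",
    "category": "counting",
    "pair_idx": "1222",
    "image_path": ["05_volleyball/cam01/00123.jpg", "05_volleyball/cam02/00123.jpg"],
    "question": "How many players are visible across the two views?",
    "A": "4",
    "B": "6",
    "C": "8",
    "answer": "B",
    "sourced_dataset": "EgoHumans",
}
```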





## Citation 

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```

## Acknowledgements
You may refer to the related works that serve as foundations for our framework and code repository:
[EgoHumans](https://github.com/rawalkhirodkar/egohumans),
[Ego-Exo4D](https://github.com/facebookresearch/Ego4d),
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
Thanks for their wonderful work and data.