Dataset card metadata:
- Formats: csv
- Languages: English
- Size: 1K - 10K
- Tags: video-question-answering, video-understanding, gameplay-understanding, multi-video, benchmark, ego-centric
Update README.md
README.md
CHANGED
````diff
@@ -159,13 +159,13 @@ configs:
 <!-- MARK: Badges -->
 <br>
 <div align="center">
-<a href="https://hats-ict.github.io/gameplayqa/"><img src="https://img.shields.io/static/v1?label=GameplayQA
-<a href="
+<a href="https://hats-ict.github.io/gameplayqa/"><img src="https://img.shields.io/static/v1?label=GameplayQA%20Project%20Homepage&message=Website&color=9a33fc&logo=githubpages" style="height: 25px;"></a>
+<a href="https://arxiv.org/abs/2603.24329"><img src="https://img.shields.io/static/v1?label=Paper&message=arXiv&color=FF0066&logo=arxiv" style="height: 25px;"></a>
 <a href="https://huggingface.co/datasets/wangyz1999/GameplayQA"><img src="https://img.shields.io/static/v1?label=Dataset&message=HuggingFace&color=FF6600&logo=huggingface" style="height: 25px;"></a>
 <br>
-<a href="https://github.com/wangyz1999/sync-video-label"><img src="https://img.shields.io/static/v1?label=Annotation
-<a href="https://sync-video-label.vercel.app/"><img src="https://img.shields.io/static/v1?label=Annotation
-<a href="https://www.youtube.com/watch?v=PKedELJ4XT0"><img src="https://img.shields.io/static/v1?label=Annotation
+<a href="https://github.com/wangyz1999/sync-video-label"><img src="https://img.shields.io/static/v1?label=Annotation%20Tool&message=Github&color=6699FF&logo=github" style="height: 25px;"></a>
+<a href="https://sync-video-label.vercel.app/"><img src="https://img.shields.io/static/v1?label=Annotation%20Tool&message=Live%20Demo&color=33CCCC&logo=vercel" style="height: 25px;"></a>
+<a href="https://www.youtube.com/watch?v=PKedELJ4XT0"><img src="https://img.shields.io/static/v1?label=Annotation%20Tool%20Demo&message=YouTube&color=FF0000&logo=youtube" style="height: 25px;"></a>
 <a href="https://huggingface.co/datasets/wangyz1999/X-EGO-CS"><img src="https://img.shields.io/static/v1?label=Related&message=X-EGO-CS&color=FFCC00&logo=huggingface" style="height: 25px;"></a>
 </div>
 
@@ -397,3 +397,15 @@ A total of 2,709 true labels were annotated across 2,219 seconds of footage, yie
 ## Citation
 
 **If our research is helpful to you, please cite our paper:**
+
+```bibtex
+@article{wang2026gameplayqa,
+  title = {GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents},
+  author = {Wang, Yunzhe and Xu, Runhui and Zheng, Kexin and Zhang, Tianyi and Kogundi, Jayavibhav Niranjan and Hans, Soham and Ustun, Volkan},
+  year = {2026},
+  eprint = {2603.24329},
+  archivePrefix = {arXiv},
+  primaryClass = {cs.CL},
+  url = {https://arxiv.org/abs/2603.24329}
+}
+```
````
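The badges added in this diff are all shields.io `static/v1` badges: the label, message, color, and logo are URL query parameters, with spaces percent-encoded as `%20`. A minimal sketch of composing one such URL (the `badge_url` helper is hypothetical, not part of the repo; values taken from the first added badge):

```python
from urllib.parse import urlencode, quote


def badge_url(label: str, message: str, color: str, logo: str) -> str:
    """Build a shields.io static/v1 badge URL from its query parameters."""
    params = {"label": label, "message": message, "color": color, "logo": logo}
    # urlencode's default quoting turns spaces into '+'; shields.io badges in
    # the diff use '%20', which `quote` produces.
    query = urlencode(params, quote_via=quote)
    return f"https://img.shields.io/static/v1?{query}"


url = badge_url("GameplayQA Project Homepage", "Website", "9a33fc", "githubpages")
print(url)
# → https://img.shields.io/static/v1?label=GameplayQA%20Project%20Homepage&message=Website&color=9a33fc&logo=githubpages
```

This reproduces the `href`-wrapped `img src` value on the first `+` line of the badge hunk, which makes it easy to keep the remaining badges consistent when editing them.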