---
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- autonomousdriving
size_categories:
- n<1K
---
# SpatialRGPT-Bench-Extended
[![AAAI 2026](https://img.shields.io/badge/AAAI%202026-Oral-red)](https://arxiv.org/abs/2508.10427)
[![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://turingmotors.github.io/stride-qa/)
[![GitHub](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/turingmotors/STRIDE-QA-Dataset)
[![Dataset](https://img.shields.io/badge/πŸ€—%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset)
[![Benchmark](https://img.shields.io/badge/πŸ€—%20HuggingFace-Benchmark-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
**SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving scene images from Japan. It augments object-centric QA (questions about two objects within an image) with ego-centric QA (questions about the relationship between the ego vehicle and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: <https://arxiv.org/abs/2508.10427>.
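As a minimal sketch of how the benchmark could be inspected with the πŸ€— `datasets` library: the repository ID, split name, and field name `qa_category` below are assumptions for illustration, not the confirmed schema; please check the dataset viewer for the actual column names.

```python
from datasets import load_dataset

# Assumed repository ID and split -- substitute the actual path of this card.
ds = load_dataset("turing-motors/SpatialRGPT-Bench-Extended", split="test")

print(ds)     # feature schema and row count
print(ds[0])  # one QA pair: image plus question/answer fields

# Hypothetical field name: if records carry a category label, the two
# QA types (object-centric vs. ego-centric) can be separated like this.
ego_centric = ds.filter(lambda ex: ex["qa_category"] == "ego-centric")
print(len(ego_centric))
```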
## πŸ”— Related Links
- Project Page: <https://turingmotors.github.io/stride-qa>
- GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
- STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
- STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
## πŸ“š Citation
```bibtex
@article{cheng2024spatialrgpt,
title={SpatialRGPT: Grounded Spatial Reasoning in Vision-Language Models},
author={Cheng, An-Chieh and Yin, Hongxu and Fu, Yang and Guo, Qiushan and Yang, Ruihan and Kautz, Jan and Wang, Xiaolong and Liu, Sifei},
journal={Advances in Neural Information Processing Systems},
volume={37},
pages={135062--135093},
year={2024}
}
@misc{strideqa2025,
title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
year={2025},
eprint={2508.10427},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.10427},
}
```
## πŸ“„ License
SpatialRGPT-Bench-Extended is released under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
## 🀝 Acknowledgements
This benchmark is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
We would like to acknowledge the use of the following open-source repositories:
- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
## πŸ” Privacy Protection
To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).