---
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- autonomousdriving
size_categories:
- n<1K
---
# SpatialRGPT-Bench-Extended
[Paper](https://arxiv.org/abs/2508.10427) | [Project Page](https://turingmotors.github.io/stride-qa/) | [Code](https://github.com/turingmotors/STRIDE-QA-Dataset) | [STRIDE-QA-Dataset](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset) | [STRIDE-QA-Bench](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
**SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving-scene images from Japan. It augments object-centric QA (questions about the spatial relationship between two objects within an image) with ego-centric QA (questions about the relationship between the ego vehicle and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: <https://arxiv.org/abs/2508.10427>.
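The two QA categories described above can be handled separately at evaluation time. The sketch below shows one way to bucket QA pairs by category; the field names (`category`, `question`, `answer`) and the toy records are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch: splitting QA pairs into the two categories
# described in the card. Field names here are hypothetical.

def split_by_category(qa_pairs):
    """Group QA pairs into object-centric and ego-centric buckets."""
    buckets = {"object-centric": [], "ego-centric": []}
    for qa in qa_pairs:
        buckets[qa["category"]].append(qa)
    return buckets

# Toy records mimicking the two QA types (hypothetical, for illustration).
sample = [
    {"category": "object-centric",
     "question": "Which is closer to the camera, the truck or the cyclist?",
     "answer": "the cyclist"},
    {"category": "ego-centric",
     "question": "How far ahead of the ego vehicle is the pedestrian?",
     "answer": "about 8 meters"},
]

buckets = split_by_category(sample)
print(len(buckets["object-centric"]), len(buckets["ego-centric"]))  # prints "1 1"
```

In the released benchmark each bucket would hold 466 pairs rather than one; the point of the sketch is only the per-category split.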
## 🔗 Related Links
- Project Page: <https://turingmotors.github.io/stride-qa>
- GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
- STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
- STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
## 📚 Citation
```bibtex
@article{cheng2024spatialrgpt,
  title={SpatialRGPT: Grounded Spatial Reasoning in Vision-Language Models},
  author={Cheng, An-Chieh and Yin, Hongxu and Fu, Yang and Guo, Qiushan and Yang, Ruihan and Kautz, Jan and Wang, Xiaolong and Liu, Sifei},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={135062--135093},
  year={2024}
}

@misc{strideqa2025,
  title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
  author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
  year={2025},
  eprint={2508.10427},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.10427},
}
```
## 📄 License
STRIDE-QA-Bench is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.
## 🤝 Acknowledgements
This benchmark is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
We would like to acknowledge the use of the following open-source repositories:
- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
## 🔏 Privacy Protection
To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).