---
license: cc-by-nc-sa-4.0
---

# SpatialRGPT-Bench-Extended

**SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving-scene images from Japan. It augments object-centric QA (questions about the spatial relationship between two objects within an image) with ego-centric QA (questions about the relationship between the ego vehicle and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: [STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes](https://arxiv.org/abs/2508.10427).
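As an illustration of the two QA categories described above, the sketch below tallies QA pairs per category over a few hypothetical records. The field names (`qa_category`, `question`, `answer`) and the example rows are assumptions for illustration only, not the dataset's documented schema:

```python
from collections import Counter

# Illustrative records mimicking the two QA categories described above.
# Field names ("qa_category", "question", "answer") are assumptions,
# not the dataset's confirmed columns.
records = [
    {"qa_category": "object-centric",
     "question": "Which object is closer to the camera, the truck or the sign?",
     "answer": "the truck"},
    {"qa_category": "ego-centric",
     "question": "How far ahead is the pedestrian?",
     "answer": "about 8 meters"},
    {"qa_category": "object-centric",
     "question": "Is the bus to the left of the taxi?",
     "answer": "yes"},
]

def count_by_category(rows):
    """Tally QA pairs per category; the full benchmark reports
    466 pairs in each of the two categories."""
    return Counter(r["qa_category"] for r in rows)

print(count_by_category(records))
# Counter({'object-centric': 2, 'ego-centric': 1})
```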
## 🔗 Related Links

- **Paper**: [arXiv:2508.10427](https://arxiv.org/abs/2508.10427)
- **GitHub**: [turingmotors/STRIDE-QA-Dataset](https://github.com/turingmotors/STRIDE-QA-Dataset)

## 📚 Citation

```bibtex
@article{cheng2024spatialrgpt,
  title={SpatialRGPT: Grounded spatial reasoning in vision-language models},
  author={Cheng, An-Chieh and Yin, Hongxu and Fu, Yang and Guo, Qiushan and Yang, Ruihan and Kautz, Jan and Wang, Xiaolong and Liu, Sifei},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={135062--135093},
  year={2024}
}

@misc{strideqa2025,
  title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
  author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
  year={2025},
  eprint={2508.10427},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.10427},
}
```

## 📄 License

SpatialRGPT-Bench-Extended is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.

## 🤝 Acknowledgements

This benchmark is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

We would like to acknowledge the use of the following open-source repositories:

- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization

## 🔏 Privacy Protection

To protect privacy, human faces and license plates in the images were anonymized using [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer).