Dataset metadata — Modalities: Image · Languages: English · Size: < 1K · Libraries: Datasets · License: CC BY-NC-SA 4.0
Commit e787959 committed by kentosasaki-jp · 2 parent(s): 8684603, fc82317

Merge branch 'main' of https://huggingface.co/datasets/turing-motors/SpatialRGPT-Bench-Extended

Files changed (1): README.md (+62, -54)
@@ -1,54 +1,62 @@
 ---
 license: cc-by-nc-sa-4.0
+task_categories:
+- visual-question-answering
+language:
+- en
+tags:
+- autonomousdriving
+size_categories:
+- n<1K
 ---
 
 # SpatialRGPT-Bench-Extended
 
 **SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving scene images from Japan. It augments object-centric QA (questions about two objects within an image) with ego-centric QA (questions about the relationship between the ego and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: [STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes](https://arxiv.org/abs/2508.10427).
 
 ## 🔗 Related Links
 
 - **Paper**: [arXiv:2508.10427](https://arxiv.org/abs/2508.10427)
 - **GitHub**: [turingmotors/STRIDE-QA-Dataset](https://github.com/turingmotors/STRIDE-QA-Dataset)
 
 
 ## 📚 Citation
 
 ```
 @article{cheng2024spatialrgpt,
   title={Spatialrgpt: Grounded spatial reasoning in vision-language models},
   author={Cheng, An-Chieh and Yin, Hongxu and Fu, Yang and Guo, Qiushan and Yang, Ruihan and Kautz, Jan and Wang, Xiaolong and Liu, Sifei},
   journal={Advances in Neural Information Processing Systems},
   volume={37},
   pages={135062--135093},
   year={2024}
 }
 
 @misc{strideqa2025,
   title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
   author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
   year={2025},
   eprint={2508.10427},
   archivePrefix={arXiv},
   primaryClass={cs.CV},
   url={https://arxiv.org/abs/2508.10427},
 }
 ```
 
 ## 📄 License
 
 STRIDE-QA-Bench is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
 
 ## 🤝 Acknowledgements
 
 This benchmark is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
 
 We would like to acknowledge the use of the following open-source repositories:
 
 - [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building dataset generation pipeline
 - [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
 - [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
 
 ## 🔍 Privacy Protection
 
 To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
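The card above says the benchmark pairs object-centric QA with ego-centric QA, 466 pairs per category. A minimal sketch of a per-category sanity check one might run after downloading the data (e.g. via the 🤗 `datasets` library) is shown below on toy records — the `qa_category` field name and its labels are assumptions for illustration, not part of the card:

```python
from collections import Counter

# Toy rows standing in for downloaded QA pairs. The "qa_category" field
# and its labels are hypothetical; the card only states that each of the
# two categories (object-centric, ego-centric) contains 466 QA pairs.
rows = (
    [{"qa_category": "object-centric"}] * 466
    + [{"qa_category": "ego-centric"}] * 466
)

# Tally QA pairs per category and verify the totals the card reports.
counts = Counter(r["qa_category"] for r in rows)
print(counts["object-centric"], counts["ego-centric"], sum(counts.values()))
```

Running the same tally over the real benchmark would confirm whether each split matches the 466/466 breakdown described in the card.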