kentosasaki-jp committed
Commit 9e58979 · 1 Parent(s): e787959

chore: update README

Files changed (1): README.md (+13 −6)
@@ -12,17 +12,24 @@ size_categories:
 
  # SpatialRGPT-Bench-Extended
 
- **SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving scene images from Japan. It augments object-centric QA (questions about two objects within an image) with ego-centric QA (questions about the relationship between the ego and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: [STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes](https://arxiv.org/abs/2508.10427).
+ [![AAAI 2026](https://img.shields.io/badge/AAAI%202026-Oral-red)](https://arxiv.org/abs/2508.10427)
+ [![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://turingmotors.github.io/stride-qa/)
+ [![GitHub](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/turingmotors/STRIDE-QA-Dataset)
+ [![Dataset](https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset)
+ [![Benchmark](https://img.shields.io/badge/🤗%20HuggingFace-Benchmark-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
 
- ## 🔗 Related Links
+ **SpatialRGPT-Bench-Extended** is an extension of [SpatialRGPT-Bench](https://huggingface.co/datasets/a8cheng/SpatialRGPT-Bench) that incorporates driving scene images from Japan. It augments object-centric QA (questions about two objects within an image) with ego-centric QA (questions about the relationship between the ego vehicle and a single object in the image). Each QA category contains 466 QA pairs. For further details, please refer to our paper: <https://arxiv.org/abs/2508.10427>.
 
- - **Paper**: [arXiv:2508.10427](https://arxiv.org/abs/2508.10427)
- - **GitHub**: [turingmotors/STRIDE-QA-Dataset](https://github.com/turingmotors/STRIDE-QA-Dataset)
+ ## 🔗 Related Links
+ 
+ - Project Page: <https://turingmotors.github.io/stride-qa>
+ - GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
+ - STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
+ - STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
 
  ## 📚 Citation
 
- ```
+ ```bibtex
  @article{cheng2024spatialrgpt,
   title={Spatialrgpt: Grounded spatial reasoning in vision-language models},
   author={Cheng, An-Chieh and Yin, Hongxu and Fu, Yang and Guo, Qiushan and Yang, Ruihan and Kautz, Jan and Wang, Xiaolong and Liu, Sifei},
@@ -59,4 +66,4 @@ We would like to acknowledge the use of the following open-source repositories:
 
  ## 🔒 Privacy Protection
 
- To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
+ To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
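
The README above distinguishes two QA categories, object-centric and ego-centric, with 466 pairs each. As a minimal sketch of working with that split, records could be tallied per category like this (the `qa_category` field name and the sample records are hypothetical illustrations, not the dataset's documented schema):

```python
from collections import Counter

# Hypothetical QA records; the "qa_category" key is an assumed field
# name for illustration only.
qa_pairs = [
    {"qa_category": "object-centric", "question": "Which of the two cars is closer to the traffic light?"},
    {"qa_category": "ego-centric", "question": "How far is the pedestrian from the ego vehicle?"},
    {"qa_category": "object-centric", "question": "Is the bus to the left of the sign?"},
]

# Count QA pairs per category, mirroring the object-centric / ego-centric division.
counts = Counter(qa["qa_category"] for qa in qa_pairs)
print(counts["object-centric"], counts["ego-centric"])  # prints: 2 1
```

On the real benchmark, the same tally over both categories would be expected to yield 466 each.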