---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- en
tags:
- turing
---
# STRIDE-QA-Dataset-Mini
STRIDE-QA is a large-scale visual question answering (VQA) dataset for physically grounded spatiotemporal reasoning in autonomous driving. Constructed from 100 hours of multi-sensor driving data in Tokyo, it offers 16 M QA pairs over 270 K frames with dense annotations including 3D bounding boxes, segmentation masks, and multi-object tracks.
⚠️ Note: STRIDE-QA-Dataset-Mini is provided as a preliminary version and does not fully match the format of the final dataset. For the final dataset, please refer to: https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset.
## 🔑 Key Features
| Category | Description |
|---|---|
| Object-centric Spatial QA | Spatial relations between two surrounding agents (single frame). Includes qualitative (e.g., relative position) and quantitative (e.g., distance, angle) questions. |
| Ego-centric Spatial QA | Spatial relations between the ego vehicle and a surrounding agent (single frame). Covers distance, direction, and size comparisons. |
| Ego-centric Spatiotemporal QA | Short-term prediction using 4 context frames (2 Hz). Forecasts distance, heading angle, and velocity at t ∈ {1, 2, 3} s. |
## 🗂️ Data Fields

| Field | Type | Description |
|---|---|---|
| `id` | `str` | Unique sample ID. |
| `image` | `str` | File name of the key frame used in the prompt. |
| `images` | `list[str]` | File names of the four consecutive image frames. Only available in the Ego-centric Spatiotemporal QA category. |
| `conversations` | `list[dict]` | Dialogue in VILA format (`"from": "human"` / `"gpt"`). |
| `bbox` | `list[list[float]]` | Bounding boxes `[x₁, y₁, x₂, y₂]` for referenced regions. |
| `rle` | `list[dict]` | COCO-style run-length masks for regions. |
| `region` | `list[list[int]]` | Region tags mentioned in the prompt. |
| `qa_info` | `list` | Metadata for each message turn in the dialogue. |
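The fields above can be sanity-checked with a small script. This is a minimal sketch, assuming the schema in the table; the sample record and its values are illustrative, not taken from the actual dataset.

```python
# Illustrative record following the schema above (hypothetical values).
sample = {
    "id": "sample_000001",
    "image": "frame_000123.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nHow far ahead is the car in Region [0]?"},
        {"from": "gpt", "value": "It is roughly 12 meters ahead."},
    ],
    "bbox": [[100.0, 200.0, 300.0, 400.0]],
    "rle": [{"size": [1080, 1920], "counts": "..."}],
    "region": [[0]],
}

def check_record(rec):
    """Lightweight schema check for one QA record."""
    assert isinstance(rec["id"], str)
    # Every bounding box is [x1, y1, x2, y2].
    assert all(len(box) == 4 for box in rec["bbox"])
    # Turns come only from "human" or "gpt" in VILA format.
    assert {turn["from"] for turn in rec["conversations"]} <= {"human", "gpt"}
    return True

check_record(sample)
```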
## 📊 Dataset Statistics

| Category | Source file | QA pairs |
|---|---|---|
| Object-centric Spatial QA | `object_centric_spatial_qa.json` | 19,895 |
| Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | 54,390 |
| Ego-centric Spatiotemporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | 28,935 |
| Images | `images/*.jpg` | 5,539 files |
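Per-file QA counts like those above can be reproduced from the JSON annotations. A minimal sketch, assuming each record's `conversations` list alternates human/gpt turns so that one QA pair corresponds to two turns; the in-memory `records` stand in for `json.load`-ing one of the source files listed in the table.

```python
import json

def count_qa_pairs(records):
    # One QA pair = one human turn + one gpt turn (assumption).
    return sum(len(r["conversations"]) // 2 for r in records)

# Stand-in for: records = json.load(open("object_centric_spatial_qa.json"))
records = json.loads("""[
  {"conversations": [{"from": "human", "value": "Q1"},
                     {"from": "gpt", "value": "A1"},
                     {"from": "human", "value": "Q2"},
                     {"from": "gpt", "value": "A2"}]},
  {"conversations": [{"from": "human", "value": "Q3"},
                     {"from": "gpt", "value": "A3"}]}
]""")

print(count_qa_pairs(records))  # 3
```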
## 🔗 Related Links
- Project Page: https://turingmotors.github.io/stride-qa
- GitHub: https://github.com/turingmotors/STRIDE-QA-Dataset
- STRIDE-QA-Dataset: https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset
- STRIDE-QA-Bench: https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench
## 📚 Citation

```bibtex
@misc{strideqa2025,
  title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
  author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
  year={2025},
  eprint={2508.10427},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.10427},
}
```
## 📄 License

STRIDE-QA-Dataset-Mini is released under the CC BY-NC-SA 4.0 license.
## 🤝 Acknowledgements

This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
We would like to acknowledge the use of the following open-source repositories:
- SpatialRGPT for building the dataset generation pipeline
- SAM 2.1 for segmentation mask generation
- dashcam-anonymizer for anonymization
## 🔏 Privacy Protection
To ensure privacy protection, human faces and license plates in the images were anonymized using the Dashcam Anonymizer.