---
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
language:
- en
tags:
- turing
---
# STRIDE-QA-Dataset-Mini
[**Paper**](https://arxiv.org/abs/2508.10427) | [**Project Page**](https://turingmotors.github.io/stride-qa/) | [**GitHub**](https://github.com/turingmotors/STRIDE-QA-Dataset) | [**STRIDE-QA-Dataset**](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset) | [**STRIDE-QA-Bench**](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
**STRIDE-QA** is a large-scale visual question answering (VQA) dataset for physically grounded spatiotemporal reasoning in autonomous driving. Constructed from 100 hours of multi-sensor driving data in Tokyo, it offers **16 M QA pairs** over **270 K frames** with dense annotations including 3D bounding boxes, segmentation masks, and multi-object tracks.
⚠️ **Note**: **STRIDE-QA-Dataset-Mini** is provided as a preliminary version and does not fully match the format of the final dataset.
For the final dataset, please refer to: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>.
## 🔑 Key Features
| Category | Description |
| --- | --- |
| **Object-centric Spatial QA** | Spatial relations between two surrounding agents (single frame). Includes qualitative (e.g., relative position) and quantitative (e.g., distance, angle) questions. |
| **Ego-centric Spatial QA** | Spatial relations between the ego vehicle and a surrounding agent (single frame). Covers distance, direction, and size comparisons. |
| **Ego-centric Spatiotemporal QA** | Short-term prediction using 4 context frames (2 Hz). Forecasts distance, heading angle, and velocity at t ∈ {1, 2, 3} s. |
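The timing of the spatiotemporal QA setup can be sketched as below. Note that the exact alignment of the 4 context frames relative to the key frame is an assumption here (the card only states 4 frames at 2 Hz); the frames are assumed to end at the key frame at t = 0 s.

```python
# Sketch of the Ego-centric Spatiotemporal QA timing. Assumption: the 4
# context frames, sampled at 2 Hz, end at the key frame (t = 0 s).
CONTEXT_HZ = 2       # context frame rate from the table above
N_CONTEXT = 4        # number of context frames

# Timestamps of the context frames relative to the key frame, in seconds.
context_times = [(i - N_CONTEXT + 1) / CONTEXT_HZ for i in range(N_CONTEXT)]

# Prediction horizons from the table above: t in {1, 2, 3} s.
horizons = [1, 2, 3]

print(context_times)  # [-1.5, -1.0, -0.5, 0.0]
print(horizons)       # [1, 2, 3]
```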
## 🗂️ Data Fields
| Field | Type | Description |
| --- | --- | --- |
| `id` | `str` | Unique sample ID. |
| `image` | `str` | File name of the key frame used in the prompt. |
| `images` | `list[str]` | File names of the four consecutive image frames. Only available in the Ego-centric Spatiotemporal QA category. |
| `conversations` | `list[dict]` | Dialogue in VILA format (`"from": "human"` / `"gpt"`). |
| `bbox` | `list[list[float]]` | Bounding boxes \[x₁, y₁, x₂, y₂] for referenced regions. |
| `rle` | `list[dict]` | COCO-style run-length masks for regions. |
| `region` | `list[list[int]]` | Region tags mentioned in the prompt. |
| `qa_info` | `list` | Metadata for each message turn in the dialogue. |
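The fields above can be parsed as shown in this minimal sketch. The sample record is invented for illustration (the field values, the region-tag prompt style, and the `qa_info` contents are assumptions); real records live in the source JSON files listed in the statistics below.

```python
# Illustrative record following the schema in the Data Fields table.
# All values here are made up; real samples come from files such as
# object_centric_spatial_qa.json.
sample = {
    "id": "sample_000001",
    "image": "frame_000123.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhere is region 1 relative to region 2?"},
        {"from": "gpt", "value": "Region 1 is to the left of region 2."},
    ],
    "bbox": [[100.0, 200.0, 180.0, 260.0], [300.0, 210.0, 360.0, 280.0]],
    "rle": [{"size": [1080, 1920], "counts": "..."}],
    "region": [[1], [2]],
    "qa_info": [{"category": "object_centric_spatial"}],
}

def iter_qa_turns(record):
    """Yield (question, answer) pairs from a VILA-format conversation,
    which alternates "human" and "gpt" turns."""
    convs = record["conversations"]
    for q, a in zip(convs[0::2], convs[1::2]):
        assert q["from"] == "human" and a["from"] == "gpt"
        yield q["value"], a["value"]

for question, answer in iter_qa_turns(sample):
    print(question.splitlines()[-1], "->", answer)
```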
## 📊 Dataset Statistics
| Category | Source file | QA pairs |
| --- | --- | --- |
| Object-centric Spatial QA | `object_centric_spatial_qa.json` | **19,895** |
| Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | **54,390** |
| Ego-centric Spatiotemporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | **28,935** |
| Images | `images/*.jpg` | **5,539** files |
## 🔗 Related Links
- Project Page: <https://turingmotors.github.io/stride-qa>
- GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
- STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
- STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
## 📚 Citation
```bibtex
@misc{strideqa2025,
title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
year={2025},
eprint={2508.10427},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.10427},
}
```
## 📄 License
STRIDE-QA-Dataset-Mini is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.
## 🤝 Acknowledgements
This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
We would like to acknowledge the use of the following open-source repositories:
- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
## 🔏 Privacy Protection
To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).