# STRIDE-QA-Mini

⚠️ **Note**: The contents of STRIDE-QA-Mini differ from those of the latest dataset described in our [arXiv paper](https://arxiv.org/abs/2508.10427).

**STRIDE-QA-Mini** (**S**patio**T**emporal **R**easoning **I**n **D**riving **E**nvironments for Visual **Q**uestion **A**nswering) is a compact subset of the STRIDE-QA corpus, built from real urban-driving footage collected by our in-house data-collection vehicles. It is designed for studying spatio-temporal reasoning in autonomous-driving scenes with Vision-Language Models (VLMs).

Together, these elements make STRIDE-QA-Mini a concise yet challenging dataset that pushes VLMs to handle not only what they *see* but also what they must *predict*, skills essential for safe and intelligent autonomous systems.

## 🔑 Key Features

| Aspect | Details |
| --- | --- |
| **Driving domain** | Real dash-cam footage collected in Tokyo (urban, suburban, highway, various weather). |
| **Privacy aware** | Faces and license plates are automatically blurred. |

## 🗂️ Data Fields

| Field | Type | Description |
| --- | --- | --- |
| `region` | `list[list[int]]` | Region tags mentioned in the prompt. |
| `qa_info` | `list` | Metadata for each message turn in the dialogue. |

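To make the schema concrete, here is a minimal sketch of parsing one record in Python. Only `region` and `qa_info` are documented above; the sample values, the `image` and `messages` fields, and the reading of each `region` entry as a pixel bounding box are illustrative assumptions, not the official schema.

```python
import json

# Hypothetical sample record; only `region` and `qa_info` are documented
# in this card, the remaining fields and all values are assumed.
sample = json.loads("""
{
  "image": "images/frame_000001.jpg",
  "messages": [
    {"role": "user", "content": "How far ahead is the vehicle in <region0>?"},
    {"role": "assistant", "content": "It is roughly 12 meters ahead."}
  ],
  "region": [[483, 295, 781, 422]],
  "qa_info": [{"category": "ego_centric_spatial_qa"}]
}
""")

# Assumption: the i-th <regioni> tag in the prompt maps to region[i].
for i, box in enumerate(sample["region"]):
    print(f"<region{i}> -> {box}")
```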
## 📊 Dataset Statistics

| Category | Source file | QA pairs |
| --- | --- | --- |
| Object-centric Spatial QA | `object_centric_spatial_qa.json` | **19,895** |
| Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | **54,390** |
| Ego-centric Spatio-temporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | **28,935** |
| Images | `images/*.jpg` | **5,539** files |

**Total QA pairs:** 103,220

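The per-file counts can be tallied with a few lines of Python, assuming each source file is a flat JSON list of QA records (an assumption about the on-disk layout; the file names come from the table above). The sketch below creates tiny stand-in files so it runs without downloading the dataset; point `data_dir` at your local copy to count the real QA pairs.

```python
import json
import tempfile
from pathlib import Path

SOURCE_FILES = [
    "object_centric_spatial_qa.json",
    "ego_centric_spatial_qa.json",
    "ego_centric_spatiotemporal_qa_short_answer.json",
]

# Stand-in directory with placeholder files; replace with the path to your
# downloaded copy of STRIDE-QA-Mini.
data_dir = Path(tempfile.mkdtemp())
for name in SOURCE_FILES:
    (data_dir / name).write_text(json.dumps([{"id": i} for i in range(3)]))

# Assumes each file deserializes to a list of QA records.
counts = {name: len(json.loads((data_dir / name).read_text()))
          for name in SOURCE_FILES}
total = sum(counts.values())
print(counts)
print("total QA pairs:", total)
```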
## 🔗 Related Links

- **Paper**: [arXiv:2508.10427](https://arxiv.org/abs/2508.10427)

## 📚 Citation

```bibtex
@misc{strideqa2025,
      title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
      author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
      year={2025},
      eprint={2508.10427},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.10427},
}
```

## 📄 License

STRIDE-QA-Mini is released under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

## 🤝 Acknowledgements

This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

We would like to acknowledge the use of the following open-source repositories:

- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization

## 🔏 Privacy Protection

To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).