kentosasaki-jp committed 2995856 (parent: b31f5de)

chore: update
README.md CHANGED
@@ -9,34 +9,26 @@ tags:
 ---
 
 
- # STRIDE-QA-Mini
- 
- ⚠️ **Note**: The contents of STRIDE-QA-Mini differ from those of the latest dataset described in our [arXiv paper](https://arxiv.org/abs/2508.10427).
- 
- **STRIDE-QA-Mini** (**S**patio**T**emporal **R**easoning **I**n **D**riving **E**nvironments for Visual Question Answering) is a compact subset of the STRIDE-QA corpus, built from real urban-driving footage collected by our in-house data-collection vehicles. It is designed for studying spatio-temporal reasoning in autonomous-driving scenes with Vision-Language Models (VLMs).
- 
- The dataset provides four-dimensional context (3-D space plus time) and frames every question in the ego-vehicle coordinate system, encouraging models to reason about where surrounding agents will be in the next one to three seconds, not merely what is visible at the current instant.
- 
- STRIDE-QA-Mini is structured around three successive design principles:
- 
- 1. Object-centric queries
-    The foundation layer asks questions about spatial relations and immediate interactions between pairs of non-ego objects, such as surrounding vehicles, pedestrians, and static infrastructure. These queries measure pure relational understanding that is independent of the ego vehicle.
- 2. Ego-aware queries
-    Building on the object-centric layer, every question is phrased in the ego coordinate frame so that answers are directly actionable for planning and control.
- 3. Prediction-oriented queries
-    Building on the ego-aware layer, we introduce an additional subset of queries that require the model to anticipate the ego vehicle’s spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example:
-    “What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?”
- 
- Together these elements make STRIDE-QA-Mini a concise yet demanding dataset that challenges VLMs to handle not only what they *see* but also what they must predict, skills essential for safe and intelligent autonomous systems.
+ # STRIDE-QA-Dataset-Mini
+ 
+ [![AAAI 2026](https://img.shields.io/badge/AAAI%202026-Oral-red)](https://arxiv.org/abs/2508.10427)
+ [![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://turingmotors.github.io/stride-qa/)
+ [![GitHub](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/turingmotors/STRIDE-QA-Dataset)
+ [![Dataset](https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset)
+ [![Benchmark](https://img.shields.io/badge/🤗%20HuggingFace-Benchmark-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
+ 
+ **STRIDE-QA** is a large-scale visual question answering (VQA) dataset for physically grounded spatiotemporal reasoning in autonomous driving. Constructed from 100 hours of multi-sensor driving data in Tokyo, it offers **16 M QA pairs** over **270 K frames** with dense annotations including 3D bounding boxes, segmentation masks, and multi-object tracks.
+ 
+ ⚠️ **Note**: **STRIDE-QA-Dataset-Mini** is provided as a preliminary version and does not fully match the format of the final dataset.
+ For the final dataset, please refer to: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>.
 
 ## 🔑 Key Features
 
- | Aspect | Details |
+ | Category | Description |
 | --- | --- |
- | **Spatio-temporal focus** | Questions probe object–object, ego–object, and future interaction reasoning. |
- | **Three QA categories** | 1) **Object-centric Spatial QA** — relations between two external objects<br> 2) **Ego-centric Spatial QA** — relations between the ego vehicle and another object<br> 3) **Ego-centric Spatio-temporal QA** — future distance & orientation prediction tasks |
- | **Driving domain** | Real dash-cam footage collected in Tokyo (urban, suburban, highway, various weather). |
- | **Privacy aware** | Faces and license plates are automatically blurred. |
+ | **Object-centric Spatial QA** | Spatial relations between two surrounding agents (single frame). Includes qualitative (e.g., relative position) and quantitative (e.g., distance, angle) questions. |
+ | **Ego-centric Spatial QA** | Spatial relations between the ego vehicle and a surrounding agent (single frame). Covers distance, direction, and size comparisons. |
+ | **Ego-centric Spatiotemporal QA** | Short-term prediction using 4 context frames (2 Hz). Forecasts distance, heading angle, and velocity at t ∈ {1, 2, 3} s. |
 
 ## 🗂️ Data Fields
 
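The prediction-oriented example question above answers with a distance in meters plus a clock-position heading (12 = front, 3 = right, 6 = rear, 9 = left). As a minimal sketch of that output convention, not part of the dataset tooling, the snippet below converts a predicted ego-frame offset into those two quantities; the axis convention (+x forward, +y right, in meters) is an assumption, since the card does not specify one:

```python
import math

def to_distance_and_clock(x_fwd: float, y_right: float) -> tuple[float, int]:
    """Convert an ego-frame offset (meters) into (distance, clock position).

    Clock convention from the example question: 12 = front, 3 = right,
    6 = rear, 9 = left. The +x-forward / +y-right axes are an assumption.
    """
    distance = math.hypot(x_fwd, y_right)
    # Bearing measured clockwise from straight ahead, in degrees.
    bearing_deg = math.degrees(math.atan2(y_right, x_fwd)) % 360.0
    # Each clock hour spans 30 degrees; 0 degrees maps to 12 o'clock.
    clock = round(bearing_deg / 30.0) % 12 or 12
    return distance, clock

# An agent 8 m ahead and 8 m to the right: ~11.3 m away at about 2 o'clock.
print(to_distance_and_clock(8.0, 8.0))
```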
@@ -54,7 +46,7 @@ Together these elements make STRIDE-QA-Mini a concise yet demanding dataset th
 ## 📊 Dataset Statistics
 
 | Category | Source file | QA pairs |
- |----------|-------------|----------|
+ | --- | --- | --- |
 | Object-centric Spatial QA | `object_centric_spatial_qa.json` | **19,895** |
 | Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | **54,390** |
 | Ego-centric Spatio-temporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | **28,935** |
@@ -62,11 +54,14 @@ Together these elements make STRIDE-QA-Mini a concise yet demanding dataset th
 
 ## 🔗 Related Links
 
- - **Paper**: [arXiv:2508.10427](https://arxiv.org/abs/2508.10427)
+ - Project Page: <https://turingmotors.github.io/stride-qa>
+ - GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
+ - STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
+ - STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
 
 ## 📚 Citation
 
- ```
+ ```bibtex
 @misc{strideqa2025,
   title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
   author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
@@ -94,4 +89,4 @@ We would like to acknowledge the use of the following open-source repositories:
 
 ## 🔏 Privacy Protection
 
- To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
+ To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
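For the Ego-centric Spatiotemporal QA category, the new Key Features table above specifies 4 context frames sampled at 2 Hz with forecasts at t ∈ {1, 2, 3} s. As a small sanity check of the implied timing, assuming evenly spaced context frames ending at the current instant (the card does not state the alignment):

```python
# Timing implied by the Key Features row for Ego-centric Spatiotemporal QA.
FRAME_RATE_HZ = 2.0           # context frames sampled at 2 Hz
NUM_CONTEXT_FRAMES = 4
HORIZONS_S = (1.0, 2.0, 3.0)  # forecast targets, seconds after "now"

step_s = 1.0 / FRAME_RATE_HZ  # 0.5 s between consecutive context frames
# Assumption: the last context frame is the current instant (t = 0).
context_times_s = [-(NUM_CONTEXT_FRAMES - 1 - i) * step_s
                   for i in range(NUM_CONTEXT_FRAMES)]

print(context_times_s)  # [-1.5, -1.0, -0.5, 0.0]
print(HORIZONS_S)       # targets at t = 1, 2, 3 s after the last frame
```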
 
9
  ---
10
 
11
 
12
+ # STRIDE-QA-Dataset-Mini
13
 
14
+ [![AAAI 2026](https://img.shields.io/badge/AAAI%202026-Oral-red)](https://arxiv.org/abs/2508.10427)
15
+ [![Project Page](https://img.shields.io/badge/Project-Page-blue)](https://turingmotors.github.io/stride-qa/)
16
+ [![GitHub](https://img.shields.io/badge/GitHub-Code-black?logo=github)](https://github.com/turingmotors/STRIDE-QA-Dataset)
17
+ [![Dataset](https://img.shields.io/badge/🤗%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset)
18
+ [![Benchmark](https://img.shields.io/badge/🤗%20HuggingFace-Benchmark-yellow)](https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench)
19
 
20
+ **STRIDE-QA** is a large-scale visual question answering (VQA) dataset for physically grounded spatiotemporal reasoning in autonomous driving. Constructed from 100 hours of multi-sensor driving data in Tokyo, it offers **16 M QA pairs** over **270 K frames** with dense annotations including 3D bounding boxes, segmentation masks, and multi-object tracks.
21
 
22
+ ⚠️ **Note**: **STRIDE-QA-Dataset-Mini** is provided as a preliminary version and does not fully match the format of the final dataset.
23
+ For the final dataset, please refer to: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>.
 
 
 
 
 
 
 
 
 
 
 
24
 
25
  ## 🔑 Key Features
26
 
27
+ | Category | Description |
28
  | --- | --- |
29
+ | **Object-centric Spatial QA** | Spatial relations between two surrounding agents (single frame). Includes qualitative (e.g., relative position) and quantitative (e.g., distance, angle) questions. |
30
+ | **Ego-centric Spatial QA** | Spatial relations between the ego vehicle and a surrounding agent (single frame). Covers distance, direction, and size comparisons. |
31
+ | **Ego-centric Spatiotemporal QA** | Short-term prediction using 4 context frames (2 Hz). Forecasts distance, heading angle, and velocity at t ∈ {1, 2, 3} s. |
 
32
 
33
  ## 🗂️ Data Fields
34
 
 
46
  ## 📊 Dataset Statistics
47
 
48
  | Category | Source file | QA pairs |
49
+ | --- | --- | --- |
50
  | Object-centric Spatial QA | `object_centric_spatial_qa.json` | **19,895** |
51
  | Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | **54,390** |
52
  | Ego-centric Spatio-temporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | **28,935** |
 
54
 
55
  ## 🔗 Related Links
56
 
57
+ - Project Page: <https://turingmotors.github.io/stride-qa>
58
+ - GitHub: <https://github.com/turingmotors/STRIDE-QA-Dataset>
59
+ - STRIDE-QA-Dataset: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Dataset>
60
+ - STRIDE-QA-Bench: <https://huggingface.co/datasets/turing-motors/STRIDE-QA-Bench>
61
 
62
  ## 📚 Citation
63
 
64
+ ```bibtex
65
  @misc{strideqa2025,
66
  title={STRIDE-QA: Visual Question Answering Dataset for Spatiotemporal Reasoning in Urban Driving Scenes},
67
  author={Keishi Ishihara and Kento Sasaki and Tsubasa Takahashi and Daiki Shiono and Yu Yamaguchi},
 
89
 
90
  ## 🔏 Privacy Protection
91
 
92
+ To ensure privacy protection, human faces and license plates in the images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
ego_centric_spatial_qa.json → annotations/ego_centric_spatial_qa.json RENAMED
File without changes
ego_centric_spatiotemporal_qa_reasoning.json → annotations/ego_centric_spatiotemporal_qa_reasoning.json RENAMED
File without changes
ego_centric_spatiotemporal_qa_short_answer.json → annotations/ego_centric_spatiotemporal_qa_short_answer.json RENAMED
File without changes
object_centric_spatial_qa.json → annotations/object_centric_spatial_qa.json RENAMED
File without changes
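The renames above move the three QA annotation files into `annotations/`. As a minimal loading sketch against a local copy of the dataset (the directory path is hypothetical, and a top-level list of QA records is assumed, since the JSON schema is not shown in this excerpt), the per-category counts from the Dataset Statistics table can be reproduced like so:

```python
import json
from pathlib import Path

# Hypothetical local clone of the dataset repository.
ANNOTATION_DIR = Path("STRIDE-QA-Mini/annotations")

FILES = {
    "Object-centric Spatial QA": "object_centric_spatial_qa.json",
    "Ego-centric Spatial QA": "ego_centric_spatial_qa.json",
    "Ego-centric Spatio-temporal QA": "ego_centric_spatiotemporal_qa_short_answer.json",
}

for category, name in FILES.items():
    with open(ANNOTATION_DIR / name, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a top-level list of QA records
    # Expected counts from the table: 19,895 / 54,390 / 28,935.
    print(f"{category}: {len(records)} QA pairs")
```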