Modalities: Image · Languages: English · Libraries: Datasets

Commit 154318d (verified), "Update README.md", by keishihara · Parent(s): 352909a

Files changed: README.md (+77, -1)
language:
- en
tags:
- turing
---

# STRIDE-QA-Mini

## Dataset Description

**STRIDE-QA-Mini** (**S**patio**T**emporal **R**easoning **I**n **D**riving **E**nvironments for Visual Question Answering) is a compact subset of the STRIDE-QA corpus, built from real urban-driving footage collected by our in-house data-collection vehicles. It is designed for studying spatio-temporal reasoning in autonomous-driving scenes with Vision-Language Models (VLMs).

The dataset provides four-dimensional context (3-D space plus time) and frames every question in the ego-vehicle coordinate system, encouraging models to reason about where surrounding agents will be in the next one to three seconds, not merely what is visible at the current instant.

STRIDE-QA-Mini is structured around three successive design principles:

1. Object-centric queries
   The foundation layer asks questions about spatial relations and immediate interactions between pairs of non-ego objects, such as surrounding vehicles, pedestrians, and static infrastructure. These queries measure pure relational understanding, independent of the ego vehicle.
2. Ego-aware queries
   Building on the object-centric layer, every question is phrased in the ego coordinate frame so that answers are directly actionable for planning and control.
3. Prediction-oriented queries
   Building on the ego-aware layer, an additional subset of queries requires the model to anticipate the ego vehicle’s spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example:
   *“What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?”*

Together, these elements make STRIDE-QA-Mini a concise yet demanding dataset that challenges VLMs to handle not only what they *see* but also what they must *predict*, skills essential for safe and intelligent autonomous systems.

## Key Features

| Aspect | Details |
| --- | --- |
| **Spatio-temporal focus** | Questions probe object–object, ego–object, and future interaction reasoning. |
| **Three QA categories** | 1) **Object-centric Spatial QA** — relations between two external objects<br>2) **Ego-centric Spatial QA** — relations between the ego vehicle and another object<br>3) **Ego-centric Spatio-temporal QA** — future distance & orientation prediction tasks |
| **Driving domain** | Real dash-cam footage collected in Tokyo (urban, suburban, and highway scenes in various weather). |
| **Privacy aware** | Faces and license plates are automatically blurred. |

## Dataset Statistics

| Category | Source file | QA pairs |
| --- | --- | --- |
| Object-centric Spatial QA | `object_centric_spatial_qa.json` | **19 895** |
| Ego-centric Spatial QA | `ego_centric_spatial_qa.json` | **54 390** |
| Ego-centric Spatio-temporal QA | `ego_centric_spatiotemporal_qa_short_answer.json` | **28 935** |
| Images | `images/*.jpg` | **5 539** files |

**Total QA pairs:** 103 220

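As a sanity check, the per-category counts above can be recomputed from the raw files. The sketch below is illustrative, not part of the dataset tooling, and assumes each source file is a JSON array of QA records:

```python
import json
from pathlib import Path

CATEGORY_FILES = [
    "object_centric_spatial_qa.json",
    "ego_centric_spatial_qa.json",
    "ego_centric_spatiotemporal_qa_short_answer.json",
]

def count_qa_pairs(data_dir: str) -> dict:
    """Count QA records per category file, plus a grand total.

    Assumes each file is a JSON array of QA records, as listed in the
    Dataset Statistics table.
    """
    counts = {}
    for name in CATEGORY_FILES:
        with open(Path(data_dir) / name, encoding="utf-8") as f:
            counts[name] = len(json.load(f))
    counts["total"] = sum(counts.values())
    return counts
```

On a full local copy of the dataset, `counts["total"]` should come out to 103 220.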
## Data Fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | `str` | Unique sample ID. |
| `image` | `str` | File name of the key frame used in the prompt. |
| `images` | `list[str]` | File names of the four consecutive image frames. Only available in the Ego-centric Spatio-temporal QA category. |
| `conversations` | `list[dict]` | Dialogue in VILA format (`"from": "human"` / `"gpt"`). |
| `bbox` | `list[list[float]]` | Bounding boxes \[x₁, y₁, x₂, y₂] for referenced regions. |
| `rle` | `list[dict]` | COCO-style run-length masks for regions. |
| `region` | `list[list[int]]` | Region tags mentioned in the prompt. |
| `qa_info` | `list` | Metadata for each message turn in the dialogue. |

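For orientation, one way to walk the `conversations` field is sketched below. It assumes the VILA convention of alternating `{"from": "human"/"gpt", "value": ...}` turns; the helper name is ours, not part of the dataset:

```python
def qa_turns(conversations):
    """Pair each human prompt with the gpt response that follows it.

    Assumes VILA-style turns: dicts with "from" ("human" or "gpt")
    and "value" (the message text).
    """
    pairs = []
    for turn in conversations:
        if turn.get("from") == "human":
            pairs.append([turn.get("value", ""), None])
        elif turn.get("from") == "gpt" and pairs and pairs[-1][1] is None:
            pairs[-1][1] = turn.get("value", "")
    # Keep only fully formed (prompt, response) pairs.
    return [tuple(p) for p in pairs if p[1] is not None]
```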
## Usage

A minimal loading example is provided in [`tutorial.ipynb`](https://www.notion.so/turing-motors/tutorial.ipynb).

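Independent of the notebook, a standard-library-only sketch follows, assuming a local copy of the dataset laid out as in the tables above (the directory name is a placeholder):

```python
import json
from pathlib import Path

def load_category(data_dir, category_file):
    """Load one QA category file (assumed to be a JSON array of records)."""
    with open(Path(data_dir) / category_file, encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    # "STRIDE-QA-Mini" stands in for wherever you downloaded the dataset.
    samples = load_category("STRIDE-QA-Mini", "ego_centric_spatial_qa.json")
    first = samples[0]
    # The key frame lives under images/; `image` holds just the file name.
    print(first["id"], Path("STRIDE-QA-Mini") / "images" / first["image"])
```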
## Privacy Protection

To protect privacy, human faces and license plates in STRIDE-QA-Mini images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).

## License

STRIDE-QA-Mini is released under the [**Apache License 2.0**](https://www.apache.org/licenses/LICENSE-2.0).

## Acknowledgements

This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

We would like to acknowledge the use of the following open-source repositories:

- [SAM 2](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT) for the dataset generation pipeline