Tasks: Question Answering · Modalities: Image · Formats: imagefolder · Languages: English · Size: 1K - 10K · Tags: turing
Building on the ego-aware layer, we introduce an additional subset of queries that require the model to anticipate the ego vehicle’s spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example:

“What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?”

Together, these elements make STRIDE-QA-Mini a concise yet demanding dataset that challenges VLMs to handle not only what they *see* but also what they must *predict*, skills essential for safe and intelligent autonomous systems.
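Answers in this forecasting subset pair a metric distance with a clock-position heading. As a minimal illustration (the helper below is our own sketch, not part of the dataset tooling, and the ego-frame axis convention is an assumption), such an answer can be mapped to a 2D offset for comparison against ground truth:

```python
import math

def clock_to_offset(clock: int, distance_m: float) -> tuple[float, float]:
    """Convert a clock-position heading (12 = front, 3 = right, 6 = rear,
    9 = left) and a distance in meters into an (x, y) offset in the ego
    frame. Assumed convention: +x points right, +y points forward.
    """
    if not 1 <= clock <= 12:
        raise ValueError("clock position must be in 1..12")
    # 12 o'clock is straight ahead (0 rad); each hour step is 30 degrees clockwise.
    angle = math.radians((clock % 12) * 30.0)
    return (distance_m * math.sin(angle), distance_m * math.cos(angle))
```

For example, an answer of roughly “10 m at 3 o’clock” maps to a point 10 m directly to the right of the ego vehicle.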
## Key Features
To ensure privacy protection, human faces and license plates in STRIDE-QA-Mini images are anonymized.
## License
STRIDE-QA-Mini is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.
## Acknowledgements
This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
We would like to acknowledge the use of the following open-source repositories:
- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building the dataset generation pipeline
- [SAM 2.1](https://github.com/facebookresearch/sam2) for segmentation mask generation
- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization