Modalities: Image · Languages: English · Libraries: Datasets
KentoSasaki committed (verified) · commit d711e64 · 1 parent: 71d2938

Update README.md

Files changed (1): README.md (+5 -5)
@@ -27,7 +27,7 @@ Building on the object-centric layer, every question is phrased in the ego coord
 Building on the ego-aware layer, we introduce an additional subset of queries that require the model to anticipate the ego vehicle’s spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example:
 “What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?”
 
-Together these elements make STRIDE-QA-Mini a concise yet challenging dataset that challenges VLMs to handle not only what they *see* but also what they must predict*,* skills essential for safe and intelligent autonomous systems.
+Together these elements make STRIDE-QA-Mini a concise yet challenging dataset that challenges VLMs to handle not only what they *see* but also what they must predict, skills essential for safe and intelligent autonomous systems.
 
 ## Key Features
 
@@ -69,7 +69,7 @@ To ensure privacy protection, human faces and license plates in STRIDE-QA-Mini i
 
 ## License
 
-STRIDE-QA-Mini is released under the CC BY-NC-SA 4.0[https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en].
+STRIDE-QA-Mini is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
 
 ## Acknowledgements
 
@@ -77,6 +77,6 @@ This dataset is based on results obtained from a project, JPNP20017, subsidized
 
 We would like to acknowledge the use of the following open-source repositories:
 
-- [**SAM 2**](https://github.com/facebookresearch/sam2) for segmentation mask generation
-- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization
-- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building dataset generation pipeline
+- [SpatialRGPT](https://github.com/AnjieCheng/SpatialRGPT?tab=readme-ov-file) for building dataset generation pipeline
+- [**SAM 2.1**](https://github.com/facebookresearch/sam2) for segmentation mask generation
+- [dashcam-anonymizer](https://github.com/varungupta31/dashcam_anonymizer) for anonymization