Tasks: Question Answering
Modalities: Image
Formats: imagefolder
Languages: English
Size: 1K - 10K
Tags: turing

Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ The foundation layer asks questions about spatial relations and immediate intera

Building on the object-centric layer, every question is phrased in the ego coordinate frame so that answers are directly actionable for planning and control.

3. Prediction-oriented queries

Building on the ego-aware layer, we introduce an additional subset of queries that require the model to anticipate the ego vehicle’s spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example:

-
+ “What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?”

Together these elements make STRIDE-QA-Mini a concise yet challenging dataset that pushes VLMs to handle not only what they *see* but also what they must *predict*, skills essential for safe and intelligent autonomous systems.
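To make the clock-position convention in these prediction-oriented queries concrete, here is a minimal illustrative sketch (a helper of our own, not part of the dataset tooling; it assumes bearings are given in degrees, measured clockwise from the ego vehicle’s heading):

```python
def bearing_to_clock(bearing_deg: float) -> int:
    """Map a relative bearing to a clock position (12 = front, 3 = right,
    6 = rear, 9 = left).

    Illustrative sketch only: assumes bearings are in degrees, measured
    clockwise from the ego vehicle's heading (0 deg = straight ahead).
    """
    hour = round((bearing_deg % 360) / 30) % 12  # 30 degrees per clock hour
    return 12 if hour == 0 else hour


assert bearing_to_clock(0) == 12    # directly ahead
assert bearing_to_clock(90) == 3    # to the right
assert bearing_to_clock(180) == 6   # behind
assert bearing_to_clock(270) == 9   # to the left
```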
@@ -62,10 +62,6 @@ Together these elements make STRIDE-QA-Mini a concise yet challenging dataset th

| `region` | `list[list[int]]` | Region tags mentioned in the prompt. |
| `qa_info` | `list` | Metadata for each message turn in the dialogue. |

-## Usage
-
-A minimal loading example is provided in [`tutorial.ipynb`](https://www.notion.so/turing-motors/tutorial.ipynb)
-
## Privacy Protection

To ensure privacy protection, human faces and license plates in STRIDE-QA-Mini images were anonymized using the [Dashcam Anonymizer](https://github.com/varungupta31/dashcam_anonymizer).
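For reference, a minimal loading sketch for the fields described in the table above. The repository id `turing-motors/STRIDE-QA-Mini`, the `train` split, and the `image` column name are assumptions based on the card’s metadata, not details confirmed by the README:

```python
from datasets import load_dataset

# Minimal loading sketch. The repo id, split name, and "image" column are
# assumptions inferred from the card metadata, not confirmed by the README.
ds = load_dataset("turing-motors/STRIDE-QA-Mini", split="train")

sample = ds[0]
sample["image"]    # PIL image (imagefolder format)
sample["region"]   # list[list[int]]: region tags mentioned in the prompt
sample["qa_info"]  # list: metadata for each message turn in the dialogue
```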