Update README.md
# STSnu

We leveraged our STSBench framework to construct a benchmark on the NuScenes dataset that probes the spatio-temporal reasoning capabilities of current expert driving models in traffic scenes.

In particular, we automatically gathered and manually verified scenarios from all 150 scenes of the validation set, considering only annotated key frames.
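The gathering pass above boils down to walking each validation scene's chain of annotated key frames. A minimal sketch of that traversal, using plain dicts in place of `nuscenes-devkit` lookups (the field names `first_sample_token`, `next`, and `anns` mirror the nuScenes schema; the tokens and values below are toy stand-ins, not real dataset records):

```python
def iter_key_frames(scene, samples):
    """Yield every annotated key frame of a scene in temporal order.

    In the nuScenes schema, a scene stores the token of its first sample
    (key frame), and samples form a linked list via the 'next' field,
    with an empty string marking the last key frame.
    """
    token = scene["first_sample_token"]
    while token:
        sample = samples[token]
        yield sample
        token = sample["next"]

# Toy stand-in for two key frames of one validation scene.
samples = {
    "s0": {"token": "s0", "next": "s1", "anns": ["a0", "a1"]},
    "s1": {"token": "s1", "next": "", "anns": ["a2"]},
}
scene = {"name": "scene-0103", "first_sample_token": "s0"}

frames = list(iter_key_frames(scene, samples))
```

With the real devkit, the same loop would run over `nusc.scene` records restricted to the 150 validation scene names, fetching each sample with `nusc.get('sample', token)`.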
In contrast to prior benchmarks, which focus primarily on ego-vehicle actions in the front view, STSnu evaluates spatio-temporal reasoning across a broader set of interactions and multiple views. This includes reasoning about other agents and their interactions with the ego-vehicle or with one another. To support this, we define four distinct scenario categories:
***Ego-vehicle scenarios.***
The first category includes all actions that relate exclusively to the ego-vehicle, such as acceleration/deceleration, left/right turns, or lane changes. Because the ego-vehicle's status and behavior are critical for control decisions and collision prevention, driving models must be aware of them. Although these scenarios appear in existing benchmarks in various forms and are relatively straightforward to detect, they provide valuable negatives for scenarios with ego-agent interactions.
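To illustrate why these maneuvers are "relatively straightforward to detect": acceleration and deceleration can be mined directly from the ego speed at consecutive key frames (nuScenes key frames are sampled at 2 Hz, i.e. 0.5 s apart). A hedged sketch with an illustrative threshold and labels of our choosing, not the benchmark's exact mining rules:

```python
def classify_ego_motion(velocities, dt=0.5, thresh=1.0):
    """Label an ego speed trace as 'accelerate', 'decelerate', or 'keep'.

    velocities: ego speeds (m/s) at consecutive key frames, dt seconds apart.
    thresh: minimum mean acceleration magnitude (m/s^2) to count as a
    maneuver (an illustrative value, not a value from the benchmark).
    """
    if len(velocities) < 2:
        return "keep"
    # Mean acceleration over the whole trace.
    accel = (velocities[-1] - velocities[0]) / (dt * (len(velocities) - 1))
    if accel > thresh:
        return "accelerate"
    if accel < -thresh:
        return "decelerate"
    return "keep"
```

For example, `classify_ego_motion([5.0, 6.0, 7.0, 8.0])` yields `"accelerate"`, while a near-constant trace yields `"keep"`; such `"keep"` traces are exactly the kind of negatives this category contributes.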