<!--
 * @Author: LOTEAT
 * @Date: 2025-11-18 11:15:49
-->
# VVSim Dataset

**VVSim** is a large-scale dataset for aerial–ground cooperative perception (AGCP), integrating synchronized multimodal sensor data and state information collected simultaneously from vehicles and UAVs. The dataset contains **61K** fully annotated frames covering **19** interaction scenarios (e.g., cut-in and lane change), **5** weather conditions (sunny, foggy, rainy, cloudy, snowy), and **11** scene types, including city, town, university, highway, and mountain environments. Beyond these frames, VVSim provides **255K** LiDAR sweeps and **3.5M** images (**1.2M** RGB, **1.2M** semantic segmentation, and **1.1M** depth), accompanied by detailed annotations for 2D and 3D bounding boxes, object trajectories, and agent states.

## Data Annotation

VVSim provides comprehensive annotations for both 3D and 2D perception tasks. Each object in the scene is annotated with a **3D bounding box** (**986K** in total), capturing its spatial extent and orientation in the world coordinate system. For camera-based tasks, **2D bounding boxes** are generated by projecting the corresponding 3D boxes onto the image plane.
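As a rough sketch of that 3D-to-2D projection (not the dataset's own tooling), the snippet below pushes a box's eight corners through an ideal pinhole camera and takes their bounding rectangle. The intrinsics are assumed from the front vehicle camera described under Sensor Parameters (1600 × 900 px, 90° FOV), and points are expressed in the FRD camera frame (x forward, y right, z down) stated under Coordinate Systems:

```python
import numpy as np

# Assumed pinhole intrinsics from the front vehicle camera spec
# (1600 x 900 px, 90 deg horizontal FOV): f = W / (2 * tan(FOV / 2)).
W, H = 1600, 900
f = W / (2 * np.tan(np.deg2rad(90) / 2))  # = 800 px
cx, cy = W / 2, H / 2

def project_frd(points):
    """Project Nx3 points in the FRD camera frame onto the image plane.
    Depth is x (forward); the image u axis follows y (right), v follows z (down)."""
    pts = np.asarray(points, dtype=float)
    u = f * pts[:, 1] / pts[:, 0] + cx
    v = f * pts[:, 2] / pts[:, 0] + cy
    return np.stack([u, v], axis=1)

def bbox_2d(corners):
    """Axis-aligned 2D box (u_min, v_min, u_max, v_max) from 3D corners."""
    uv = project_frd(corners)
    return (*uv.min(axis=0), *uv.max(axis=0))

# Example: a 2 m cube centred 10 m directly ahead of the camera
center = np.array([10.0, 0.0, 0.0])
offsets = np.array([[sx, sy, sz] for sx in (-1, 1)
                                 for sy in (-1, 1)
                                 for sz in (-1, 1)])
corners = center + offsets
print(bbox_2d(corners))
```

A production pipeline would also clip the resulting box to the image bounds and drop corners behind the camera; both steps are omitted here for brevity.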

Beyond geometric information, VVSim records detailed object state attributes, including precise position `(x, y, z)`, rotation angles (roll, pitch, yaw), and velocity vectors. All annotated objects fall into one of **7** classes: **car, pedestrian, van, trailer, truck, traffic cone,** and **speed limit sign**; detailed size information is given in the table below, and the class distribution is illustrated in the figure below. These rich annotations enable evaluation of both spatial localization and motion prediction, supporting a wide range of cooperative perception tasks across aerial and ground agents.

VVSim further provides **30** semantic segmentation categories spanning road structures, dynamic agents, traffic infrastructure, vegetation, buildings, and diverse environmental objects.

## Coordinate Systems

VVSim employs several coordinate systems to represent sensor data:

- **LiDAR coordinate system (FLU):** x points forward, y points left, z points upward.
- **Ego-vehicle coordinate system (FRD):** x points forward, y points right, z points downward. All vehicle-mounted sensors are defined relative to this frame.
- **Global/world coordinate system (RFU):** x points right, y points forward, z points upward.
- **Camera coordinate system (FRD):** x points forward, y points right, z points downward.
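Moving vectors between these conventions is a fixed axis permutation and sign flip (translations between sensor origins are handled by the extrinsics). A minimal sketch of the rotation part, derived only from the axis definitions above:

```python
import numpy as np

# Rotations that re-express a vector in the FLU LiDAR frame.
FLU_FROM_FRD = np.diag([1.0, -1.0, -1.0])    # flip y (right -> left) and z (down -> up)
FLU_FROM_RFU = np.array([[0.0, 1.0, 0.0],    # RFU y (forward) -> FLU x
                         [-1.0, 0.0, 0.0],   # RFU x (right)   -> FLU -y
                         [0.0, 0.0, 1.0]])   # z (up) is shared

def frd_to_flu(v):
    """Re-express an FRD vector (x fwd, y right, z down) in FLU axes."""
    return FLU_FROM_FRD @ np.asarray(v, dtype=float)

def rfu_to_flu(v):
    """Re-express an RFU vector (x right, y fwd, z up) in FLU axes."""
    return FLU_FROM_RFU @ np.asarray(v, dtype=float)
```

Both matrices are orthonormal, so the inverse mappings are just their transposes.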

## Sensor Parameters

In VVSim, both vehicle and aerial platforms are equipped with sensors to support diverse perception tasks. All sensors operate at a frame rate of 25 Hz.

**Vehicle sensors:**

- **Camera:** Four sets (front, rear, left, right), each consisting of RGB, depth, and semantic segmentation cameras. Resolution: 1600 × 900 px; FOV: 90°. Approximate translations (m) relative to the vehicle origin: front (0.5, 0.0, -2.0), rear (-0.65, 0.0, -2.0), left (0.0, -0.5, -2.0), right (0.0, 0.5, -2.0). Orientations (yaw, pitch, roll, deg): front (0, 0, 0), rear (180, 0, 0), left (-90, 0, 0), right (90, 0, 0).
- **LiDAR:** One 64-beam LiDAR mounted on top at (0.0, 0.0, -1.0) m, orientation (0, 0, 0), providing high-density 3D point clouds.
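The translations and (yaw, pitch, roll) triples above can be packed into 4 × 4 homogeneous sensor-to-vehicle transforms. The sketch below assumes an intrinsic z-y-x rotation order (yaw about z, then pitch about y, then roll about x); Euler conventions vary, so verify this order against the released calibration files before relying on it:

```python
import numpy as np

def pose_to_matrix(yaw, pitch, roll, t):
    """4x4 homogeneous transform from Euler angles (deg) and translation (m).
    Assumed order: yaw about z, then pitch about y, then roll about x."""
    y, p, r = np.deg2rad([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx  # combined rotation, intrinsic z-y-x
    T[:3, 3] = t
    return T

# Rear camera from the list above: yaw 180 deg, mounted at (-0.65, 0.0, -2.0) m
T_rear = pose_to_matrix(180, 0, 0, [-0.65, 0.0, -2.0])
```

With this transform, a point one metre ahead of the rear camera maps to a point behind the vehicle origin, as expected for a rear-facing sensor.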

**UAV sensors:**

- **Camera:** Two downward-facing cameras providing RGB and semantic segmentation images. Resolution: 3840 × 2160 px; FOV: 90°. Translation relative to the drone origin: (0.0, 0.0, 0.2) m; orientation (0, -90, 0) deg (yaw, pitch, roll).
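For either camera type, an ideal pinhole focal length in pixels follows from the resolution and FOV via f = W / (2 tan(FOV / 2)). A quick check, assuming the stated 90° FOV is horizontal:

```python
import math

def focal_px(width_px, fov_deg):
    """Pinhole focal length in pixels from image width and horizontal FOV."""
    return width_px / (2 * math.tan(math.radians(fov_deg) / 2))

print(focal_px(1600, 90))  # vehicle cameras, 1600 x 900
print(focal_px(3840, 90))  # UAV cameras, 3840 x 2160
```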