---
name: VVSim
tags:
  - aerial-ground cooperative perception
  - autonomous driving
  - trajectory prediction
license: cc-by-4.0
task_categories:
  - object-detection
  - image-segmentation
  - other
task_ids:
  - semantic-segmentation
  - vehicle-detection
---

# VVSim Dataset

VVSim is a large-scale dataset created for aerial–ground cooperative perception (AGCP). It integrates synchronized multimodal sensor data and state information collected simultaneously from vehicles and UAVs. The dataset contains 61K fully annotated frames covering 19 interaction scenarios (e.g., cut-in and lane change), 5 weather conditions (sunny, foggy, rainy, cloudy, snowy), and 11 scene types, including city, town, university, highway, and mountain environments. Beyond these frames, VVSim provides 255K LiDAR sweeps and 3.5M images (1.2M RGB, 1.2M semantic segmentation, and 1.1M depth), accompanied by detailed annotations for 2D and 3D bounding boxes, object trajectories, and agent states.

## Data Annotation

VVSim provides comprehensive annotations for both 3D and 2D perception tasks. Each object in the scene is annotated with a 3D bounding box (986K boxes in total) capturing its spatial extent and orientation in the world coordinate system. For camera-based tasks, 2D bounding boxes are generated by projecting the corresponding 3D boxes onto the image plane.
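As a sketch of how such a projection can work, the snippet below generates 3D box corners and projects them through an ideal pinhole camera expressed in the FRD camera frame described under Coordinate Systems (x forward as depth, y right, z down). The function names and the pinhole assumption are illustrative, not taken from any VVSim toolkit.

```python
import numpy as np

def box_corners(center, size, yaw):
    """Return the 8 corners of a 3D box.
    center: (x, y, z); size: (l, w, h); yaw: rotation about the vertical axis.
    """
    l, w, h = size
    # Corner offsets in the box's local frame.
    x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2.0
    y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2.0
    z = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * h / 2.0
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return (R @ np.vstack([x, y, z])).T + np.asarray(center)

def project_frd(pts_cam, fx, fy, cx, cy):
    """Project points in an FRD camera frame (x = depth, y right, z down)."""
    depth = pts_cam[:, 0]
    u = fx * pts_cam[:, 1] / depth + cx
    v = fy * pts_cam[:, 2] / depth + cy
    return np.stack([u, v], axis=1), depth

def box_2d_from_3d(corners_cam, fx, fy, cx, cy):
    """Axis-aligned 2D box from projected 3D corners (all in front of camera)."""
    uv, depth = project_frd(corners_cam, fx, fy, cx, cy)
    assert (depth > 0).all(), "box must be in front of the camera"
    return uv.min(axis=0), uv.max(axis=0)  # (u_min, v_min), (u_max, v_max)
```

For the vehicle cameras (1600 × 900 px, 90° FOV), a 90° horizontal FOV gives fx ≈ 800 px, so a 4 × 2 × 1.5 m box 10 m straight ahead projects to a roughly 200-px-wide 2D box.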

Beyond geometric information, VVSim records detailed object state attributes, including precise position (x, y, z), rotation angles (roll, pitch, yaw), and velocity vectors. All annotated objects are categorized into one of 7 classes: car, pedestrian, van, trailer, truck, traffic cone, and speed limit sign; detailed size information is shown in the table below. These rich annotations enable comprehensive evaluation of both spatial localization and motion prediction, supporting a wide range of cooperative perception tasks across aerial and ground agents. The class distribution is illustrated in the figure below.

VVSim further provides 30 semantic segmentation categories that span road structures, dynamic agents, traffic infrastructure, vegetation, buildings, and diverse environmental objects.

## Coordinate Systems

VVSim employs several coordinate systems to represent sensor data:

- **LiDAR coordinate system (FLU):** x points forward, y points left, z points upward.
- **Ego-vehicle coordinate system (FRD):** x points forward, y points right, z points downward. All vehicle-mounted sensors are defined relative to this frame.
- **Global/world coordinate system (RFU):** x points right, y points forward, z points upward.
- **Camera coordinate system (FRD):** x points forward, y points right, z points downward.
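The conventions above differ only by axis flips and permutations, so converting the coordinates of a point between them reduces to constant 3×3 matrices. A minimal sketch (the matrix names are illustrative; real sensor extrinsics additionally require the rigid sensor-to-vehicle transform):

```python
import numpy as np

# FLU (x fwd, y left, z up) -> FRD (x fwd, y right, z down): flip y and z.
FLU_TO_FRD = np.diag([1.0, -1.0, -1.0])

# FLU -> RFU (x right, y fwd, z up): right = -y_FLU, fwd = x_FLU, up = z_FLU.
FLU_TO_RFU = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])

def convert(points, M):
    """Apply a 3x3 convention matrix to an (N, 3) array of point coordinates."""
    return points @ M.T
```

For example, a LiDAR point 2 m ahead and 1 m to the left, (2, 1, 0.5) in FLU, becomes (2, -1, -0.5) in the FRD ego frame.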

## Sensor Parameters

In VVSim, both vehicle and aerial platforms are equipped with sensors to support diverse perception tasks. All sensors operate at a frame rate of 25 Hz.

Vehicle sensors:

- **Camera:** Four sets (front, rear, left, right), each consisting of RGB, depth, and semantic segmentation cameras. Resolution: 1600 × 900 px, FOV 90°. Approximate translations (m) relative to the vehicle origin: front (0.5, 0.0, -2.0), rear (-0.65, 0.0, -2.0), left (0.0, -0.5, -2.0), right (0.0, 0.5, -2.0). Camera orientations (yaw, pitch, roll, deg): front (0, 0, 0), rear (180, 0, 0), left (-90, 0, 0), right (90, 0, 0).
- **LiDAR:** One 64-beam LiDAR on top at (0.0, 0.0, -1.0) m, orientation (0, 0, 0), providing high-density 3D point clouds.
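The translations and orientations listed above can be assembled into 4×4 sensor-to-vehicle transforms. The sketch below assumes an intrinsic yaw–pitch–roll convention (applied as Rz · Ry · Rx, angles in degrees); the README lists angles in that order but does not pin down the exact convention, so treat this as an assumption to verify against the released calibration files.

```python
import numpy as np

def euler_deg_to_R(yaw, pitch, roll):
    """Rotation matrix from (yaw, pitch, roll) in degrees, applied as
    Rz @ Ry @ Rx. The exact Euler convention is an assumption here."""
    y, p, r = np.deg2rad([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    return Rz @ Ry @ Rx

def sensor_to_vehicle(translation, yaw, pitch, roll):
    """4x4 homogeneous transform from a sensor frame to the ego-vehicle frame."""
    T = np.eye(4)
    T[:3, :3] = euler_deg_to_R(yaw, pitch, roll)
    T[:3, 3] = translation
    return T

# Front camera from the list above: translation (0.5, 0.0, -2.0), no rotation.
T_front = sensor_to_vehicle([0.5, 0.0, -2.0], 0, 0, 0)
```

With this convention, the rear camera's yaw of 180° flips the x and y axes, as expected for a camera facing backwards.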

UAV sensors:

  • Camera: Two downward-facing cameras providing RGB and semantic segmentation images. Resolution: 3840 × 2160 px, FOV 90°. Translation relative to drone origin: (0.0, 0.0, 0.2) m, orientation (0, -90, 0) deg (yaw, pitch, roll).
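Since only resolution and FOV are given, pinhole intrinsics can be derived via fx = W / (2 · tan(FOV/2)). A minimal sketch, assuming square pixels (fx = fy) and a principal point at the image center, neither of which is stated in this card:

```python
import math

def pinhole_intrinsics(width, height, fov_deg):
    """Intrinsics for an ideal pinhole camera with the given horizontal FOV.
    fx = fy and a centered principal point are assumptions; the card
    specifies only resolution and FOV."""
    fx = width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return {"fx": fx, "fy": fx, "cx": width / 2.0, "cy": height / 2.0}

# UAV camera: 3840 x 2160 px at 90 deg horizontal FOV -> fx = 1920 px.
K_uav = pinhole_intrinsics(3840, 2160, 90.0)
```

The same formula gives fx = 800 px for the 1600 × 900 vehicle cameras.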