
Event-Guided Video Depth Estimation Competition

Overview

This challenge focuses on event-guided video depth estimation in low-light scenes.

Participants are asked to use low-light RGB frames together with event streams to predict dense depth maps for each scene. The benchmark is organized at the scene/video level so that temporal consistency is preserved across frames.

The official split follows a 6:2:2 train/val/test ratio by video count. The split assignment is encoded directly in the top-level directory structure.

Dataset

The competition dataset is a workshop-aligned mirror of the DVD event-guided depth data.

Each scene contains aligned RGB frames, event slices, and depth supervision in the following layout:

train/
  <scene_name>/
    low/
    normal/
val/
  <scene_name>/
    low/
    normal/
test/
  <scene_name>/
    low/
    normal/

Each scene directory includes a manifest.json file with the release and split metadata.
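Because the split assignment is encoded in the top-level directory structure, scenes can be enumerated directly from the filesystem. The sketch below is a minimal, hypothetical helper (not part of the official DVD tooling); the manifest field names are whatever the release actually ships:

```python
import json
from pathlib import Path

def list_scenes(root, split):
    """Enumerate scene directories under one split ("train", "val", or "test")."""
    return sorted(p for p in (Path(root) / split).iterdir() if p.is_dir())

def load_manifest(scene_dir):
    """Load the per-scene manifest.json; the exact fields are release metadata."""
    with open(Path(scene_dir) / "manifest.json") as f:
        return json.load(f)
```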

Training and validation scenes are intended for development and public benchmarking. Test labels should remain hidden during the competition phase.

Submission

Participants should submit one predicted depth sequence for each test scene.

Recommended submission layout:

submission/
  <scene_name>/
    normal/
      depth.npz

Each depth.npz file should contain a depth array whose first axis matches the scene's frame order. A 4D tensor of shape [T, H, W, 1] is preferred for compatibility with the existing DVD tooling; [T, H, W] arrays are also accepted by the evaluation script.
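A minimal sketch of writing one scene's prediction in the preferred layout. The array key `depth` and the helper name are assumptions for illustration, not a confirmed part of the official evaluation script:

```python
import numpy as np

def save_depth_prediction(path, depth):
    """Save a per-scene depth sequence as depth.npz.

    Accepts [T, H, W] or [T, H, W, 1]; always writes the preferred
    4D [T, H, W, 1] layout. The "depth" key is an assumed convention.
    """
    depth = np.asarray(depth, dtype=np.float32)
    if depth.ndim == 3:
        depth = depth[..., None]  # promote [T, H, W] -> [T, H, W, 1]
    assert depth.ndim == 4 and depth.shape[-1] == 1
    np.savez_compressed(path, depth=depth)
```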

Evaluation

The current DVD evaluation code reports the following depth metrics:

  • AbsRel / abs_relative_difference
  • RMSE / rmse_linear
  • delta1 / delta1_acc
  • delta2 / delta2_acc
  • delta3 / delta3_acc
  • SILog / silog_rmse

For launch, we suggest selecting one primary ranking metric from the list above and reporting the rest as diagnostics.
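The metrics above follow standard monocular-depth definitions. The sketch below uses those textbook formulas; the official DVD script may differ in details such as valid-pixel masking, clipping, or strict-vs-non-strict thresholds, so treat this as a reference implementation, not the ranking code:

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Standard depth metrics over valid (positive) ground-truth pixels."""
    pred = np.asarray(pred, dtype=np.float64).ravel()
    gt = np.asarray(gt, dtype=np.float64).ravel()
    valid = (gt > eps) & (pred > eps)
    pred, gt = pred[valid], gt[valid]

    abs_rel = np.mean(np.abs(pred - gt) / gt)           # AbsRel
    rmse = np.sqrt(np.mean((pred - gt) ** 2))           # RMSE (linear)
    ratio = np.maximum(pred / gt, gt / pred)            # threshold accuracies
    d1 = np.mean(ratio < 1.25)
    d2 = np.mean(ratio < 1.25 ** 2)
    d3 = np.mean(ratio < 1.25 ** 3)
    log_diff = np.log(pred) - np.log(gt)                # scale-invariant log RMSE
    silog = np.sqrt(np.mean(log_diff ** 2) - np.mean(log_diff) ** 2)

    return {"AbsRel": abs_rel, "RMSE": rmse,
            "delta1": d1, "delta2": d2, "delta3": d3, "SILog": silog}
```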

Rules & Timeline

  • Use only the training data and any additional resources explicitly allowed by the organizers.
  • Do not inspect or annotate hidden test labels.
  • Keep scene-level temporal ordering intact when producing predictions.
  • Submission quota: TBD.
  • Development phase: TBD.
  • Final test phase: TBD.

Terms

  • Dataset license: TBD.
  • Code and baseline license: TBD.
  • By participating, teams agree to the competition rules and the platform terms.
  • The organizers may update the submission format or evaluation script with prior notice.