# Event-Guided Video Depth Estimation Competition
## Overview
This challenge focuses on event-guided video depth estimation in low-light scenes.
Participants are asked to use low-light RGB frames together with the corresponding event streams to predict a dense depth map for every frame of each scene. The benchmark is organized at the scene/video level so that temporal consistency is preserved across frames.
The official split follows a 6:2:2 train/val/test ratio by video count. The split assignment is encoded directly in the top-level directory structure.
## Dataset
The competition dataset is a workshop-aligned mirror of the DVD event-guided depth data.
Each scene contains aligned RGB frames, event slices, and depth supervision in the following layout:
```text
train/
  <scene_name>/
    low/
    normal/
val/
  <scene_name>/
    low/
    normal/
test/
  <scene_name>/
    low/
    normal/
```
Each scene directory includes a `manifest.json` file with the release and split metadata.
Training and validation scenes are intended for development and public benchmarking. Test labels should remain hidden during the competition phase.
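For reference, the sketch below enumerates the scenes of one split and loads each scene's `manifest.json`. It is a minimal example under stated assumptions: the manifest field names (`release`, `split`) are inferred from the description above and should be confirmed against the actual files.

```python
import json
from pathlib import Path


def list_scenes(split_root: str):
    """Yield (scene_name, manifest_dict) for every scene directory under a split."""
    for scene_dir in sorted(Path(split_root).iterdir()):
        if not scene_dir.is_dir():
            continue
        manifest_path = scene_dir / "manifest.json"
        # Load the per-scene release/split metadata if the manifest is present.
        manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
        yield scene_dir.name, manifest


if __name__ == "__main__":
    for scene, manifest in list_scenes("train"):
        # Field names are assumptions based on the "release and split metadata" description.
        print(scene, manifest.get("release"), manifest.get("split"))
```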
## Submission
Participants should submit one predicted depth sequence for each test scene.
Recommended submission layout:
```text
submission/
  <scene_name>/
    normal/
      depth.npz
```
Each `depth.npz` file should contain a depth array aligned with the scene frame order. A 4D tensor shaped `[T, H, W, 1]` is preferred for compatibility with the existing DVD tooling, although `[T, H, W]` can also be supported by the evaluation script.
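As an illustration, a minimal NumPy sketch for writing one scene's prediction file is shown below. The scene name and the archive key `depth` are assumptions for illustration only; confirm the expected key against the evaluation script.

```python
import numpy as np
from pathlib import Path

scene_name = "example_scene"  # hypothetical scene name
out_dir = Path("submission") / scene_name / "normal"
out_dir.mkdir(parents=True, exist_ok=True)

# Predicted depth for one test scene, stored in the preferred [T, H, W, 1] layout
# (T frames of H x W depth values with a trailing channel dimension).
pred = np.zeros((120, 480, 640, 1), dtype=np.float32)  # placeholder predictions

# The archive key "depth" is an assumption; check it against the official tooling.
np.savez_compressed(str(out_dir / "depth.npz"), depth=pred)
```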
## Evaluation
The current DVD evaluation code reports the following depth metrics:
- AbsRel / `abs_relative_difference`
- RMSE / `rmse_linear`
- delta1 / `delta1_acc`
- delta2 / `delta2_acc`
- delta3 / `delta3_acc`
- SILog / `silog_rmse`
For launch, we suggest selecting one primary ranking metric from the list above and reporting the rest as diagnostics.
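As a reference for participants, the sketch below implements the standard formulations of these metrics with NumPy. It is not the official DVD evaluation code, which remains authoritative and may differ in masking, clipping, or scaling details.

```python
import numpy as np


def depth_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> dict:
    """Standard depth metrics over valid (gt > 0) pixels; a sketch, not the official DVD code."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]

    abs_rel = np.mean(np.abs(pred - gt) / gt)          # AbsRel
    rmse = np.sqrt(np.mean((pred - gt) ** 2))          # RMSE (linear)

    ratio = np.maximum(pred / gt, gt / pred)           # threshold accuracies
    delta1 = np.mean(ratio < 1.25)
    delta2 = np.mean(ratio < 1.25 ** 2)
    delta3 = np.mean(ratio < 1.25 ** 3)

    log_diff = np.log(pred + eps) - np.log(gt + eps)   # scale-invariant log RMSE
    silog = np.sqrt(np.mean(log_diff ** 2) - np.mean(log_diff) ** 2)

    return dict(abs_rel=abs_rel, rmse=rmse, delta1=delta1,
                delta2=delta2, delta3=delta3, silog=silog)
```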
## Rules & Timeline
- Only use the training data and any additional resources explicitly allowed by the organizers.
- Do not inspect or annotate hidden test labels.
- Keep scene-level temporal ordering intact when producing predictions.
- Submission quota: TBD.
- Development phase: TBD.
- Final test phase: TBD.
## Terms
- Dataset license: TBD.
- Code and baseline license: TBD.
- By participating, teams agree to the competition rules and the platform terms.
- The organizers may update the submission format or evaluation script with prior notice. |