---
task_categories:
- object-detection
- depth-estimation
tags:
- 3d-object-detection
- 3d-bounding-box
- point-cloud
- monocular-3d
pretty_name: WildDet3D Visualization Data
---
# WildDet3D Visualization Data
This repository hosts the visualization data for the WildDet3D-Bench benchmark — a human-annotated evaluation set for monocular 3D object detection in the wild.
## Dataset Overview
WildDet3D-Bench is a validation set of 2,470 images drawn from three source datasets, with 9,256 human-verified 3D bounding box annotations across 2,196 images.
| Source | Images | Description |
|---|---|---|
| COCO Val | 424 | MS-COCO 2017 validation |
| LVIS Train | 1,113 | LVIS v1.0 (COCO train images) |
| Objects365 Val | 933 | Objects365 v2 validation |
| Total | 2,470 | |
Each annotation has exactly one human-selected 3D bounding box, chosen from candidates generated by multiple 3D estimation algorithms (LA3D, SAM3D, Algorithm, DetAny3D, 3D-MooD) and validated through a multi-stage pipeline of crowdsourced annotation, quality control, human rejection review, and geometric filtering.
## Repository Structure
```
.
├── data/                  # WildDet3D-Bench ground truth (for benchmark visualization)
│   ├── index.json         # Master index with image metadata and scene hierarchy
│   ├── boxes/             # Per-image JSON: 2D/3D boxes, categories, quality flags
│   ├── images/            # Super-resolution images (4× upscaled)
│   ├── images_annotated/  # Thumbnails with pre-rendered 3D box overlays
│   ├── camera/            # Camera intrinsic parameters
│   └── pointclouds/       # PLY point clouds (~250k points each)
│
└── model/                 # Model predictions on WildDet3D-Bench (for model comparison visualization)
    ├── images/            # Images with model prediction overlays
    ├── box/               # Per-image model prediction boxes
    └── text/              # Per-image model prediction metadata
```
### `data/` — Benchmark Ground Truth
Contains the full WildDet3D-Bench validation set with human-annotated 3D bounding boxes (a loading sketch follows this list):
- 2,196 images with at least one valid 3D annotation (274 images filtered out)
- Per-image box data includes: 2D boxes (in 4× SR coordinates), 3D boxes (10D: center + dimensions + quaternion), category names, `ignore3D` flags, and human quality ratings
- Point clouds reconstructed from monocular depth estimation
- Annotated thumbnails with 3D boxes projected onto images, colored by object category
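For orientation, here is a minimal Python loading sketch. It assumes the layout above; the JSON key names used below (`images`, `id`, `annotations`, `box_3d`, `category`, `ignore3D`) are illustrative guesses rather than a documented schema, and reading the PLY point clouds is shown with the third-party `plyfile` package.

```python
import json
from pathlib import Path

import numpy as np
from plyfile import PlyData  # pip install plyfile

ROOT = Path("data")

# Master index with image metadata and scene hierarchy.
index = json.loads((ROOT / "index.json").read_text())

# NOTE: the key names below are illustrative guesses, not a documented
# schema -- inspect index.json and one boxes/ file for the real fields.
image_id = index["images"][0]["id"]

# Per-image annotation record: 2D/3D boxes, categories, quality flags.
record = json.loads((ROOT / "boxes" / f"{image_id}.json").read_text())
for ann in record["annotations"]:
    if ann.get("ignore3D"):   # quality flag: skip unreliable boxes
        continue
    print(ann["category"], ann["box_3d"])  # box_3d is the 10D array described below

# Point cloud reconstructed from monocular depth (~250k points).
ply = PlyData.read(str(ROOT / "pointclouds" / f"{image_id}.ply"))
v = ply["vertex"]
points = np.stack([v["x"], v["y"], v["z"]], axis=-1)
print(points.shape)
```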
### `model/` — Model Predictions
Contains predictions from different 3D detection models evaluated on the benchmark, used by a separate model comparison visualization server.
## 3D Box Format
Each 3D bounding box is represented as a 10-element array:
```
[cx, cy, cz, w, h, l, qw, qx, qy, qz]
```
| Field | Description |
|---|---|
| `cx, cy, cz` | Box center in camera coordinates (meters) |
| `w, h, l` | Box dimensions (meters) |
| `qw, qx, qy, qz` | Rotation as unit quaternion |
Coordinate system: OpenCV camera convention (X-right, Y-down, Z-forward).
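To make the 10D format concrete, the numpy sketch below converts a box to its 8 corners in camera coordinates and projects them with pinhole intrinsics. It assumes the common convention that `w`, `h`, `l` are extents along the box's local x, y, z axes; the card does not spell that mapping out, so verify it against the pre-rendered overlays. The intrinsics `fx, fy, cx0, cy0` stand in for whatever `data/camera/` stores.

```python
import numpy as np

def quat_to_rot(qw: float, qx: float, qy: float, qz: float) -> np.ndarray:
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def box10_to_corners(box: np.ndarray) -> np.ndarray:
    """10D box [cx, cy, cz, w, h, l, qw, qx, qy, qz] -> (8, 3) corners
    in OpenCV camera coordinates (X-right, Y-down, Z-forward).

    ASSUMPTION: w/h/l are extents along the box's local x/y/z axes."""
    center, dims, quat = box[:3], box[3:6], box[6:]
    # All +/- half-extent combinations give the 8 local corners.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)])
    local = signs * (dims / 2.0)
    return local @ quat_to_rot(*quat).T + center

def project(points: np.ndarray, fx, fy, cx0, cy0) -> np.ndarray:
    """Pinhole projection of (N, 3) camera-space points to (N, 2) pixels."""
    u = fx * points[:, 0] / points[:, 2] + cx0
    v = fy * points[:, 1] / points[:, 2] + cy0
    return np.stack([u, v], axis=-1)
```

For example, `project(box10_to_corners(np.asarray(ann["box_3d"])), fx, fy, cx0, cy0)` yields pixel coordinates in the same 4× SR frame as the 2D boxes, assuming the stored intrinsics correspond to the upscaled images.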
## Annotation Pipeline
1. Monocular depth estimation — per-pixel depth maps
2. 4× super-resolution — higher-quality point clouds
3. Multi-algorithm 3D box generation — candidate boxes per 2D detection
4. VLM scoring — automated quality scoring (6 criteria, 0–12 total)
5. Human annotation (Prolific) — workers select the best candidate and rate its quality
6. Human rejection review — second-pass review of selected boxes
7. Geometric filtering — GPT-estimated size validation and depth-ratio checks (sketched below)
8. Composite image removal — filtering of collage/grid images
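The geometric filtering step is described only at a high level; the sketch below shows the general shape of such a check. Everything in it is illustrative: the size prior, the depth-ratio definition, and the thresholds are placeholders, not the benchmark's actual rules or values.

```python
import numpy as np

def passes_geometric_filter(box10, size_prior, depth_map,
                            size_tol=3.0, depth_ratio_max=2.0):
    """Illustrative size/depth-ratio check (NOT the benchmark's real rules).

    box10:      [cx, cy, cz, w, h, l, qw, qx, qy, qz]
    size_prior: plausible max extent in meters for the object's category
                (standing in for the GPT-estimated size)
    depth_map:  per-pixel monocular depth for the image
    """
    cz = box10[2]
    extents = np.asarray(box10[3:6])

    # Size validation: reject boxes wildly larger than the category prior.
    if extents.max() > size_tol * size_prior:
        return False

    # Depth-ratio check: box center depth should be commensurate with the
    # depth observed in the image (placeholder: median scene depth).
    scene_depth = float(np.median(depth_map))
    ratio = max(cz, 1e-6) / max(scene_depth, 1e-6)
    return (1.0 / depth_ratio_max) <= ratio <= depth_ratio_max
```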