---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- 3d-object-detection
- 3d-bounding-box
- monocular-3d
- in-the-wild
- depth-estimation
pretty_name: WildDet3D-Data
size_categories:
- 1M<n<10M
---
# WildDet3D-Data: Dataset Preparation Guide

## Overview
WildDet3D-Data consists of 3D bounding box annotations for in-the-wild images from COCO, LVIS, Objects365, and V3Det. The dataset is split into:
| Split | Description | Annotation Source | Images | Annotations | Categories |
|---|---|---|---|---|---|
| Train (Human) | Human-reviewed annotations only | Human | 102,979 | 229,934 | 11,879 |
| Train (Essential) | Human + VLM-qualified small objects | Human + VLM | 102,979 | 412,711 | 12,064 |
| Train (Synthetic) | VLM auto-selected annotations | VLM | 896,004 | 3,483,292 | 11,896 |
For val/test benchmarks, see WildDet3D-Bench.
## Directory Structure
After downloading and extracting, the dataset should be organized as:
```
WildDet3D-Data/
├── README.md
├── annotations/
│   ├── InTheWild_v3_train_human_only.json  # Train (Human) — COCO, LVIS, Obj365
│   ├── InTheWild_v3_train_human.json       # Train (Essential) — COCO, LVIS, Obj365
│   ├── InTheWild_v3_train_synthetic.json   # Train (Synthetic) — COCO, LVIS, Obj365
│   ├── InTheWild_v3_v3det_human_only.json  # Train (Human) — V3Det
│   ├── InTheWild_v3_v3det_human.json       # Train (Essential) — V3Det
│   ├── InTheWild_v3_v3det_synthetic.json   # Train (Synthetic) — V3Det
│   └── InTheWild_v3_*_class_map.json       # Category mappings
├── depth/{split}/                          # Monocular depth maps (extract from .tar.gz)
│   └── {source}_{formatted_id}.npz         # float32 .npz at original resolution
├── camera/{split}/                         # Camera parameters (extract from .tar.gz)
│   └── {source}_{formatted_id}.json        # Camera intrinsics (K)
└── images/                                 # Downloaded separately (see Step 2)
    ├── coco_train/
    ├── obj365_train/
    └── v3det_train/
```
## Depth and Camera File Naming

Depth maps and camera parameters are named `{source}_{formatted_id}`, where `{source}` is derived from the image's `file_path` field in the annotation JSON:
| `file_path` | Depth / Camera filename |
|---|---|
| `images/coco_val/000000000724.jpg` | `coco_val_000000000724.npz` / `.json` |
| `images/coco_train/000000262686.jpg` | `coco_train_000000262686.npz` / `.json` |
| `images/obj365_train/obj365_train_000000628903.jpg` | `obj365_train_000000628903.npz` / `.json` |
| `images/v3det_train/Q100507578/28_284_....jpg` | `v3det_train_000000000915.npz` / `.json` |
**Note:** Some images from COCO and LVIS share the same underlying image file (LVIS uses COCO images). These appear as separate entries in the annotation JSON (with different annotations) but map to the same depth/camera file. To load the depth/camera for an image entry, extract the source prefix from `file_path.split("/")[1]` and combine it with `formatted_id`.
```python
import json

import numpy as np

# Example: load depth and camera for an image.
# `data` is a loaded annotation JSON; `split` is the depth/camera subdirectory name.
img = data["images"][0]
source = img["file_path"].split("/")[1]  # e.g., "coco_train"
fid = img["formatted_id"]                # e.g., "000000262686"

depth_mm = np.load(f"depth/{split}/{source}_{fid}.npz")["depth"]  # float32, (H, W), in mm
depth_m = depth_mm / 1000.0              # convert to meters

with open(f"camera/{split}/{source}_{fid}.json") as f:
    camera = json.load(f)
```
### Depth Format

Each `.npz` file contains a single key, `"depth"`, holding a float32 2D array at the original image resolution. Values are in millimeters (mm); to convert to meters: `depth_m = depth_mm / 1000.0`.
### Camera Format

Each `.json` file contains:
```json
{
  "K": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
  "image_size": [height, width]
}
```
- `K`: Camera intrinsic matrix (3x3), at original image resolution
- `image_size`: `[height, width]` of the original image
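With `K` and a depth map, pixels can be lifted into camera coordinates. Below is a minimal sketch assuming a standard pinhole model; the file paths are hypothetical examples of the `{source}_{formatted_id}` naming.

```python
import json

import numpy as np

# Hypothetical example paths; substitute any {source}_{formatted_id} pair
depth_mm = np.load("depth/val/coco_val_000000000724.npz")["depth"]
cam = json.load(open("camera/val/coco_val_000000000724.json"))

K = np.array(cam["K"])
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

# Back-project every pixel to a 3D point in camera coordinates
H, W = depth_mm.shape
u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid, both (H, W)
z = depth_mm / 1000.0                           # mm -> meters
points = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)  # (H, W, 3)
```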
## Step 1: Download and Extract
```bash
pip install huggingface_hub

# Download only annotations
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --include "annotations/*" --local-dir WildDet3D-Data

# Download specific splits (e.g., val only)
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --include "packed/depth_val.tar.gz" "packed/camera_val.tar.gz" --local-dir WildDet3D-Data

# Download everything
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --local-dir WildDet3D-Data
```
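The same downloads can be scripted from Python with `huggingface_hub`; `allow_patterns` mirrors `--include`:

```python
from huggingface_hub import snapshot_download

# Fetch only the annotation JSONs (drop allow_patterns to download everything)
snapshot_download(
    repo_id="weikaih/WildDet3D-Data",
    repo_type="dataset",
    allow_patterns=["annotations/*"],
    local_dir="WildDet3D-Data",
)
```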
### Extract Depth Maps

Depth maps are provided as compressed archives; large splits are divided into multiple parts.
```bash
mkdir -p depth && cd depth

# Train Human (2 parts)
tar xzf ../packed/depth_train_human_part000.tar.gz
tar xzf ../packed/depth_train_human_part001.tar.gz

# V3Det Human (single file)
tar xzf ../packed/depth_v3det_human.tar.gz

# V3Det Synthetic (7 parts)
for i in $(seq 0 6); do
  tar xzf "../packed/$(printf 'depth_v3det_synthetic_part%03d.tar.gz' "$i")"
done

# Train Synthetic (16 parts); plain `seq` avoids zero-padded numbers
# that printf would reject as octal
for i in $(seq 0 15); do
  tar xzf "../packed/$(printf 'depth_train_synthetic_part%03d.tar.gz' "$i")"
done

cd ..
```
### Extract Camera Parameters
```bash
mkdir -p camera && cd camera
for f in ../packed/camera_*.tar.gz; do tar xzf "$f"; done
cd ..
```
After extraction, you should have `depth/{split}/` and `camera/{split}/` directories with individual files per image.
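A quick way to sanity-check the extraction is to count files per split directory, a sketch:

```python
from pathlib import Path

# Count extracted files under depth/ and camera/, one line per split
for kind in ("depth", "camera"):
    for split_dir in sorted(Path(kind).iterdir()):
        if split_dir.is_dir():
            print(f"{kind}/{split_dir.name}: {sum(1 for _ in split_dir.iterdir())} files")
```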
## Step 2: Download Source Images
Images must be downloaded from their original sources and organized into the following structure:
```
images/
├── coco_train/    # COCO train2017 (includes LVIS images)
├── obj365_train/  # Objects365 training
└── v3det_train/   # V3Det training
```
### COCO train2017
```bash
wget http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip
mkdir -p images/coco_train
mv train2017/* images/coco_train/
```
### Objects365
```bash
# Objects365 — download from https://www.objects365.org/
mkdir -p images/obj365_train
# Images should be named: obj365_train_000000XXXXXX.jpg
```
### V3Det

Used by: Train V3Det splits only.
```bash
# V3Det — download from https://v3det.openxlab.org.cn/
mkdir -p images/v3det_train
# Directory structure: images/v3det_train/{category_folder}/{image}.jpg
# e.g., images/v3det_train/Q100507578/28_284_50119550013_7d06ded882_c.jpg
```
| Source | Directory |
|---|---|
| COCO train2017 | `images/coco_train/` |
| Objects365 train | `images/obj365_train/` |
| V3Det train | `images/v3det_train/` |
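After organizing the images, a short sketch for checking that every image referenced by an annotation file exists on disk (run from the dataset root; any of the annotation files listed in the directory structure works):

```python
import json
from pathlib import Path

data = json.load(open("annotations/InTheWild_v3_train_human_only.json"))

# file_path is relative to the dataset root, e.g. "images/coco_train/000000262686.jpg"
missing = [img["file_path"] for img in data["images"] if not Path(img["file_path"]).exists()]
print(f"{len(missing)} of {len(data['images'])} referenced images are missing")
```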
## Annotation Format (COCO3D)
Each annotation JSON follows the COCO3D format:
```json
{
  "info": {"name": "InTheWild_v3_val"},
  "images": [{
    "id": 0,
    "width": 375,
    "height": 500,
    "file_path": "images/coco_val/000000000724.jpg",
    "K": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
  }],
  "categories": [{"id": 0, "name": "stop sign"}],
  "annotations": [{
    "id": 0,
    "image_id": 0,
    "category_id": 0,
    "category_name": "stop sign",
    "bbox2D_proj": [x1, y1, x2, y2],
    "center_cam": [cx, cy, cz],
    "dimensions": [width, height, length],
    "R_cam": [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]],
    "bbox3D_cam": [[x, y, z], ...],
    "valid3D": true
  }]
}
```
Image fields:

- `K`: Camera intrinsic matrix (3x3), at original image resolution
- `file_path`: Relative path to the source image
Annotation fields:

- `valid3D`: `true` = valid 3D annotation; `false` = the 3D box was filtered out (see note below)
- `center_cam`: 3D box center in camera coordinates (meters)
- `dimensions`: `[width, height, length]` in meters (Omni3D convention)
- `R_cam`: 3x3 rotation matrix in camera coordinates (gravity-aligned, local Y = up)
- `bbox3D_cam`: 8 corner points of the 3D bounding box in camera coordinates
- `bbox2D_proj`: 2D bounding box `[x1, y1, x2, y2]` at original image resolution
**Important: `valid3D` filtering.** Each annotation always has a valid 2D bounding box (`bbox2D_proj`), but the 3D box fields (`center_cam`, `dimensions`, `R_cam`, `bbox3D_cam`) should only be used when `valid3D` is `true`. Annotations with `valid3D=false` have 3D boxes that were filtered out by quality checks (human rejection, size/geometry filtering, or depiction filtering); their 3D fields contain placeholder values and should be ignored. The annotation counts in the overview table refer to `valid3D=true` annotations only.
For training, filter annotations by `valid3D`:

```python
for ann in data["annotations"]:
    if ann["valid3D"]:
        # Use both the 2D and 3D annotations
        ...
    else:
        # The 2D box is still valid, but skip the 3D box
        ...
```
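The stored `bbox3D_cam` corners should be consistent with `center_cam`, `dimensions`, and `R_cam`. The sketch below rebuilds the corners from those fields as a geometry sanity check; the axis-to-dimension mapping and the corner ordering are assumptions and may differ from the stored order.

```python
import numpy as np

def corners_from_box(center, dims, R):
    """Rebuild the 8 corners of a 3D box in camera coordinates.

    Assumes dims = [width, height, length] map to the local x/y/z axes
    (Omni3D convention); the corner ordering here is arbitrary and may
    not match the order stored in bbox3D_cam.
    """
    half = 0.5 * np.asarray(dims, dtype=np.float64)
    signs = np.array([[sx, sy, sz]
                      for sx in (-1, 1)
                      for sy in (-1, 1)
                      for sz in (-1, 1)], dtype=np.float64)
    local = signs * half  # (8, 3) corners in the box frame
    return local @ np.asarray(R).T + np.asarray(center)

# Rough unordered comparison against the stored corners (`data` as loaded above)
ann = next(a for a in data["annotations"] if a["valid3D"])
rebuilt = corners_from_box(ann["center_cam"], ann["dimensions"], ann["R_cam"])
stored = np.asarray(ann["bbox3D_cam"])
print(np.abs(np.sort(rebuilt, axis=0) - np.sort(stored, axis=0)).max())
```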
## Which Files to Use
| Use Case | Annotation Files |
|---|---|
| Train (Human only) | `InTheWild_v3_train_human_only.json` + `InTheWild_v3_v3det_human_only.json` |
| Train (Essential) | `InTheWild_v3_train_human.json` + `InTheWild_v3_v3det_human.json` |
| Train (Synthetic) | `InTheWild_v3_train_synthetic.json` + `InTheWild_v3_v3det_synthetic.json` |
| Train (All) | Essential + Synthetic (all four files) |
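For Train (All), the Essential and Synthetic files need to be concatenated without id collisions. A naive merging sketch follows; it assumes the files share a consistent category id space, which should be verified against the `*_class_map.json` files:

```python
import json

def merge_coco3d(paths):
    """Naively concatenate COCO3D files, offsetting image and annotation ids.

    Assumes a shared category id space across files (verify with the
    *_class_map.json files before training).
    """
    merged = {"info": {"name": "merged"}, "images": [], "categories": [], "annotations": []}
    seen_cats = set()
    img_off = ann_off = 0
    for path in paths:
        data = json.load(open(path))
        for cat in data["categories"]:
            if cat["id"] not in seen_cats:
                seen_cats.add(cat["id"])
                merged["categories"].append(cat)
        for img in data["images"]:
            img["id"] += img_off
            merged["images"].append(img)
        for ann in data["annotations"]:
            ann["id"] += ann_off
            ann["image_id"] += img_off
            merged["annotations"].append(ann)
        img_off = max(i["id"] for i in merged["images"]) + 1
        ann_off = max(a["id"] for a in merged["annotations"]) + 1
    return merged
```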
## License
- Annotations: CC BY 4.0