# WildDet3D-Data: Dataset Preparation Guide
## Overview
WildDet3D-Data consists of 3D bounding box annotations for in-the-wild images from COCO, LVIS, Objects365, and V3Det. The dataset is split into:
| Split | Description | Annotation Source | Images | Annotations | Categories |
|---|---|---|---|---|---|
| Val | Validation set | Human | 2,470 | 9,256 | 785 |
| Test | Test set | Human | 2,433 | 5,596 | 633 |
| Train (Human) | Human-reviewed annotations only | Human | 102,979 | 229,934 | 11,879 |
| Train (Essential) | Human + VLM-qualified small objects | Human + VLM | 102,979 | 412,711 | 12,064 |
| Train (Synthetic) | VLM auto-selected annotations | VLM | 896,004 | 3,483,292 | 11,896 |
| Total | | | 1,003,886 | 3,910,855 | 13,499 |
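Once you have downloaded an annotation file (Step 1), the counts above can be checked directly from the JSON. A minimal sketch, assuming the val file and the COCO3D fields described later in this guide; note the Categories column may be computed slightly differently (e.g., only counting categories that actually appear in the split):

```python
import json

data = json.load(open("annotations/InTheWild_v3_val.json"))

print(len(data["images"]), "images")
print(len(data["annotations"]), "annotations")
print(len({a["category_id"] for a in data["annotations"]}), "categories with at least one annotation")
```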
## Directory Structure
After downloading and extracting, the dataset should be organized as:
```
WildDet3D-Data/
├── README.md
├── annotations/
│   ├── InTheWild_v3_val.json                  # Val
│   ├── InTheWild_v3_test.json                 # Test
│   ├── InTheWild_v3_train_human_only.json     # Train (Human) – COCO, LVIS, Obj365
│   ├── InTheWild_v3_train_human.json          # Train (Essential) – COCO, LVIS, Obj365
│   ├── InTheWild_v3_train_synthetic.json      # Train (Synthetic) – COCO, LVIS, Obj365
│   ├── InTheWild_v3_v3det_human_only.json     # Train (Human) – V3Det
│   ├── InTheWild_v3_v3det_human.json          # Train (Essential) – V3Det
│   ├── InTheWild_v3_v3det_synthetic.json      # Train (Synthetic) – V3Det
│   └── InTheWild_v3_*_class_map.json          # Category mappings
├── depth/{split}/                             # Monocular depth maps (extract from .tar.gz)
│   └── {source}_{formatted_id}.npz            # float32 .npz at original resolution
├── camera/{split}/                            # Camera parameters (extract from .tar.gz)
│   └── {source}_{formatted_id}.json           # Camera intrinsics (K)
└── images/                                    # Downloaded separately (see Step 2)
    ├── coco_val/
    ├── coco_train/
    ├── obj365_val/
    ├── obj365_train/
    └── v3det_train/
```
## Depth and Camera File Naming
Depth maps and camera parameters are named as {source}_{formatted_id}, where {source} is derived from the image's file_path field in the annotation JSON:
| file_path | Depth / Camera filename |
|---|---|
| images/coco_val/000000000724.jpg | coco_val_000000000724.npz/.json |
| images/coco_train/000000262686.jpg | coco_train_000000262686.npz/.json |
| images/obj365_train/obj365_train_000000628903.jpg | obj365_train_000000628903.npz/.json |
| images/v3det_train/Q100507578/28_284_....jpg | v3det_train_000000000915.npz/.json |
Note: Some images from COCO and LVIS share the same underlying image file (LVIS uses COCO images). These appear as separate entries in the annotation JSON (with different annotations) but map to the same depth/camera file. To load the depth/camera for an image entry, extract the source prefix from file_path.split("/")[1] and combine with formatted_id.
```python
import json
import numpy as np

# Example: load depth and camera for an image (val split shown; adjust to your split)
split = "val"
data = json.load(open("annotations/InTheWild_v3_val.json"))

img = data["images"][0]
source = img["file_path"].split("/")[1]                          # e.g., "coco_train"
fid = img["formatted_id"]                                        # e.g., "000000262686"
depth = np.load(f"depth/{split}/{source}_{fid}.npz")["depth"]    # float32, (H, W), meters
camera = json.load(open(f"camera/{split}/{source}_{fid}.json"))  # {"K": ..., "image_size": ...}
```
### Depth Format
Each .npz file contains a single key "depth" with a float32 2D array at original image resolution (meters).
### Camera Format
Each .json file contains:
```json
{
  "K": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
  "image_size": [height, width]
}
```
- `K`: Camera intrinsic matrix (3x3), at original image resolution
- `image_size`: `[height, width]` of the original image
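As a minimal sketch of how the intrinsics and depth maps fit together (the file names below are examples taken from the naming table above; this is not an official utility), a pixel can be back-projected to a 3D point in camera coordinates:

```python
import json
import numpy as np

# Example files from the val split; substitute any {source}_{formatted_id} pair you extracted.
depth = np.load("depth/val/coco_val_000000000724.npz")["depth"]   # (H, W) float32, meters
cam = json.load(open("camera/val/coco_val_000000000724.json"))

K = np.array(cam["K"])                                 # 3x3 intrinsics at original resolution
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

# Back-project one pixel (u, v) with its metric depth to a camera-space point.
v, u = depth.shape[0] // 2, depth.shape[1] // 2        # center pixel as an example
z = depth[v, u]
point_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
print(point_cam)
```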
## Step 1: Download and Extract
```bash
pip install huggingface_hub

# Download only annotations
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --include "annotations/*" --local-dir WildDet3D-Data

# Download specific splits (e.g., val only)
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --include "packed/depth_val.tar.gz" "packed/camera_val.tar.gz" --local-dir WildDet3D-Data

# Download everything
huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --local-dir WildDet3D-Data
```
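If you prefer to download from Python instead of the CLI, `huggingface_hub`'s `snapshot_download` supports the same pattern filtering. A sketch mirroring the annotations-only call above:

```python
from huggingface_hub import snapshot_download

# Download only the annotations (equivalent to the first CLI call above).
snapshot_download(
    repo_id="weikaih/WildDet3D-Data",
    repo_type="dataset",
    allow_patterns=["annotations/*"],
    local_dir="WildDet3D-Data",
)
```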
### Extract Depth Maps
Depth maps are provided as compressed archives. Large splits are split into multiple parts.
```bash
# Val and Test (small, single file each)
mkdir -p depth && cd depth
tar xzf ../packed/depth_val.tar.gz
tar xzf ../packed/depth_test.tar.gz

# Train Human (2 parts)
tar xzf ../packed/depth_train_human_part000.tar.gz
tar xzf ../packed/depth_train_human_part001.tar.gz

# V3Det Human (single file)
tar xzf ../packed/depth_v3det_human.tar.gz

# V3Det Synthetic (7 parts)
for f in ../packed/depth_v3det_synthetic_part*.tar.gz; do tar xzf "$f"; done

# Train Synthetic (16 parts)
for f in ../packed/depth_train_synthetic_part*.tar.gz; do tar xzf "$f"; done

cd ..
```
### Extract Camera Parameters
```bash
mkdir -p camera && cd camera
for f in ../packed/camera_*.tar.gz; do tar xzf "$f"; done
cd ..
```
After extraction, you should have depth/{split}/ and camera/{split}/ directories with individual files per image.
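As a quick sanity check, sketched below under the assumption that you extracted the val split and downloaded its annotation file, you can confirm that every image entry has a matching depth and camera file:

```python
import json
from pathlib import Path

split = "val"
data = json.load(open("annotations/InTheWild_v3_val.json"))

missing = []
for img in data["images"]:
    source = img["file_path"].split("/")[1]          # e.g., "coco_val"
    stem = f"{source}_{img['formatted_id']}"
    if not Path(f"depth/{split}/{stem}.npz").exists() or not Path(f"camera/{split}/{stem}.json").exists():
        missing.append(stem)

print(f"{len(missing)} of {len(data['images'])} entries are missing depth/camera files")
```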
## Step 2: Download Source Images
Images must be downloaded from their original sources and organized into the following structure:
```
images/
├── coco_val/       # COCO val2017
├── coco_train/     # COCO train2017 (includes LVIS images)
├── obj365_val/     # Objects365 validation
├── obj365_train/   # Objects365 training
└── v3det_train/    # V3Det training
```
### COCO (val2017 + train2017)
Used by: Val, Test, Train (all splits)
```bash
# COCO val2017 – used by Val and Test
wget http://images.cocodataset.org/zips/val2017.zip
unzip val2017.zip
mkdir -p images/coco_val
mv val2017/* images/coco_val/

# COCO train2017 – used by Val/Test for LVIS images, and all Train splits
wget http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip
mkdir -p images/coco_train
mv train2017/* images/coco_train/
```
### Objects365
Used by: Val, Test (val split), Train (train split)
```bash
# Objects365 – download from https://www.objects365.org/
mkdir -p images/obj365_val
# Images should be named: obj365_val_000000XXXXXX.jpg
mkdir -p images/obj365_train
# Images should be named: obj365_train_000000XXXXXX.jpg
```
### V3Det
Used by: Train V3Det splits only
```bash
# V3Det – download from https://v3det.openxlab.org.cn/
mkdir -p images/v3det_train
# Directory structure: images/v3det_train/{category_folder}/{image}.jpg
# e.g., images/v3det_train/Q100507578/28_284_50119550013_7d06ded882_c.jpg
```
| Source | Directory | Used By |
|---|---|---|
| COCO val2017 | images/coco_val/ | Val, Test |
| COCO train2017 | images/coco_train/ | Val, Test, Train |
| Objects365 val | images/obj365_val/ | Val, Test |
| Objects365 train | images/obj365_train/ | Train |
| V3Det train | images/v3det_train/ | Train (V3Det) |
For evaluation only (Val + Test): you only need COCO val2017, COCO train2017, and Objects365 val.
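To confirm which image directories a particular annotation file actually references (a small sketch using the val file as an example), count the source prefixes in file_path:

```python
import json
from collections import Counter

data = json.load(open("annotations/InTheWild_v3_val.json"))

# Count image entries per source directory, e.g. "coco_val", "coco_train", "obj365_val".
sources = Counter(img["file_path"].split("/")[1] for img in data["images"])
print(sources)
```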
## Annotation Format (COCO3D)
Each annotation JSON follows the COCO3D format:
```json
{
  "info": {"name": "InTheWild_v3_val"},
  "images": [{
    "id": 0,
    "width": 375,
    "height": 500,
    "file_path": "images/coco_val/000000000724.jpg",
    "K": [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
  }],
  "categories": [{"id": 0, "name": "stop sign"}],
  "annotations": [{
    "id": 0,
    "image_id": 0,
    "category_id": 0,
    "category_name": "stop sign",
    "bbox2D_proj": [x1, y1, x2, y2],
    "center_cam": [cx, cy, cz],
    "dimensions": [width, height, length],
    "R_cam": [[r00, r01, r02], [r10, r11, r12], [r20, r21, r22]],
    "bbox3D_cam": [[x, y, z], ...],
    "valid3D": true
  }]
}
```
Image fields:
- `K`: Camera intrinsic matrix (3x3), at original image resolution
- `file_path`: Relative path to the source image
Annotation fields (see the projection sketch after this list):
- `valid3D`: `true` = valid 3D annotation, `false` = ignored (use for filtering during training)
- `center_cam`: 3D box center in camera coordinates (meters)
- `dimensions`: `[width, height, length]` in meters (Omni3D convention)
- `R_cam`: 3x3 rotation matrix in camera coordinates (gravity-aligned, local Y = up)
- `bbox3D_cam`: 8 corner points of the 3D bounding box in camera coordinates
- `bbox2D_proj`: 2D bounding box `[x1, y1, x2, y2]` at original image resolution
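As a usage sketch of these fields (assuming the val annotation file; this is not an official evaluation script), the 3D corners can be projected with the per-image intrinsics, and the resulting 2D box should roughly match `bbox2D_proj` for boxes that lie fully inside the image:

```python
import json
import numpy as np

data = json.load(open("annotations/InTheWild_v3_val.json"))
images = {img["id"]: img for img in data["images"]}

ann = next(a for a in data["annotations"] if a["valid3D"])   # skip ignored boxes
img = images[ann["image_id"]]

K = np.array(img["K"])                    # 3x3 intrinsics at original resolution
corners = np.array(ann["bbox3D_cam"])     # (8, 3) corners in camera coordinates (meters)

# Pinhole projection: x_pix = K @ X_cam, then divide by depth.
proj = (K @ corners.T).T
proj = proj[:, :2] / proj[:, 2:3]

x1, y1 = proj.min(axis=0)
x2, y2 = proj.max(axis=0)
print("projected box:", [x1, y1, x2, y2])
print("bbox2D_proj:  ", ann["bbox2D_proj"])
```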
## Which Files to Use
| Use Case | Annotation Files |
|---|---|
| Evaluation | InTheWild_v3_val.json, InTheWild_v3_test.json |
| Train (Human only) | InTheWild_v3_train_human_only.json + InTheWild_v3_v3det_human_only.json |
| Train (Essential) | InTheWild_v3_train_human.json + InTheWild_v3_v3det_human.json |
| Train (Synthetic) | InTheWild_v3_train_synthetic.json + InTheWild_v3_v3det_synthetic.json |
| Train (All) | Essential + Synthetic (all 4 files) |
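If you need a single COCO3D dictionary for a combined setting such as Train (All), one option is to concatenate the files while re-assigning ids. The sketch below matches categories by name, which is an assumption you should check against the `InTheWild_v3_*_class_map.json` files rather than a guarantee of the release format:

```python
import json

def merge_coco3d(paths):
    """Naively concatenate several COCO3D files, re-assigning ids (a sketch, not an official tool)."""
    merged = {"info": {"name": "merged"}, "images": [], "annotations": [], "categories": []}
    cat_id_by_name = {}
    for path in paths:
        data = json.load(open(path))
        # Categories are matched by name across files (an assumption; verify with the class maps).
        local_cat = {}
        for cat in data["categories"]:
            if cat["name"] not in cat_id_by_name:
                cat_id_by_name[cat["name"]] = len(cat_id_by_name)
                merged["categories"].append({"id": cat_id_by_name[cat["name"]], "name": cat["name"]})
            local_cat[cat["id"]] = cat_id_by_name[cat["name"]]
        # Re-number images and annotations so ids stay unique after concatenation.
        new_img_id = {}
        for img in data["images"]:
            new_img_id[img["id"]] = len(merged["images"])
            merged["images"].append({**img, "id": new_img_id[img["id"]]})
        for ann in data["annotations"]:
            merged["annotations"].append({**ann,
                                          "id": len(merged["annotations"]),
                                          "image_id": new_img_id[ann["image_id"]],
                                          "category_id": local_cat[ann["category_id"]]})
    return merged

# Train (All) = Essential + Synthetic (all 4 files)
train_all = merge_coco3d([
    "annotations/InTheWild_v3_train_human.json",
    "annotations/InTheWild_v3_v3det_human.json",
    "annotations/InTheWild_v3_train_synthetic.json",
    "annotations/InTheWild_v3_v3det_synthetic.json",
])
```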
## License
- Annotations: CC BY 4.0