Warehouse Object Detection Dataset
Overview
This is a synthetic warehouse object detection dataset generated using NVIDIA Omniverse Replicator. The dataset contains high-quality RGB images with comprehensive annotations including 2D bounding boxes, instance segmentation masks, depth maps, and 3D primitive paths.
- Version: 1.0.0
- License: CC-BY-4.0
- Total Images: 6,234
- Total Annotations: 78,441
- Image Resolution: 512x512
- Number of Classes: 25
Dataset Statistics
Split Distribution
- Training Set: 4,363 images (70.0%)
- Validation Set: 935 images (15.0%)
- Test Set: 936 images (15.0%)
Annotation Statistics
- Average Annotations per Image: 12.6
- Total Bounding Boxes: 78,441
Top 10 Most Common Classes
- wall: 6,234 instances in 6,234 images
- floor: 6,232 instances in 6,232 images
- sign: 5,945 instances in 5,945 images
- floor_decal: 5,945 instances in 5,945 images
- pillar: 5,845 instances in 5,845 images
- rack: 5,824 instances in 5,824 images
- box: 5,789 instances in 5,789 images
- pallet: 5,743 instances in 5,743 images
- bracket: 4,857 instances in 4,857 images
- lamp: 4,451 instances in 4,451 images
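Counts like those above can be recomputed directly from the COCO annotation files. The sketch below works on any COCO-style dict; the commented-out path follows the directory layout described later in this README, but verify it against your local copy:

```python
import json
from collections import Counter

def class_counts(coco):
    """Count annotations per category name in a COCO-style dict."""
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
    return counts.most_common()

# Tiny in-memory example; on the real dataset you would load:
# coco = json.load(open("coco/annotations/instances_train.json"))
coco = {
    "categories": [{"id": 0, "name": "wall"}, {"id": 1, "name": "box"}],
    "annotations": [{"category_id": 0}, {"category_id": 0}, {"category_id": 1}],
}
print(class_counts(coco))  # [('wall', 2), ('box', 1)]
```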
Classes
The dataset includes 25 object classes organized into the following categories:
Container
- box: 5,789 annotations
- crate: 1,270 annotations
- barrel: 1,229 annotations
- bottle: 665 annotations
- bucket: 457 annotations
Storage
- pallet: 5,743 annotations
- rack: 5,824 annotations
Infrastructure
- bracket: 4,857 annotations
- pillar: 5,845 annotations
- emergency_board: 569 annotations
Equipment
- lamp: 4,451 annotations
- sign: 5,945 annotations
- wire: 3,985 annotations
- fuse_box: 1,154 annotations
- fire_extinguisher: 3,304 annotations
- forklift: 343 annotations
- cart: 427 annotations
- cone: 639 annotations
Markers
- floor_decal: 5,945 annotations
- barcode: 1,137 annotations
- paper_note: 2,125 annotations
- paper_shortcut: 390 annotations
Background
- wall: 6,234 annotations
- ceiling: 3,882 annotations
- floor: 6,232 annotations
Dataset Formats
This dataset is provided in three formats to support different use cases:
1. Raw Format (raw/)
Preserves all original Omniverse data:
- RGB images (PNG)
- 2D bounding boxes (NumPy .npy)
- Class labels (JSON)
- 3D primitive paths (JSON)
- Instance segmentation masks (PNG)
- Instance ID mappings (JSON)
- Depth/distance maps (NumPy .npy)
Use for: Multi-task learning, depth estimation, 3D tasks, research
2. YOLO Format (yolo/)
Standard YOLO v5/v8 format:
- Images in images/train/, images/val/, images/test/
- Labels in labels/train/, labels/val/, labels/test/
- Each label file contains one line per object: class_id x_center y_center width height (all values normalized to 0-1)
Use for: Object detection with YOLO models (Ultralytics, Darknet)
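A YOLO label line can be mapped back to pixel coordinates with the standard center-box arithmetic. A minimal sketch, assuming the 512x512 resolution stated above:

```python
def yolo_to_xyxy(line, img_w=512, img_h=512):
    """Convert one YOLO label line to (class_id, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Scale normalized values to pixels, then expand the center box to corners.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

print(yolo_to_xyxy("3 0.5 0.5 0.25 0.25"))  # (3, 192.0, 192.0, 320.0, 320.0)
```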
3. COCO Format (coco/)
COCO JSON format:
- Images in images/train/, images/val/, images/test/
- Annotations in annotations/instances_{train,val,test}.json
Use for: Object detection/segmentation with detectron2, MMDetection, etc.
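COCO stores boxes as [x_min, y_min, width, height] in absolute pixels, while the YOLO copy of the same annotation uses normalized centers. A small helper shows the mapping between the two conventions (a sketch, assuming the 512x512 images stated above):

```python
def coco_to_yolo(bbox, img_w=512, img_h=512):
    """Map a COCO [x_min, y_min, w, h] box to normalized YOLO (xc, yc, w, h)."""
    x, y, w, h = bbox
    # Shift the top-left corner to the box center, then normalize by image size.
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)

print(coco_to_yolo([192, 192, 128, 128]))  # (0.5, 0.5, 0.25, 0.25)
```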
Directory Structure
```
warehouse_detection_dataset/
├── README.md                # This file
├── data.yaml                # YOLO configuration
├── dataset_info.json        # HuggingFace metadata
├── class_mapping.json       # Complete class information
├── raw/                     # Original Omniverse format
│   ├── train/
│   │   ├── images/
│   │   ├── annotations/
│   │   ├── segmentation/
│   │   └── depth/
│   ├── val/
│   └── test/
├── yolo/                    # YOLO format
│   ├── images/
│   │   ├── train/
│   │   ├── val/
│   │   └── test/
│   └── labels/
│       ├── train/
│       ├── val/
│       └── test/
└── coco/                    # COCO format
    ├── images/
    │   ├── train/
    │   ├── val/
    │   └── test/
    └── annotations/
        ├── instances_train.json
        ├── instances_val.json
        └── instances_test.json
```
Usage Examples
YOLO (Ultralytics)
```python
from ultralytics import YOLO

# Train a model
model = YOLO('yolov8n.pt')
model.train(data='warehouse_detection_dataset/data.yaml', epochs=100)

# Validate
metrics = model.val()

# Predict
results = model('path/to/image.jpg')
```
COCO (detectron2)
```python
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_coco_json

# Register dataset
DatasetCatalog.register(
    "warehouse_train",
    lambda: load_coco_json(
        "warehouse_detection_dataset/coco/annotations/instances_train.json",
        "warehouse_detection_dataset/coco/images/train",
    ),
)

# Train your model
# ... (standard detectron2 training code)
```
Raw Format (Custom)
```python
import numpy as np
import json
from PIL import Image

# Load image
img = Image.open('raw/train/images/warehouse_000001.png')

# Load bounding boxes
bboxes = np.load('raw/train/annotations/warehouse_000001_bbox.npy')

# Load labels
with open('raw/train/annotations/warehouse_000001_labels.json') as f:
    labels = json.load(f)

# Load segmentation mask
seg_mask = Image.open('raw/train/segmentation/warehouse_000001_seg.png')

# Load depth map
depth = np.load('raw/train/depth/warehouse_000001_depth.npy')
```
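Individual objects can be isolated from the segmentation mask once it is read as an integer ID image. The sketch below assumes each pixel holds an instance ID and that the instance ID mapping JSON relates those IDs to classes; verify this against the mapping files before relying on it:

```python
import numpy as np

def instance_mask(seg, instance_id):
    """Return a boolean mask selecting one instance from an integer ID image."""
    return np.asarray(seg) == instance_id

# Tiny synthetic ID image; on the real data, seg would come from
# np.array(Image.open('raw/train/segmentation/warehouse_000001_seg.png'))
seg = np.array([[0, 1],
                [1, 2]])
mask = instance_mask(seg, 1)
print(int(mask.sum()))  # 2 pixels belong to instance 1
```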
Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{warehouse_detection_2025,
  title={Warehouse Object Detection Dataset},
  author={Howe, McCarthy and Phillips, Cassandra and Lee, Alfred and Sethi, Varun and Hall, Jada},
  year={2025},
  publisher={Clemson University Capstone},
  note={In partnership with Capgemini Supply Chain},
  version={1.0.0},
  license={CC-BY-4.0}
}
```
License
This dataset is released under the CC-BY-4.0 license.
Team & Acknowledgments
Project Context: This dataset was created as part of the Clemson University Capstone 2025 project, in partnership with Capgemini Supply Chain.
Contributors:
- McCarthy Howe (mac@machowe.com)
- Cassandra Phillips
- Alfred Lee
- Varun Sethi
- Jada Hall
Tooling: Generated using NVIDIA Omniverse Replicator.
Contact
For questions, issues, or feedback, please contact McCarthy Howe at mac@machowe.com.