Warehouse Object Detection Dataset
Overview
This is a synthetic warehouse object detection dataset generated using NVIDIA Omniverse Replicator. The dataset contains high-quality RGB images with comprehensive annotations including 2D bounding boxes, instance segmentation masks, depth maps, and 3D primitive paths.
- Version: 1.0.0
- License: CC-BY-4.0
- Total Images: 6,234
- Total Annotations: 78,441
- Image Resolution: 512x512
- Number of Classes: 25
Dataset Statistics
Split Distribution
- Training Set: 4,363 images (70.0%)
- Validation Set: 935 images (15.0%)
- Test Set: 936 images (15.0%)
Annotation Statistics
- Average Annotations per Image: 12.6
- Total Bounding Boxes: 78,441
Top 10 Most Common Classes
- wall: 6,234 instances in 6,234 images
- floor: 6,232 instances in 6,232 images
- sign: 5,945 instances in 5,945 images
- floor_decal: 5,945 instances in 5,945 images
- pillar: 5,845 instances in 5,845 images
- rack: 5,824 instances in 5,824 images
- box: 5,789 instances in 5,789 images
- pallet: 5,743 instances in 5,743 images
- bracket: 4,857 instances in 4,857 images
- lamp: 4,451 instances in 4,451 images
Classes
The dataset includes 25 object classes organized into the following categories:
Container
- box: 5,789 annotations
- crate: 1,270 annotations
- barrel: 1,229 annotations
- bottle: 665 annotations
- bucket: 457 annotations
Storage
- pallet: 5,743 annotations
- rack: 5,824 annotations
Infrastructure
- bracket: 4,857 annotations
- pillar: 5,845 annotations
- emergency_board: 569 annotations
Equipment
- lamp: 4,451 annotations
- sign: 5,945 annotations
- wire: 3,985 annotations
- fuse_box: 1,154 annotations
- fire_extinguisher: 3,304 annotations
- forklift: 343 annotations
- cart: 427 annotations
- cone: 639 annotations
Markers
- floor_decal: 5,945 annotations
- barcode: 1,137 annotations
- paper_note: 2,125 annotations
- paper_shortcut: 390 annotations
Background
- wall: 6,234 annotations
- ceiling: 3,882 annotations
- floor: 6,232 annotations
Dataset Formats
This dataset is provided in three formats to support different use cases:
1. Raw Format (raw/)
Preserves all original Omniverse data:
- RGB images (PNG)
- 2D bounding boxes (NumPy .npy)
- Class labels (JSON)
- 3D primitive paths (JSON)
- Instance segmentation masks (PNG)
- Instance ID mappings (JSON)
- Depth/distance maps (NumPy .npy)
Use for: Multi-task learning, depth estimation, 3D tasks, research
2. YOLO Format (yolo/)
Standard YOLO v5/v8 format:
- Images in images/train/, images/val/, images/test/
- Labels in labels/train/, labels/val/, labels/test/
- Each label file contains one line per object: class_id x_center y_center width height (normalized 0-1)
Use for: Object detection with YOLO models (Ultralytics, Darknet)
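For reference, here is a minimal sketch of reading one label file and converting the normalized boxes back to pixel coordinates; the file names are illustrative, substitute any image/label pair from a split:

from PIL import Image

# Illustrative paths; any matching image/label pair works the same way
img = Image.open('yolo/images/train/warehouse_000001.png')
w, h = img.size  # 512 x 512 for this dataset

boxes = []
with open('yolo/labels/train/warehouse_000001.txt') as f:
    for line in f:
        class_id, xc, yc, bw, bh = line.split()
        xc, yc, bw, bh = map(float, (xc, yc, bw, bh))
        # Convert normalized center/size to pixel corner coordinates
        x1 = (xc - bw / 2) * w
        y1 = (yc - bh / 2) * h
        x2 = (xc + bw / 2) * w
        y2 = (yc + bh / 2) * h
        boxes.append((int(class_id), x1, y1, x2, y2))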
3. COCO Format (coco/)
COCO JSON format:
- Images in images/train/, images/val/, images/test/
- Annotations in annotations/instances_{train,val,test}.json
Use for: Object detection/segmentation with detectron2, MMDetection, etc.
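As a quick sanity check, the annotation files can be inspected with the standard pycocotools API (a minimal sketch; any of the three JSON files works the same way):

from pycocotools.coco import COCO

# Load the training annotations and list the categories they define
coco = COCO('warehouse_detection_dataset/coco/annotations/instances_train.json')
print(len(coco.getImgIds()), 'images,', len(coco.getAnnIds()), 'annotations')
for cat in coco.loadCats(coco.getCatIds()):
    print(cat['id'], cat['name'])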
Directory Structure
warehouse_detection_dataset/
├── README.md                 # This file
├── data.yaml                 # YOLO configuration
├── dataset_info.json         # HuggingFace metadata
├── class_mapping.json        # Complete class information
├── raw/                      # Original Omniverse format
│   ├── train/
│   │   ├── images/
│   │   ├── annotations/
│   │   ├── segmentation/
│   │   └── depth/
│   ├── val/
│   └── test/
├── yolo/                     # YOLO format
│   ├── images/
│   │   ├── train/
│   │   ├── val/
│   │   └── test/
│   └── labels/
│       ├── train/
│       ├── val/
│       └── test/
└── coco/                     # COCO format
    ├── images/
    │   ├── train/
    │   ├── val/
    │   └── test/
    └── annotations/
        ├── instances_train.json
        ├── instances_val.json
        └── instances_test.json
Usage Examples
YOLO (Ultralytics)
from ultralytics import YOLO
# Train a model
model = YOLO('yolov8n.pt')
model.train(data='warehouse_detection_dataset/data.yaml', epochs=100)
# Validate
metrics = model.val()
# Predict
results = model('path/to/image.jpg')
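To inspect the detections returned above, the Results objects expose the predicted boxes, classes, and confidences (a minimal sketch assuming the current Ultralytics Results attributes):

# Print class name, confidence, and xyxy box for each detection
for r in results:
    for box, cls, conf in zip(r.boxes.xyxy, r.boxes.cls, r.boxes.conf):
        print(r.names[int(cls)], conf.item(), box.tolist())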
COCO (detectron2)
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_coco_json
# Register dataset
DatasetCatalog.register(
"warehouse_train",
lambda: load_coco_json(
"warehouse_detection_dataset/coco/annotations/instances_train.json",
"warehouse_detection_dataset/coco/images/train"
)
)
# Train your model
# ... (standard detectron2 training code)
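A minimal training sketch with detectron2's DefaultTrainer is shown below; the base model, class count wiring, and iteration budget are illustrative choices, not part of the dataset:

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("warehouse_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 25  # 25 classes in this dataset
cfg.SOLVER.MAX_ITER = 5000            # illustrative; tune for your setup

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()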
Raw Format (Custom)
import numpy as np
import json
from PIL import Image
# Load image
img = Image.open('raw/train/images/warehouse_000001.png')
# Load bounding boxes
bboxes = np.load('raw/train/annotations/warehouse_000001_bbox.npy')
# Load labels
with open('raw/train/annotations/warehouse_000001_labels.json') as f:
labels = json.load(f)
# Load segmentation mask
seg_mask = Image.open('raw/train/segmentation/warehouse_000001_seg.png')
# Load depth map
depth = np.load('raw/train/depth/warehouse_000001_depth.npy')
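To visualize the raw annotations, the boxes can be drawn directly onto the image loaded above. The field names used here (x_min, y_min, x_max, y_max) assume the structured-array layout written by Replicator's 2D bounding box annotator; inspect bboxes.dtype and adjust if your copy of the data differs:

from PIL import ImageDraw

# Reuses img and bboxes from the snippet above
draw = ImageDraw.Draw(img)
for box in bboxes:
    # Assumed structured-array fields; confirm with print(bboxes.dtype)
    x1, y1, x2, y2 = box['x_min'], box['y_min'], box['x_max'], box['y_max']
    draw.rectangle([x1, y1, x2, y2], outline='red', width=2)
img.save('warehouse_000001_boxes.png')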
Citation
If you use this dataset in your research, please cite:
@dataset{warehouse_detection_2025,
title={Warehouse Object Detection Dataset},
author={Howe, McCarthy and Phillips, Cassandra and Lee, Alfred and Sethi, Varun and Hall, Jada},
year={2025},
publisher={Clemson University Capstone},
note={In partnership with Capgemini Supply Chain},
version={1.0.0},
license={CC-BY-4.0}
}
License
This dataset is released under the CC-BY-4.0 license.
Team & Acknowledgments
Project Context: This dataset was created as part of the Clemson University Capstone 2025 project, in partnership with Capgemini Supply Chain.
Contributors:
- McCarthy Howe (mac@machowe.com)
- Cassandra Phillips
- Alfred Lee
- Varun Sethi
- Jada Hall
Tooling: Generated using NVIDIA Omniverse Replicator.
Contact
For questions, issues, or feedback, please contact McCarthy Howe at mac@machowe.com.