Warehouse Object Detection Dataset

Overview

This is a synthetic warehouse object detection dataset generated using NVIDIA Omniverse Replicator. The dataset contains high-quality RGB images with comprehensive annotations including 2D bounding boxes, instance segmentation masks, depth maps, and 3D primitive paths.

  • Version: 1.0.0
  • License: CC-BY-4.0
  • Total Images: 6,234
  • Total Annotations: 78,441
  • Image Resolution: 512x512
  • Number of Classes: 25
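
To work with the files locally, the whole repository (all three formats) can be pulled with huggingface_hub. A minimal sketch; the repo_id below is a placeholder, so substitute this dataset's actual id:

from huggingface_hub import snapshot_download

# repo_id is a placeholder -- replace it with this dataset's actual id
local_dir = snapshot_download(
    repo_id='<user>/warehouse_detection_dataset',
    repo_type='dataset',
)
print(local_dir)  # path to the downloaded snapshot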

Dataset Statistics

Split Distribution

  • Training Set: 4,363 images (70.0%)
  • Validation Set: 935 images (15.0%)
  • Test Set: 936 images (15.0%)

Annotation Statistics

  • Average Annotations per Image: 12.6
  • Total Bounding Boxes: 78,441

Top 10 Most Common Classes

  1. wall: 6,234 instances in 6,234 images
  2. floor: 6,232 instances in 6,232 images
  3. sign: 5,945 instances in 5,945 images
  4. floor_decal: 5,945 instances in 5,945 images
  5. pillar: 5,845 instances in 5,845 images
  6. rack: 5,824 instances in 5,824 images
  7. box: 5,789 instances in 5,789 images
  8. pallet: 5,743 instances in 5,743 images
  9. bracket: 4,857 instances in 4,857 images
  10. lamp: 4,451 instances in 4,451 images

Classes

The dataset includes 25 object classes organized into the following categories (a sketch for inspecting the full mapping in class_mapping.json follows these lists):

Container

  • box: 5,789 annotations
  • crate: 1,270 annotations
  • barrel: 1,229 annotations
  • bottle: 665 annotations
  • bucket: 457 annotations

Storage

  • pallet: 5,743 annotations
  • rack: 5,824 annotations

Infrastructure

  • bracket: 4,857 annotations
  • pillar: 5,845 annotations
  • emergency_board: 569 annotations

Equipment

  • lamp: 4,451 annotations
  • sign: 5,945 annotations
  • wire: 3,985 annotations
  • fuse_box: 1,154 annotations
  • fire_extinguisher: 3,304 annotations
  • forklift: 343 annotations
  • cart: 427 annotations
  • cone: 639 annotations

Markers

  • floor_decal: 5,945 annotations
  • barcode: 1,137 annotations
  • paper_note: 2,125 annotations
  • paper_shortcut: 390 annotations

Background

  • wall: 6,234 annotations
  • ceiling: 3,882 annotations
  • floor: 6,232 annotations
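
The complete id-to-name mapping behind these categories ships in class_mapping.json. A minimal inspection sketch; it assumes only that the file is a single JSON document, so check the printed structure before relying on specific keys:

import json

with open('warehouse_detection_dataset/class_mapping.json') as f:
    class_mapping = json.load(f)

# Inspect the top-level layout first, then the ids/names/categories
print(type(class_mapping))
print(class_mapping)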

Dataset Formats

This dataset is provided in three formats to support different use cases:

1. Raw Format (raw/)

Preserves all original Omniverse data:

  • RGB images (PNG)
  • 2D bounding boxes (NumPy .npy)
  • Class labels (JSON)
  • 3D primitive paths (JSON)
  • Instance segmentation masks (PNG)
  • Instance ID mappings (JSON)
  • Depth/distance maps (NumPy .npy)

Use for: Multi-task learning, depth estimation, 3D tasks, research

2. YOLO Format (yolo/)

Standard YOLO v5/v8 format:

  • Images in images/train/, images/val/, images/test/
  • Labels in labels/train/, labels/val/, labels/test/
  • Each label file contains: class_id x_center y_center width height (normalized 0-1; see the parsing sketch below)

Use for: Object detection with YOLO models (Ultralytics, Darknet)
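
A minimal sketch for turning one label file back into pixel-space corner boxes (the filename is illustrative; the 512x512 size comes from this dataset):

# YOLO stores center-format boxes normalized to [0, 1]
IMG_W = IMG_H = 512

boxes = []
with open('yolo/labels/train/warehouse_000001.txt') as f:  # illustrative name
    for line in f:
        class_id, xc, yc, w, h = line.split()
        xc, yc, w, h = float(xc), float(yc), float(w), float(h)
        x_min = (xc - w / 2) * IMG_W
        y_min = (yc - h / 2) * IMG_H
        x_max = (xc + w / 2) * IMG_W
        y_max = (yc + h / 2) * IMG_H
        boxes.append((int(class_id), x_min, y_min, x_max, y_max))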

3. COCO Format (coco/)

COCO JSON format:

  • Images in images/train/, images/val/, images/test/
  • Annotations in annotations/instances_{train,val,test}.json

Use for: Object detection/segmentation with detectron2, MMDetection, etc.
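
The annotation files follow the standard COCO layout ('images', 'annotations', 'categories'), so a quick sanity check needs only the json module:

import json

with open('warehouse_detection_dataset/coco/annotations/instances_train.json') as f:
    coco = json.load(f)

print(len(coco['images']), 'images')
print(len(coco['annotations']), 'annotations')
# COCO bboxes are [x, y, width, height] in absolute pixels
first = coco['annotations'][0]
print(first['image_id'], first['category_id'], first['bbox'])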

Directory Structure

warehouse_detection_dataset/
├── README.md                    # This file
├── data.yaml                    # YOLO configuration
├── dataset_info.json            # HuggingFace metadata
├── class_mapping.json           # Complete class information
├── raw/                         # Original Omniverse format
│   ├── train/
│   │   ├── images/
│   │   ├── annotations/
│   │   ├── segmentation/
│   │   └── depth/
│   ├── val/
│   └── test/
├── yolo/                        # YOLO format
│   ├── images/
│   │   ├── train/
│   │   ├── val/
│   │   └── test/
│   └── labels/
│       ├── train/
│       ├── val/
│       └── test/
└── coco/                        # COCO format
    ├── images/
    │   ├── train/
    │   ├── val/
    │   └── test/
    └── annotations/
        ├── instances_train.json
        ├── instances_val.json
        └── instances_test.json

Usage Examples

YOLO (Ultralytics)

from ultralytics import YOLO

# Train a model
model = YOLO('yolov8n.pt')
model.train(data='warehouse_detection_dataset/data.yaml', epochs=100)

# Validate
metrics = model.val()

# Predict
results = model('path/to/image.jpg')
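
Before training, it can help to confirm that data.yaml points where you expect. The keys below follow the usual Ultralytics schema ('path', 'train', 'val', 'test', 'names'); verify them against the shipped file:

import yaml  # PyYAML

with open('warehouse_detection_dataset/data.yaml') as f:
    cfg = yaml.safe_load(f)

print(cfg.get('names'))  # class names by id
print(cfg.get('train'), cfg.get('val'), cfg.get('test'))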

COCO (detectron2)

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_coco_json

# Register dataset
DatasetCatalog.register(
    "warehouse_train",
    lambda: load_coco_json(
        "warehouse_detection_dataset/coco/annotations/instances_train.json",
        "warehouse_detection_dataset/coco/images/train"
    )
)

# Train your model
# ... (standard detectron2 training code)
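
As an alternative to the single-split registration above, a sketch that registers all three splits in one loop. Default arguments freeze the per-split values so the lambdas don't all capture the last iteration, and passing the dataset name lets load_coco_json populate class metadata:

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_coco_json

ROOT = 'warehouse_detection_dataset/coco'

for split in ('train', 'val', 'test'):
    name = f'warehouse_{split}'
    json_file = f'{ROOT}/annotations/instances_{split}.json'
    image_root = f'{ROOT}/images/{split}'
    # Default args avoid the late-binding pitfall inside the loop
    DatasetCatalog.register(
        name,
        lambda jf=json_file, ir=image_root, n=name: load_coco_json(jf, ir, n),
    )
    MetadataCatalog.get(name).set(json_file=json_file, image_root=image_root)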

Raw Format (Custom)

import numpy as np
import json
from PIL import Image

# Load image
img = Image.open('raw/train/images/warehouse_000001.png')

# Load bounding boxes
bboxes = np.load('raw/train/annotations/warehouse_000001_bbox.npy')

# Load labels
with open('raw/train/annotations/warehouse_000001_labels.json') as f:
    labels = json.load(f)

# Load segmentation mask
seg_mask = Image.open('raw/train/segmentation/warehouse_000001_seg.png')

# Load depth map
depth = np.load('raw/train/depth/warehouse_000001_depth.npy')
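
To overlay the raw boxes on the image: Replicator's 2D bounding-box writer typically emits a structured array, but the exact field names vary by version, so the names below ('x_min' etc.) are assumptions; print bboxes.dtype first and adjust:

import numpy as np
from PIL import Image, ImageDraw

img = Image.open('raw/train/images/warehouse_000001.png').convert('RGB')
bboxes = np.load('raw/train/annotations/warehouse_000001_bbox.npy')
print(bboxes.dtype)  # verify the actual field names before relying on them

draw = ImageDraw.Draw(img)
for bb in bboxes:
    # field names assume Replicator's structured-array layout (see above)
    draw.rectangle(
        [int(bb['x_min']), int(bb['y_min']), int(bb['x_max']), int(bb['y_max'])],
        outline='red', width=2,
    )
img.save('warehouse_000001_boxes.png')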

Citation

If you use this dataset in your research, please cite:

@dataset{warehouse_detection_2025,
  title={Warehouse Object Detection Dataset},
  author={Howe, McCarthy and Phillips, Cassandra and Lee, Alfred and Sethi, Varun and Hall, Jada},
  year={2025},
  publisher={Clemson University Capstone},
  note={In partnership with Capgemini Supply Chain},
  version={1.0.0},
  license={CC-BY-4.0}
}

License

This dataset is released under the CC-BY-4.0 license.

Team & Acknowledgments

Project Context: This dataset was created as part of the Clemson University Capstone 2025 project, in partnership with Capgemini Supply Chain.

Contributors:

  • McCarthy Howe (mac@machowe.com)
  • Cassandra Phillips
  • Alfred Lee
  • Varun Sethi
  • Jada Hall

Tooling: Generated using NVIDIA Omniverse Replicator.

Contact

For questions, issues, or feedback, please contact McCarthy Howe at mac@machowe.com.
