OBayData Egocentric Dexterous Manipulation Demo

Dataset Summary

This is a demo dataset from OBayData, showcasing our egocentric dexterous manipulation data collection capabilities. It contains a curated subset of first-person recordings of human manipulation, captured with our multi-camera hardware setup across real-world scenarios.

This demo is intended for quality evaluation. Full-scale data collection is available as a managed service — see Production Data Service below.

Supported Tasks

  • Action Recognition — classify human manipulation actions from egocentric video
  • Hand Pose Estimation — predict 3D hand joint positions from first-person views
  • Robot Learning from Human Demonstrations — use as training signal for imitation learning and sim-to-real transfer
  • Video Understanding — temporal reasoning over long-horizon manipulation sequences
  • Language-Grounded Manipulation — map atomic language annotations to visual actions

Languages

All annotations are in English.

Dataset Structure

Data Fields

| Field | Type | Description |
|---|---|---|
| `video` | Video (MP4) | 1080p egocentric video from head-mounted camera |
| `wrist_left` | Video (MP4) | 1080p video from left wrist camera |
| `wrist_right` | Video (MP4) | 1080p video from right wrist camera |
| `hand_pose` | JSON | Per-frame 3D hand joint positions (21 joints × 2 hands) |
| `camera_extrinsics` | JSON | Per-frame camera pose (rotation + translation) |
| `action_labels` | JSON | Temporal action segments with start/end timestamps and labels |
| `language_annotations` | JSON | Atomic-level natural language descriptions of sub-actions |
| `metadata` | JSON | Scenario type, session ID, operator ID, hardware config |
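As a rough illustration, a single `hand_pose` frame might be handled as below. The field names (`frame_index`, `timestamp`, `left`, `right`) are assumptions for the sketch; the card only specifies 21 3D joints per hand for two hands, stored as JSON.

```python
import json

# Hypothetical per-frame hand_pose record. Field names are assumptions --
# the card only specifies 21 3D joint positions per hand, for two hands.
frame = {
    "frame_index": 0,
    "timestamp": 0.033,
    "left": [[0.0, 0.0, 0.0] for _ in range(21)],   # 21 (x, y, z) joints
    "right": [[0.1, 0.0, 0.2] for _ in range(21)],
}

# Round-trip through JSON, since the field is delivered as JSON.
decoded = json.loads(json.dumps(frame))
assert len(decoded["left"]) == 21 and len(decoded["right"]) == 21
```

Whatever the concrete schema turns out to be, a per-frame joint count check like this is a cheap first validation step when ingesting the data.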

Data Splits

| Split | Examples | Purpose |
|---|---|---|
| train | 500 | Model training and development |
| test | 100 | Held-out evaluation |

Data Instances

Each instance represents a single continuous manipulation session (typically 30 seconds to 5 minutes) with synchronized multi-view video and annotations.

Dataset Creation

Collection Process

Data was collected using OBayData's standardized hardware rig:

  • Head Camera — head-mounted, capturing the egocentric viewpoint at 1080p / 30fps
  • Dual Wrist Cameras — left and right wrist-mounted cameras for close-up hand views
  • Optional Tactile Gloves — Manus METAGLOVES PRO for finger-level force and contact data (not included in this demo)
  • Synchronization — all streams are hardware-synchronized and temporally aligned
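The synchronization claim can be sanity-checked per session by comparing per-frame timestamps across streams. A minimal sketch, assuming each stream exposes timestamps in seconds (the actual container layout is not specified in this card):

```python
# Toy alignment check between two hypothetical 30 fps streams.
fps = 30
head_ts = [i / fps for i in range(90)]            # head camera timestamps
wrist_ts = [i / fps + 0.001 for i in range(90)]   # wrist camera, ~1 ms skew

# Worst-case pairwise skew should stay well under half a frame period
# for streams that are genuinely hardware-synchronized.
max_skew = max(abs(a - b) for a, b in zip(head_ts, wrist_ts))
assert max_skew < 0.5 / fps
```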

Operators performed real manipulation tasks in authentic environments (not staged lab settings). Scenarios in this demo include kitchen tasks, object rearrangement, and tool use.

Annotation Pipeline

  1. Camera Extrinsics — estimated via structure-from-motion on the multi-view streams
  2. Hand Pose Reconstruction — 3D hand mesh fitting using multi-view optimization
  3. Action Segmentation — temporal boundaries annotated by trained human annotators
  4. Language Annotations — atomic-level descriptions written by annotators following a structured protocol (e.g., "grasp the red mug handle with right hand", "pour water into the bowl")

All annotations undergo multi-stage quality review.
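The temporal segments from step 3 support simple timestamp lookups. A minimal sketch, with hypothetical field names (`start`, `end`, `label`) and segment values invented for illustration:

```python
# Hypothetical action_labels segments; the card specifies start/end
# timestamps and labels, but the exact key names are assumptions.
segments = [
    {"start": 0.0, "end": 2.5, "label": "reach for mug"},
    {"start": 2.5, "end": 6.0, "label": "grasp the red mug handle with right hand"},
    {"start": 6.0, "end": 9.0, "label": "pour water into the bowl"},
]

def label_at(t, segments):
    """Return the label of the segment covering timestamp t, or None."""
    for seg in segments:
        if seg["start"] <= t < seg["end"]:
            return seg["label"]
    return None

print(label_at(4.0, segments))  # -> grasp the red mug handle with right hand
```

Half-open intervals (`start <= t < end`) keep adjacent segments from both claiming a shared boundary timestamp.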

Source Data

All sessions were recorded in real-world environments, including residential kitchens, hotel rooms, and office spaces. No synthetic or simulated data is included.

Personal and Sensitive Information

All operators provided informed consent. Faces are not visible in egocentric recordings. No personally identifiable information is included in the released annotations.

Considerations for Using the Data

Intended Uses

  • Evaluating OBayData's data quality before commissioning large-scale collection
  • Research in egocentric vision, hand-object interaction, and robot learning
  • Benchmarking action recognition and hand pose estimation models
  • Prototyping language-grounded manipulation systems

Out-of-Scope Uses

  • This demo is not large enough for training production foundation models — contact us for scaled collection
  • Not suitable for surveillance or biometric applications
  • Not intended for commercial redistribution

Limitations

  • Demo contains a limited subset of scenarios (full service covers 1,000+ scenario types)
  • Tactile glove data is not included in this release
  • Annotation density may vary across sessions

Production Data Service

This demo represents a small fraction of OBayData's collection capabilities.

Full Service Specifications

| Capability | Details |
|---|---|
| Monthly Capacity | 100,000 hours/month |
| Availability | Deliveries starting May 2026 |
| Scenario Coverage | 1,000+ real-world scenarios (hospitality, manufacturing, retail, logistics, food service, etc.) |
| Pricing | $5.50–$100/hour depending on volume, annotation complexity, and hardware config |
| Custom Scenarios | Define your own tasks, environments, and skill requirements |
| Benchmark Alignment | RoboTwin 2.0, PI Olympics, and custom benchmark protocols |

Contact Us

Citation

@dataset{obaydata2026egocentric,
  title={OBayData Egocentric Dexterous Manipulation Demo},
  author={OBayData Team},
  year={2026},
  url={https://huggingface.co/datasets/obaydata/egocentric-dexterous-manipulation-demo},
  publisher={Hugging Face}
}

License

This demo dataset is released under CC BY-NC 4.0. Commercial data collection available under custom licensing — contact us for details.
