# EgoGrasp
EgoGrasp is a crowdsourced egocentric video dataset of human grasping interactions, built for robotics imitation learning. Each clip captures a single grasp action filmed from a first-person perspective using a smartphone, covering 620+ unique everyday object categories.
## What's Included Here
This repository contains a sample of 10 annotated clips from the full EgoGrasp dataset. The sample is intended to help researchers evaluate data quality, annotation depth, and compatibility with their pipelines before requesting access to the full collection.
To request access to the full dataset (1,800+ clips, 620+ object categories), visit robox.to.
## Dataset Summary
- Sample clips (this repo): 10
- Full dataset: 1,800+ clips across 620+ object categories
- Perspective: First-person (egocentric), smartphone-captured
- Source: Crowdsourced via the RoboX mobile app
- Annotations: Multi-pass pipeline including hand keypoints, object bounding boxes and tracking, action segmentation, and spatial context labels
| Property | Value |
|---|---|
| Total clips | 10 |
| Total duration | ~2 min |
| Contributors | 2 (anonymized) |
| Clips with video | 10 |
| Verified clips | 10 |
| Campaign type | ego_grasp |
| Export date | 2026-04-08 |
| Schema version | 0.1 |
## Collection Method
Videos are collected through the RoboX mobile app by distributed contributors following structured task prompts. Contributors record short clips of themselves picking up, holding, and placing common household and workplace objects. Quality filtering and review are applied before clips enter the annotation pipeline.
The app captures video with rich per-frame metadata including camera pose (6DoF), IMU data (200Hz), hand keypoints (21 joints), body pose, object detection, scene planes, optical flow, audio levels, navigation data, and quality metrics. On-device processing applies face detection and blurring before the video leaves the device.
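Because the IMU streams at 200 Hz while video frames arrive at a lower rate, downstream consumers typically need to align sensor samples to frames. The sketch below shows one common approach, nearest-timestamp matching; the timestamps, frame rate, and data layout here are illustrative assumptions, not the actual format of the exported sensor files.

```python
import bisect

# Sketch: align 200 Hz IMU samples to video frames by nearest timestamp.
# Timestamps and rates are illustrative; the real per-frame sensor files
# under annotations/sensors/ define their own format.
def nearest_imu_index(imu_ts, frame_t):
    """Return the index of the IMU timestamp closest to frame_t.

    Assumes imu_ts is sorted ascending.
    """
    i = bisect.bisect_left(imu_ts, frame_t)
    if i == 0:
        return 0
    if i == len(imu_ts):
        return len(imu_ts) - 1
    # Pick whichever neighbor is closer to the frame timestamp.
    return i if imu_ts[i] - frame_t < frame_t - imu_ts[i - 1] else i - 1

imu_ts = [k / 200.0 for k in range(2000)]   # 10 s of 200 Hz IMU samples
frame_ts = [k / 30.0 for k in range(300)]   # 10 s of 30 fps video frames
aligned = [nearest_imu_index(imu_ts, t) for t in frame_ts]
print(aligned[:4])
```

Interpolating between the two bracketing samples instead of snapping to the nearest one is a reasonable refinement when sub-sample accuracy matters.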
## Annotation Pipeline
Each clip is processed through a layered annotation pipeline:
- Hand keypoints — 2D joint positions for both hands across all frames
- Object detection and tracking — Bounding boxes with per-frame object identity tracking
- Action segmentation — Temporal labels for reach, grasp, lift, hold, place, and release phases
- Spatial context — Scene-level labels describing surface type, environment, and camera viewpoint
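The 21-joint hand convention above is widely used (joint 0 is typically the wrist). As a minimal sketch, assuming a flattened `[x0, y0, x1, y1, ...]` per-frame encoding, which the actual files under `annotations/hand_keypoints/` may or may not use:

```python
# Hypothetical per-frame hand keypoint layout: 21 joints x (x, y) pixel
# coordinates, flattened. Check the shipped annotation files for the
# real encoding before relying on this.
NUM_JOINTS = 21

def to_joint_pairs(flat):
    """Group a flat [x0, y0, x1, y1, ...] list into (x, y) tuples per joint."""
    assert len(flat) == 2 * NUM_JOINTS, "expected 42 values for one hand"
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]

frame = [float(v) for v in range(2 * NUM_JOINTS)]  # dummy frame
joints = to_joint_pairs(frame)
print(len(joints), joints[0])
```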
## Use Cases
EgoGrasp is designed for researchers working on dexterous manipulation, grasp planning, hand-object interaction modeling, and policy learning from human demonstrations. The egocentric viewpoint and real-world diversity make it well suited for sim-to-real transfer and learning from unstructured environments.
Specific applications include:
- Robotic manipulation / grasping policy training via imitation learning
- Object recognition in egocentric settings
- Hand-object interaction understanding
- Benchmarking grasp detection and grip classification models
## Dataset Structure
- `metadata/clips.json` — Per-clip metadata (device, duration, quality, contributor)
- `clips/` — Video files (MP4, H.265)
- `annotations/clips.jsonl` — Dataset index: per-clip metadata, labels, narration, action segments, file references
- `annotations/hand_keypoints/` — Per-frame hand joint positions (21 keypoints per hand, grip type)
- `annotations/object_tracks/` — Per-frame detected objects with bounding boxes
- `annotations/actions/` — Temporal action segments (reach, grasp, idle) derived from grip state changes
- `annotations/sensors/` — Per-frame sensor data: IMU (accelerometer, gyro, magnetometer), 6DoF camera pose, camera intrinsics
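A JSONL index like `annotations/clips.jsonl` can be read line by line with the standard library. The field names in this sketch (`clip_id`, `object_label`, `actions`, etc.) are hypothetical placeholders, not the dataset's actual schema; inspect the shipped file for the real keys.

```python
import json

# Hypothetical example record; the real annotations/clips.jsonl schema
# may use different field names.
sample_line = (
    '{"clip_id": "clip_0001", "duration_s": 12.4, "object_label": "mug", '
    '"actions": [{"phase": "reach", "start_s": 0.0, "end_s": 1.2}, '
    '{"phase": "grasp", "start_s": 1.2, "end_s": 2.0}]}'
)

def load_index(lines):
    """Parse JSONL lines into a list of clip records, skipping blanks."""
    return [json.loads(line) for line in lines if line.strip()]

records = load_index([sample_line])
for rec in records:
    phases = [seg["phase"] for seg in rec["actions"]]
    print(rec["clip_id"], rec["object_label"], phases)
```

In practice you would pass an open file handle (`load_index(open("annotations/clips.jsonl"))`) rather than an in-memory list.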
## Full Dataset Access
The complete EgoGrasp dataset is available upon request. Visit robox.to to learn more and submit an access request.
## License
CC-BY-NC-SA-4.0 — Free for research and non-commercial use, with share-alike requirements.
## Citation
If you use EgoGrasp in your research, please cite:
```bibtex
@dataset{robox_ego_grasp_2026,
  title={RoboX-EgoGrasp-v0.1},
  author={RoboX Team},
  year={2026},
  campaign={EgoGrasp}
}
```