---
license: cc-by-nc-sa-4.0
task_categories:
  - robotics
  - video-classification
  - object-detection
tags:
  - egocentric
  - grasping
  - manipulation
  - imitation-learning
  - hand-object-interaction
  - robotics
  - crowdsourced
language:
  - en
pretty_name: EgoGrasp
size_categories:
  - n<1K
---

# EgoGrasp

EgoGrasp is a crowdsourced egocentric video dataset of human grasping interactions, built for robotics imitation learning. Each clip captures a single grasp action filmed from a first-person perspective with a smartphone; across the full dataset, the clips span 620+ unique everyday object categories.

## What's Included Here

This repository contains a sample of 10 annotated clips from the full EgoGrasp dataset. The sample is intended to help researchers evaluate data quality, annotation depth, and compatibility with their pipelines before requesting access to the full collection.

To request access to the full dataset (1,800+ clips, 620+ object categories), visit robox.to.

## Dataset Summary

- **Sample clips (this repo):** 10
- **Full dataset:** 1,800+ clips across 620+ object categories
- **Perspective:** First-person (egocentric), smartphone-captured
- **Source:** Crowdsourced via the RoboX mobile app
- **Annotations:** Multi-pass pipeline including hand keypoints, object bounding boxes and tracking, action segmentation, and spatial context labels

## Annotation Pipeline

Each clip is processed through a layered annotation pipeline (a sketch of one possible record layout follows the list):

1. **Hand keypoints** — 2D joint positions for both hands across all frames
2. **Object detection and tracking** — Bounding boxes with per-frame object identity tracking
3. **Action segmentation** — Temporal labels for reach, grasp, lift, hold, place, and release phases
4. **Spatial context** — Scene-level labels describing surface type, environment, and camera viewpoint
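
The on-disk annotation format is not documented in this sample card, so the snippet below is only a minimal sketch of how the four layers might be read from a per-clip JSON file. The file name, field names, and the `load_clip_annotations` helper are illustrative assumptions, not the released schema.

```python
import json

def load_clip_annotations(path: str) -> dict:
    """Load a hypothetical per-clip annotation file (schema assumed, not official)."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

ann = load_clip_annotations("clip_0001_annotations.json")  # hypothetical file name

# 1. Hand keypoints: per-frame 2D joint positions for the left and right hands
for frame in ann.get("hand_keypoints", []):
    left_joints = frame.get("left")    # e.g. a list of [x, y] pairs per joint
    right_joints = frame.get("right")

# 2. Object detection and tracking: per-frame boxes tied to a persistent track id
for det in ann.get("object_tracks", []):
    bbox, track_id = det.get("bbox"), det.get("track_id")

# 3. Action segmentation: temporal phase labels (reach, grasp, lift, hold, place, release)
for seg in ann.get("action_segments", []):
    phase = seg.get("label")
    start, end = seg.get("start_frame"), seg.get("end_frame")

# 4. Spatial context: scene-level labels for surface type, environment, and viewpoint
context = ann.get("spatial_context", {})
```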

## Use Cases

EgoGrasp is designed for researchers working on dexterous manipulation, grasp planning, hand-object interaction modeling, and policy learning from human demonstrations. The egocentric viewpoint and real-world diversity make it well suited for sim-to-real transfer and learning from unstructured environments.

## Collection Method

Videos are collected through the RoboX mobile app by distributed contributors following structured task prompts. Contributors record short clips of themselves picking up, holding, and placing common household and workplace objects. Quality filtering and review are applied before clips enter the annotation pipeline.

## Dataset Structure

Structure documentation will be updated as sample files are uploaded.
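
Until that documentation lands, one way to inspect whatever sample files are present is to pull the repository locally with `huggingface_hub`. The repo id below is an assumption based on this card's location and may need adjusting.

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Download the sample repository; the repo id is assumed, not confirmed by this card.
local_dir = snapshot_download(repo_id="robox-tech/EgoGrasp", repo_type="dataset")

# List the files that are actually present; the final layout is not yet documented.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```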

## Full Dataset Access

The complete EgoGrasp dataset is available upon request. Visit robox.to to learn more and submit an access request.

## Citation

If you use EgoGrasp in your research, please cite: