robox-tech committed
Commit e0bf06c · verified · 1 Parent(s): bdb11ad

Update README.md

Files changed (1):
  1. README.md +67 -3

README.md CHANGED
@@ -1,3 +1,67 @@
- ---
- license: cc-by-nc-sa-4.0
- ---
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - robotics
+ - video-classification
+ - object-detection
+ tags:
+ - egocentric
+ - grasping
+ - manipulation
+ - imitation-learning
+ - hand-object-interaction
+ - robotics
+ - crowdsourced
+ language:
+ - en
+ pretty_name: EgoGrasp
+ size_categories:
+ - n<1K
+ ---
+
+ # EgoGrasp
+
+ EgoGrasp is a crowdsourced egocentric video dataset of human grasping interactions, built for robotics imitation learning. Each clip captures a single grasp action filmed from a first-person perspective on a smartphone; together, the clips cover 620+ unique everyday object categories.
+
+ ## What's Included Here
+
+ This repository contains a **sample of 10 annotated clips** from the full EgoGrasp dataset. The sample is intended to help researchers evaluate data quality, annotation depth, and compatibility with their pipelines before requesting access to the full collection.
+
+ **To request access to the full dataset (1,800+ clips, 620+ object categories), visit [robox.to](https://robox.to).**
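+
+ A quick way to fetch this 10-clip sample for local inspection is `huggingface_hub`'s `snapshot_download`. This is a minimal sketch: the `repo_id` below is assumed from this repository's owner and dataset name and is not stated in the README.
+
+ ```python
+ # Minimal sketch: download the sample clips and annotations locally.
+ # The repo id is an assumption; substitute this repository's actual id.
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download(
+     repo_id="robox-tech/EgoGrasp",  # hypothetical id
+     repo_type="dataset",
+ )
+ print(f"Sample downloaded to {local_dir}")
+ ```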
+
+ ## Dataset Summary
+
+ - **Sample clips (this repo):** 10
+ - **Full dataset:** 1,800+ clips across 620+ object categories
+ - **Perspective:** First-person (egocentric), smartphone-captured
+ - **Source:** Crowdsourced via the RoboX mobile app
+ - **Annotations:** Multi-pass pipeline including hand keypoints, object bounding boxes and tracking, action segmentation, and spatial context labels
+
+ ## Annotation Pipeline
+
+ Each clip is processed through a layered annotation pipeline (an illustrative sketch of a per-clip record follows the list):
+
+ 1. **Hand keypoints** — 2D joint positions for both hands across all frames
+ 2. **Object detection and tracking** — Bounding boxes with per-frame object identity tracking
+ 3. **Action segmentation** — Temporal labels for reach, grasp, lift, hold, place, and release phases
+ 4. **Spatial context** — Scene-level labels describing surface type, environment, and camera viewpoint
+
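+ Since the repository's structure documentation is still pending (see Dataset Structure below), the record below is only an illustrative sketch of how these four layers could fit together per clip; every field name is an assumption, not EgoGrasp's actual schema.
+
+ ```python
+ # Illustrative only: a hypothetical per-clip record covering the four
+ # annotation layers. Field names are assumptions, not EgoGrasp's schema.
+ from dataclasses import dataclass, field
+
+ @dataclass
+ class FrameAnnotation:
+     frame_index: int
+     # "left" / "right" -> list of 2D (x, y) joint positions
+     hand_keypoints: dict[str, list[tuple[float, float]]]
+     # object track id -> (x, y, width, height) bounding box
+     object_boxes: dict[str, tuple[float, float, float, float]]
+
+ @dataclass
+ class ClipAnnotation:
+     clip_id: str
+     object_category: str
+     # Temporal action segments as (phase, start_frame, end_frame), with
+     # phases such as "reach", "grasp", "lift", "hold", "place", "release"
+     segments: list[tuple[str, int, int]] = field(default_factory=list)
+     # Scene-level context: surface type, environment, camera viewpoint
+     context: dict[str, str] = field(default_factory=dict)
+     frames: list[FrameAnnotation] = field(default_factory=list)
+ ```
+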
+ ## Use Cases
+
+ EgoGrasp is designed for researchers working on dexterous manipulation, grasp planning, hand-object interaction modeling, and policy learning from human demonstrations. The egocentric viewpoint and real-world diversity make it well suited for sim-to-real transfer and learning from unstructured environments.
+
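+ As one concrete (and again hypothetical) example of demonstration preprocessing, the helper below slices out the frames of a single action phase from the sketch records above:
+
+ ```python
+ # Sketch: collect the frames belonging to one action phase (e.g. "grasp"),
+ # using the hypothetical ClipAnnotation/FrameAnnotation records above.
+ def frames_for_phase(clip: ClipAnnotation, phase: str) -> list[FrameAnnotation]:
+     selected: list[FrameAnnotation] = []
+     for name, start, end in clip.segments:
+         if name == phase:
+             selected.extend(f for f in clip.frames if start <= f.frame_index <= end)
+     return selected
+ ```
+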
+ ## Collection Method
+
+ Videos are collected through the RoboX mobile app by distributed contributors following structured task prompts. Contributors record short clips of themselves picking up, holding, and placing common household and workplace objects. Quality filtering and review are applied before clips enter the annotation pipeline.
+
+ ## Dataset Structure
+
+ > Structure documentation will be updated as sample files are uploaded.
+
+ ## Full Dataset Access
+
+ The complete EgoGrasp dataset is available upon request. Visit [robox.to](https://robox.to) to learn more and submit an access request.
+
+ ## Citation
+
+ If you use EgoGrasp in your research, please cite: