---
license: cc-by-nc-4.0
task_categories:
  - robotics
tags:
  - lerobot
  - hand-pose
  - rgb-d
  - humanoid
  - manipulation
  - 6dof
  - mediapipe
  - egocentric
  - motion-semantics
size_categories:
  - 10K<n<100K
language:
  - en
pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
---

# Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

An RGB-D hand manipulation dataset captured with the iPhone 13 TrueDepth sensor for humanoid robot training. It includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.


## Dataset Overview

| Metric | Value |
|---|---|
| Episodes | 147 |
| Total Frames | ~72,000 |
| FPS | 30 |
| Tasks | 12 manipulation tasks |
| Total Duration | ~40 minutes |
| Avg. Episode Length | ~16.3 seconds |

## Task Distribution

| Task | Description | Episodes | Count |
|---|---|---|---|
| 1 | Fold the t-shirt on the bed. | 0–7 | 8 |
| 2 | Pick up the two items on the floor and put them on the bed. | 8–17 | 10 |
| 3 | Fold the jeans on the bed. | 18–27 | 10 |
| 4 | Fold the underwear on the table. | 28–37 | 10 |
| 5 | Put the pillow in its correct place. | 38–47 | 10 |
| 6 | Place the tableware on the kitchen counter. | 48–57 | 10 |
| 7 | Get out of the room and close the door behind you. | 58–66 | 9 |
| 8 | Put the sandals in the right place. | 67–76 | 10 |
| 9 | Put the cleaning cloth in the laundry basket. | 77–86 | 10 |
| 10 | Screw the cap back on the bottle. | 87–96 | 10 |
| 11 | Tuck the chairs into the table. | 97–127 | 31 |
| 12 | Put the dishes in the sink. | 128–146 | 19 |
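The episode ranges above can be mapped back to task instructions with a small lookup helper. This is a convenience sketch built from the table, not part of the dataset tooling; `task_for_episode` is a hypothetical name:

```python
import bisect

# First episode index of each task, taken from the table above.
TASK_STARTS = [0, 8, 18, 28, 38, 48, 58, 67, 77, 87, 97, 128]
TASK_NAMES = [
    "Fold the t-shirt on the bed.",
    "Pick up the two items on the floor and put them on the bed.",
    "Fold the jeans on the bed.",
    "Fold the underwear on the table.",
    "Put the pillow in its correct place.",
    "Place the tableware on the kitchen counter.",
    "Get out of the room and close the door behind you.",
    "Put the sandals in the right place.",
    "Put the cleaning cloth in the laundry basket.",
    "Screw the cap back on the bottle.",
    "Tuck the chairs into the table.",
    "Put the dishes in the sink.",
]

def task_for_episode(episode_index: int) -> str:
    """Map an episode index (0-146) to its task instruction."""
    if not 0 <= episode_index <= 146:
        raise ValueError(f"episode_index out of range: {episode_index}")
    # bisect_right gives the insertion point; subtract 1 for the task slot.
    return TASK_NAMES[bisect.bisect_right(TASK_STARTS, episode_index) - 1]
```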

## Repository Structure

```
humanoid-robots-training-dataset/
│
├── data/
│   ├── chunk-000/                    # Parquet files (episodes 0-99)
│   │   ├── episode_000000.parquet
│   │   └── ...
│   └── chunk-001/                    # Parquet files (episodes 100-146)
│       ├── episode_000100.parquet
│       └── ...
│
├── videos/
│   ├── chunk-000/rgb/                # MP4 videos (episodes 0-99)
│   │   ├── episode_000000.mp4
│   │   └── ...
│   └── chunk-001/rgb/                # MP4 videos (episodes 100-146)
│       ├── episode_000100.mp4
│       └── ...
│
├── meta/                             # Metadata & annotations
│   ├── info.json                     # Dataset configuration (LeRobot format)
│   ├── stats.json                    # Feature min/max/mean/std statistics
│   ├── events.json                   # Disturbance & recovery annotations
│   └── annotations_motion_v1_frames.json  # Motion semantic annotations
│
└── README.md
```

## Data Schema

### Parquet Columns (per frame)

| Column | Type | Description |
|---|---|---|
| `episode_index` | int64 | Episode number (0–146) |
| `frame_index` | int64 | Frame number within the episode |
| `timestamp` | float64 | Time in seconds |
| `language_instruction` | string | Task description |
| `observation.camera_pose` | float[6] | Camera 6-DoF pose (x, y, z, roll, pitch, yaw) |
| `observation.left_hand` | float[9] | Left hand keypoints (wrist + thumb + index) |
| `observation.right_hand` | float[9] | Right hand keypoints (wrist + index + middle) |
| `action.camera_delta` | float[6] | Camera 6-DoF delta |
| `action.left_hand_delta` | float[9] | Left hand keypoint deltas |
| `action.right_hand_delta` | float[9] | Right hand keypoint deltas |
| `rgb` | video | Synchronized RGB video frame |
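A minimal sketch of how these columns fit together, assuming the delta actions are frame-to-frame differences of the corresponding observations and that each 9-float hand vector is 3 joints × (x, y, z) — both are interpretations of the schema above, not documented guarantees:

```python
import numpy as np

def apply_delta(pose: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Predict the next pose, assuming delta = pose[t+1] - pose[t]."""
    return pose + delta

# Camera pose: (x, y, z, roll, pitch, yaw)
pose_t = np.zeros(6)
delta_t = np.array([0.01, 0.0, 0.02, 0.0, 0.001, 0.0])
pose_t1 = apply_delta(pose_t, delta_t)

# Hand vector: 9 floats, reshaped into 3 joints x 3 coordinates.
left_hand = np.arange(9.0)        # placeholder for observation.left_hand
joints = left_hand.reshape(3, 3)  # rows: wrist, thumb, index (assumed order)
```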

### 6-DoF Format

Coordinate system:

- Origin: camera (iPhone TrueDepth)
- X: right (positive)
- Y: down (positive)
- Z: forward (positive, into the scene)
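For downstream geometry, a 6-DoF pose can be lifted to a 4×4 homogeneous transform. The sketch below assumes radians and an intrinsic yaw-pitch-roll (ZYX) composition; the card does not state the Euler convention, so verify against `meta/info.json` before relying on it:

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert (x, y, z, roll, pitch, yaw) to a 4x4 homogeneous transform.

    Assumes radians and intrinsic ZYX (yaw-pitch-roll) composition --
    an assumption, since the dataset card does not specify the convention.
    """
    x, y, z, roll, pitch, yaw = pose
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about X
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T
```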

## Motion Semantics Annotations

File: `meta/annotations_motion_v1_frames.json`

Coarse temporal segmentation with motion intent, phase, and error labels.

### Motion Types

`grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
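A quick way to survey the labels is to tally motion types across segments. The sketch below assumes the JSON is a list of segment dicts with a `"motion"` key — the actual schema is not documented in this card, so adapt the accessors to the real file:

```python
import json
from collections import Counter

def count_motion_types(path: str) -> Counter:
    """Tally motion-type labels, assuming a list of {"motion": ...} segments."""
    with open(path) as f:
        segments = json.load(f)
    return Counter(seg["motion"] for seg in segments)
```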


## Events Metadata

File: `meta/events.json`

Disturbances and recovery actions for select episodes.

### Disturbance Types

| Type | Description |
|---|---|
| `OCCLUSION` | Hand temporarily blocked from the camera |
| `TARGET_MOVED` | Object shifted unexpectedly |
| `SLIP` | Object slipped during grasp |
| `COLLISION` | Unintended contact |
| `DEPTH_DROPOUT` | Depth sensor lost valid readings |

### Recovery Actions

| Action | Description |
|---|---|
| `REGRASP` | Release and re-acquire the object |
| `REACH_ADJUST` | Modify the approach trajectory |
| `ABORT` | Stop the current action |
| `REPLAN` | Compute a new action sequence |

## Usage

### With LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

frame = dataset[0]                          # one frame sample
state = frame["observation.camera_pose"]    # [6] camera 6-DoF
rgb = frame["observation.images.rgb"]       # video frame
task = frame["language_instruction"]        # "Fold the t-shirt on the bed."
```

### Direct Parquet Access

```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="data/chunk-000/episode_000000.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df["language_instruction"].iloc[0])  # "Fold the t-shirt on the bed."
print(f"Frames: {len(df)}")
```
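Once an episode's DataFrame is loaded, the 30 FPS timing can be sanity-checked from the `timestamp` column. A small synthetic frame stands in for the parquet data here:

```python
import pandas as pd

# Synthetic stand-in for a per-episode DataFrame loaded via read_parquet.
df = pd.DataFrame({
    "frame_index": range(5),
    "timestamp": [i / 30.0 for i in range(5)],
})

dt = df["timestamp"].diff().dropna()  # per-frame time steps in seconds
fps = 1.0 / dt.mean()                 # should be ~30 for this dataset
```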

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{dynamic_intelligence_2025,
  author = {Dynamic Intelligence},
  title = {Egocentric Human Motion Annotation Dataset},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
}
```

## Contact

- Email: shayan@dynamicintelligence.company
- Organization: Dynamic Intelligence


## Hand Landmark Reference

Each hand is tracked via a subset of MediaPipe hand landmarks: `observation.left_hand` holds 3D keypoints for the left wrist, thumb, and index finger, and `observation.right_hand` for the right wrist, index, and middle fingers (3 joints × 3 coordinates = 9 floats each).


## Visualizer

Explore the dataset interactively: DI Hand Pose Sample Dataset Viewer