---
license: cc-by-nc-4.0
task_categories:
  - robotics
tags:
  - lerobot
  - hand-pose
  - rgb-d
  - humanoid
  - manipulation
  - 6dof
  - mediapipe
  - egocentric
  - motion-semantics
size_categories:
  - 10K<n<100K
language:
  - en
pretty_name: Dynamic Intelligence - Egocentric Human Motion Annotation Dataset
---

# Dynamic Intelligence - Egocentric Human Motion Annotation Dataset

RGB-D hand manipulation dataset captured with iPhone 13 TrueDepth sensor for humanoid robot training. Includes 6-DoF hand pose trajectories, synchronized video, and semantic motion annotations.


## Dataset Overview

| Metric | Value |
|---|---|
| Episodes | 147 |
| Total Frames | ~72,000 |
| FPS | 30 |
| Tasks | 10 manipulation tasks |
| Total Duration | ~40 minutes |
| Avg Episode Length | ~16.3 seconds |

## Task Distribution

| Task ID | Description | Episodes |
|---|---|---|
| Task 1 | Fold the white t-shirt on the bed | 8 |
| Task 2 | Fold the jeans on the bed | 10 |
| Task 3 | Fold two underwear and stack them | 10 |
| Task 4 | Put the pillow on the right place | 10 |
| Task 5 | Pick up plate and glass, put on stove | 10 |
| Task 6 | Go out the door and close it | 9 |
| Task 7 | Pick up sandals, put next to scale | 10 |
| Task 8 | Put cloth in basket, close drawer | 10 |
| Task 9 | Screw the cap on your bottle | 10 |
| Task 10 | Pick up two objects, put on bed | 10 |

> **Note:** Task distribution is approximate and will be updated with per-episode language instructions.


## Repository Structure

```
humanoid-robots-training-dataset/
│
├── data/
│   ├── chunk-000/                    # Parquet files (episodes 0-99)
│   │   ├── episode_000000.parquet
│   │   └── ...
│   └── chunk-001/                    # Parquet files (episodes 100-146)
│       ├── episode_000100.parquet
│       └── ...
│
├── videos/
│   ├── chunk-000/rgb/                # MP4 videos (episodes 0-99)
│   │   ├── episode_000000.mp4
│   │   └── ...
│   └── chunk-001/rgb/                # MP4 videos (episodes 100-146)
│       ├── episode_000100.mp4
│       └── ...
│
├── meta/                             # Metadata & annotations
│   ├── info.json                     # Dataset configuration (LeRobot format)
│   ├── stats.json                    # Feature min/max/mean/std statistics
│   ├── events.json                   # Disturbance & recovery annotations
│   └── annotations_motion_v1_frames.json  # Motion semantic annotations
│
└── README.md
```
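Episode files are split across chunks of 100 by index, so the path for a given episode can be derived directly. A small helper sketching that mapping (the chunk layout is taken from the tree above):

```python
def episode_paths(episode_index: int) -> tuple[str, str]:
    """Map an episode index (0-146) to its parquet and video paths.

    Assumes the chunk layout shown in the tree above: episodes 0-99 in
    chunk-000, episodes 100-146 in chunk-001 (i.e. 100 episodes per chunk).
    """
    chunk = f"chunk-{episode_index // 100:03d}"
    stem = f"episode_{episode_index:06d}"
    return (f"data/{chunk}/{stem}.parquet", f"videos/{chunk}/rgb/{stem}.mp4")

print(episode_paths(42))
print(episode_paths(123))
```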

## Data Schema

### Parquet Columns (per frame)

| Column | Type | Description |
|---|---|---|
| `episode_index` | int64 | Episode number (0-146) |
| `frame_index` | int64 | Frame within episode |
| `timestamp` | float64 | Time in seconds |
| `language_instruction` | string | Task description |
| `observation.camera_pose` | float[6] | Camera 6-DoF (x, y, z, roll, pitch, yaw) |
| `observation.left_hand` | float[9] | Left hand keypoints (wrist + thumb + index) |
| `observation.right_hand` | float[9] | Right hand keypoints (wrist + index + middle) |
| `action.camera_delta` | float[6] | Camera delta 6-DoF |
| `action.left_hand_delta` | float[9] | Left hand delta keypoints |
| `action.right_hand_delta` | float[9] | Right hand delta keypoints |
| `rgb` | video | Synchronized RGB video frame |
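The 9-float hand vectors are easiest to work with reshaped into per-joint rows. A minimal sketch, assuming a joint-major layout (`[joint0_xyz, joint1_xyz, joint2_xyz]`), which the table implies but does not state explicitly:

```python
import numpy as np

# Hypothetical left-hand reading for one frame; real values come from the
# `observation.left_hand` parquet column. Joint-major layout is an assumption.
left_hand = np.array([0.01, 0.02, 0.30,   # wrist (x, y, z)
                      0.04, 0.01, 0.28,   # thumb
                      0.05, 0.00, 0.27])  # index

joints = left_hand.reshape(3, 3)          # rows: joints, cols: (x, y, z)
wrist, thumb, index_joint = joints
```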

### 6-DoF Format

**Coordinate System:**

- Origin: Camera (iPhone TrueDepth)
- X: Right (positive)
- Y: Down (positive)
- Z: Forward (positive, into scene)
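A 6-DoF pose can be expanded into a homogeneous transform for composing with other frames. A sketch, assuming a ZYX (yaw-pitch-roll) Euler order with angles in radians; the dataset does not state its Euler convention, so verify against `meta/info.json` before relying on this:

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert a 6-DoF pose (x, y, z, roll, pitch, yaw) to a 4x4 transform.

    Euler order (ZYX) and radians are assumptions, not documented by the
    dataset -- check meta/info.json for the actual convention.
    """
    x, y, z, roll, pitch, yaw = pose
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about X
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T
```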

## Motion Semantics Annotations

**File:** `meta/annotations_motion_v1_frames.json`

Coarse temporal segmentation with motion intent, phase, and error labels.

### Motion Types

`grasp` | `pull` | `align` | `fold` | `smooth` | `insert` | `rotate` | `open` | `close` | `press` | `hold` | `release` | `place`
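The annotation file's exact schema is not documented here; as an illustration only, here is a toy list of segments in one plausible shape. All field names below are assumptions — inspect `meta/annotations_motion_v1_frames.json` for the real keys:

```python
from collections import Counter

# Toy segments in an assumed shape -- the real field names in
# meta/annotations_motion_v1_frames.json may differ; inspect the file first.
segments = [
    {"episode_index": 0, "motion_type": "grasp",   "start_frame": 0,   "end_frame": 45},
    {"episode_index": 0, "motion_type": "fold",    "start_frame": 46,  "end_frame": 210},
    {"episode_index": 0, "motion_type": "release", "start_frame": 211, "end_frame": 240},
]

counts = Counter(seg["motion_type"] for seg in segments)
longest = max(segments, key=lambda s: s["end_frame"] - s["start_frame"])
print(counts)
print(longest["motion_type"])
```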


## Events Metadata

**File:** `meta/events.json`

Disturbances and recovery actions for select episodes.

### Disturbance Types

| Type | Description |
|---|---|
| `OCCLUSION` | Hand temporarily blocked from camera |
| `TARGET_MOVED` | Object shifted unexpectedly |
| `SLIP` | Object slipped during grasp |
| `COLLISION` | Unintended contact |
| `DEPTH_DROPOUT` | Depth sensor lost valid readings |

### Recovery Actions

| Action | Description |
|---|---|
| `REGRASP` | Release and re-acquire object |
| `REACH_ADJUST` | Modify approach trajectory |
| `ABORT` | Stop current action |
| `REPLAN` | Compute new action sequence |
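As with the motion annotations, the structure of `meta/events.json` is not specified here; this sketch filters a toy event list in one assumed shape (a flat list of records with `episode_index`, `type`, and `recovery` keys — hypothetical names, verify against the real file):

```python
# Assumed events.json shape; the real structure in meta/events.json may differ.
events = [
    {"episode_index": 12, "type": "SLIP",          "recovery": "REGRASP",      "frame": 310},
    {"episode_index": 12, "type": "OCCLUSION",     "recovery": "REACH_ADJUST", "frame": 402},
    {"episode_index": 57, "type": "DEPTH_DROPOUT", "recovery": "REPLAN",       "frame": 95},
]

slips = [e for e in events if e["type"] == "SLIP"]
episodes_with_events = sorted({e["episode_index"] for e in events})
print(episodes_with_events)
```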

## Usage

### With LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("DynamicIntelligence/humanoid-robots-training-dataset")

frame = dataset[0]                            # indexing returns a single frame
state = frame["observation.camera_pose"]      # [6] camera 6-DoF
rgb = frame["observation.images.rgb"]         # video frame
task = frame["language_instruction"]          # task description
```
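If you concatenate several per-episode files into one flat frame table, rows can be regrouped by `episode_index` and `frame_index`. A pure-Python sketch with synthetic rows standing in for real data (note that `itertools.groupby` assumes rows are already sorted by episode):

```python
from itertools import groupby

# Synthetic frame rows mimicking the per-frame schema above.
rows = [
    {"episode_index": 0, "frame_index": 0, "timestamp": 0.000},
    {"episode_index": 0, "frame_index": 1, "timestamp": 0.033},
    {"episode_index": 1, "frame_index": 0, "timestamp": 0.000},
]

episodes = {k: list(g) for k, g in groupby(rows, key=lambda r: r["episode_index"])}
print({k: len(v) for k, v in episodes.items()})
```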

### Direct Parquet Access

```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DynamicIntelligence/humanoid-robots-training-dataset",
    filename="data/chunk-000/episode_000000.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns.tolist())
print(f"Frames: {len(df)}")
```
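With a dataframe in hand, one natural consumer pattern is integrating the per-frame deltas back into an absolute trajectory. Whether `action.camera_delta` is a simple additive frame-to-frame difference is an assumption to verify against `meta/info.json`; synthetic data keeps the sketch runnable:

```python
import numpy as np

# Synthetic stand-in for an episode's camera poses; a real consumer would use
# the df["observation.camera_pose"] and df["action.camera_delta"] columns.
rng = np.random.default_rng(0)
poses = np.cumsum(rng.normal(0.0, 0.01, size=(100, 6)), axis=0)

# Assumption: deltas are additive differences (pose[t] - pose[t-1], with the
# first delta measured from the origin). Check meta/info.json before relying
# on this.
deltas = np.diff(poses, axis=0, prepend=np.zeros((1, 6)))

reconstructed = np.cumsum(deltas, axis=0)
print("max reconstruction error:", np.abs(reconstructed - poses).max())
```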

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{dynamic_intelligence_2025,
  author = {Dynamic Intelligence},
  title = {Egocentric Human Motion Annotation Dataset},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/DynamicIntelligence/humanoid-robots-training-dataset}
}
```

## Contact

- Email: shayan@dynamicintelligence.company
- Organization: Dynamic Intelligence


## Hand Landmark Reference

Hand tracking follows the MediaPipe hand-landmark model (see the `mediapipe` tag above). `observation.left_hand` and `observation.right_hand` each store 3D keypoints for three joints per hand (9 floats = 3 joints x (x, y, z)); see the schema table for which joints are tracked on each hand.


## Visualizer

Explore the dataset interactively: DI Hand Pose Sample Dataset Viewer

- **Enable plots:** Click the white checkbox next to joint names to show data in the graph
- **Full data access:** All joint data is available in the parquet files under Files and versions