---
license: cc-by-4.0
task_categories:
  - robotics
  - keypoint-detection
tags:
  - LeRobot
  - embodied-ai
  - hand-tracking
  - hierarchical-task-annotations
  - vla
  - human-demonstrations
  - apple-vision-pro
pretty_name: AVP Hand Tracking & Hierarchical Task Annotations Sample
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/chunk-*/episode_*.parquet
---

# Mundo AI: Egocentric Kinematic Dexterity with Hierarchical Task Annotations

A small sample of high-fidelity egocentric human-interaction data covering kitchen and office tasks across multiple operators, pairing wide-FOV RGB video with synchronized 3D hand tracking, head pose, and dense hierarchical task annotations.

This is the LeRobot v2.1-formatted release: 6 episodes, 15,815 frames, ready to load via `LeRobotDataset(...)`. The full schema reference, calibration details, and annotation rules live in `DOCS.md`.

## What makes this distinctive

To the best of our knowledge, this is the only dataset that includes Apple Vision Pro precision over long-horizon tasks. It also carries two layers absent from comparable releases: hierarchical task annotations with per-subtask reasoning, and structured failure recovery via `outcome`, `reflection`, and `resumes_subtask_id` chains. The result is accurate kinematics across the full arc of real activity, including failures and retries that success-only behavioral cloning can't use.

The schema is intended to be compatible with other pre-training data: most datasets can be mixed in by populating just `subtask_text` and leaving `reasoning` and `reflection` empty. The release also serves as a 3D precision anchor that can be combined with lower-precision 2D egocentric data.
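The mixing convention above can be sketched as a tiny adapter that wraps a flat language instruction from an external dataset in Mundo's annotation columns. The exact column set is an assumption based on the fields listed in this README; `meta/info.json` holds the authoritative schema.

```python
# Sketch: adapting an external pre-training frame to the Mundo annotation
# columns. Only subtask_text is populated; reasoning and reflection stay
# empty, as the README suggests for mixed-in datasets. The column set here
# is inferred from this README, not a guaranteed schema.

def to_mundo_annotations(instruction: str) -> dict:
    """Wrap a flat language instruction in Mundo-style annotation columns."""
    return {
        "subtask_text": instruction,   # the only populated annotation field
        "reasoning": "",               # left empty for external data
        "reflection": "",              # left empty for external data
        "outcome": "",                 # no per-subtask outcome available
        "resumes_subtask_id": "",      # no failure-recovery chain
    }

row = to_mundo_annotations("pick up the sponge")
print(row["subtask_text"])   # pick up the sponge
```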

## Positioning vs. Related Datasets

| Dataset | Hand Kinematics | Hierarchical Reasoning | Failure Recovery | Wide-FOV Egocentric |
|---|---|---|---|---|
| **Mundo (this release)** | 27-joint AVP-native (incl. forearm) | task → subtask with per-subtask reasoning | tactical diagnostics + retry chains | 155° fisheye |
| EgoDex | ✓ 25-joint AVP-native | partial: single task description per recording | none | partial: ~100° AVP-native |
| OpenEgo | partial: 21-joint MANO, mixed quality (some native AVP, some MediaPipe + depth back-projection) | ✓ action primitives with actor labels | none | varies (six source datasets) |
| Ego4D | post-hoc only | partial: goal/step/substep on subset | none | headset-native |
| DROID | teleop joints only | flat instructions | partial: binary outcome only | third-person |
| Open X-Embodiment | varies | no standard | none | varies |

Mundo subtasks sit at a similar temporal granularity to Ego4D's substeps, which simplifies harmonizing the two datasets at the action level.


## Quickstart

This is a gated dataset. Once your access request has been approved, authenticate with your Hugging Face token before downloading:

```bash
huggingface-cli login    # paste your HF token when prompted
pip install lerobot
```

### Loading via the LeRobot library

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("thomasmundo/MundoAI-Robotics-Sample")
sample = dataset[0]
# observation.images.cam_egocentric, observation.state (864-dim),
# observation.confidences (54-dim), observation.head_pose (16-dim),
# observation.camera_pose (16-dim), subtask_text, task_type, outcome,
# reasoning, reflection, subtask_id, parent_subtask_id, resumes_subtask_id, ...
```

Full column reference, dtypes, and the `observation.state` encoding convention are in `DOCS.md`.
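As a rough intuition for the state vector's size: 864 = 54 × 16, and `observation.confidences` is 54-dim, so one plausible layout is 54 flattened 4×4 transforms, one per tracked element. That layout is an assumption for illustration only; the real encoding convention is documented in `DOCS.md` and `meta/info.json`.

```python
import numpy as np

# Hypothetical decode of the 864-dim observation.state as 54 row-major
# 4x4 homogeneous transforms. This reading is an assumption inferred from
# the dimensions in this README (864 = 54 x 16, confidences are 54-dim);
# check DOCS.md for the authoritative convention before relying on it.

def decode_state(state: np.ndarray) -> np.ndarray:
    """Reshape an 864-float state vector into 54 row-major 4x4 matrices."""
    assert state.shape == (864,)
    return state.reshape(54, 4, 4)

state = np.tile(np.eye(4).ravel(), 54)     # dummy state: 54 identity poses
poses = decode_state(state)
translations = poses[:, :3, 3]             # per-element translation vectors
print(poses.shape, translations.shape)     # (54, 4, 4) (54, 3)
```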

### Visualizing episodes locally

For browsable playback with the hierarchical annotations overlaid as a HUD, the dataset ships two LeRobot-native visualizers under `tools/`. Clone the repo and run them directly:

```bash
git clone https://huggingface.co/datasets/thomasmundo/MundoAI-Robotics-Sample
cd MundoAI-Robotics-Sample
pip install -r requirements.txt

# Full HUD: skeleton overlay + global task + subtask pills + reasoning + reflection
python3 tools/visualize.py \
  --dataset-root . \
  --episode 0 \
  --output episode_0_visualize.mp4

# Skeleton only (no HUD), useful for inspecting tracking quality
python3 tools/visualize_skeleton.py \
  --dataset-root . \
  --episode 0 \
  --output episode_0_skeleton.mp4
```

Both scripts read directly from the parquet, video, and per-device calibration files. Use `--episode 0` through `--episode 5` to render any of the six episodes. Output is a flat MP4 you can play in any video player or share directly.

## Episode Index

| Episode | Task | Frames | Operator | Location | Class |
|---|---|---|---|---|---|
| 0 | Assemble the base and seat of an office chair | 3,147 | op_002 | office | assembly |
| 1 | Open a beer box and open a can of beer | 390 | op_001 | kitchen | other |
| 2 | Hand-wash a pan and several utensils at the sink | 1,169 | op_001 | kitchen | cleaning |
| 3 | Unbox the office chair parts | 4,365 | op_002 | office | assembly |
| 4 | Put away a bowl and several plates from the dishwasher | 1,079 | op_003 | kitchen | organization |
| 5 | Scrub the kitchen countertops with a soapy sponge | 5,665 | op_001 | kitchen | cleaning |

## A worked example: the failure-recovery layer

Episode 5 (*Scrub the kitchen countertops with a soapy sponge*) is the dense recovery example in this sample. The operator's first scrub leaves soap residue, a second pass still leaves residue, and the third pass is logged as `complete_suboptimal`. The full retry chain (`st_015` → `st_009`, `st_017` → `st_015`, `st_019` → `st_017`) can be reconstructed by grouping on `subtask_id` over the 1,933 frames where `resumes_subtask_id != ""`.

To browse recovery moments in the HF dataset viewer, filter on `resumes_subtask_id != ""`. The `reasoning`, `reflection`, and `outcome` fields tell the diagnostic story per attempt.
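The chain reconstruction described above can be sketched in a few lines over synthetic frames shaped like episode 5's retry chain. Field names follow this README; the frame values here are illustrative only, not real dataset rows.

```python
# Sketch: reconstructing a retry chain from resumes_subtask_id links,
# using synthetic frames that mirror episode 5's chain as described in
# the README. Real frames carry many more columns (see meta/info.json).

frames = [
    {"subtask_id": "st_009", "resumes_subtask_id": ""},
    {"subtask_id": "st_015", "resumes_subtask_id": "st_009"},
    {"subtask_id": "st_017", "resumes_subtask_id": "st_015"},
    {"subtask_id": "st_019", "resumes_subtask_id": "st_017"},
]

# One link per subtask: retry -> the subtask it resumes.
links = {
    f["subtask_id"]: f["resumes_subtask_id"]
    for f in frames
    if f["resumes_subtask_id"] != ""
}

# Walk back from the final attempt to the original subtask.
chain, cur = ["st_019"], "st_019"
while links.get(cur, "") != "":
    cur = links[cur]
    chain.append(cur)

print(" -> ".join(chain))   # st_019 -> st_017 -> st_015 -> st_009
```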

The full semantics of the recovery schema (when `outcome` flips, when `reflection` is required, and how `parent_subtask_id` and `resumes_subtask_id` relate to each other) are in `DOCS.md` §5.


## File Layout

LeRobot v2.1 layout: per-frame data in parquet, video stored separately, static metadata in meta/.

```text
.
├── meta/
│   ├── info.json                  # global metadata + features schema
│   ├── episodes.jsonl             # per-episode records (length, task, operator, etc.)
│   ├── episodes_stats.jsonl       # per-episode min/max/mean/std for normalization
│   ├── tasks.jsonl                # task_index to natural-language description
│   └── calibrations/
│       └── dev_003.json           # camera intrinsics + head-to-camera offset, keyed by device
├── data/chunk-000/
│   └── episode_NNNNNN.parquet     # 1 row per video frame
├── videos/chunk-000/observation.images.cam_egocentric/
│   └── episode_NNNNNN.mp4
└── tools/
    ├── visualize.py               # full HUD visualizer (LeRobot-native)
    └── visualize_skeleton.py      # skeleton-only visualizer
```

The full feature schema, including per-element names for the 864-dim `observation.state` so each float is self-describing, is recorded in `meta/info.json`. Calibration matrices are keyed by `device_id` so episodes can share a single physical capture rig's calibration without duplication.
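Given the per-device head-to-camera offset, a camera pose can in principle be derived from the head pose by composing the two transforms. Treating both 16-dim pose fields as row-major flattened 4×4 homogeneous transforms, and using the composition order `head @ offset`, are assumptions for this sketch; the authoritative convention lives in `DOCS.md` and the `meta/calibrations/` files.

```python
import numpy as np

# Sketch: composing the 16-dim head pose with a per-device head-to-camera
# offset to get a camera pose. Row-major flattening and the head @ offset
# composition order are ASSUMPTIONS, not confirmed by this README; check
# DOCS.md and meta/calibrations/ for the actual convention.

def compose_camera_pose(head_pose_16: np.ndarray,
                        head_to_cam_16: np.ndarray) -> np.ndarray:
    """Return a flattened 4x4 camera pose from head pose and device offset."""
    head = head_pose_16.reshape(4, 4)
    offset = head_to_cam_16.reshape(4, 4)
    return (head @ offset).ravel()

head_pose = np.eye(4).ravel()          # dummy: head at the world origin
offset = np.eye(4)
offset[:3, 3] = [0.0, 0.08, 0.05]      # dummy head-to-camera translation
cam_pose = compose_camera_pose(head_pose, offset.ravel())
print(cam_pose.reshape(4, 4)[:3, 3])   # camera sits at the offset from the head
```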


## Consent & Privacy

All operators and any bystanders appearing in this dataset have provided signed, informed consent for the use of their image and movements in research and downstream model development, including commercial applications. Participants were briefed on the intended use of the data prior to capture. Users of this dataset are expected to honor the spirit of that consent: do not attempt to re-identify individuals, and do not extract or publish biometric identifiers from the recordings.

## License

This dataset is released under CC-BY-4.0. You are free to use, adapt, and redistribute the data, including for commercial purposes such as training models intended for production deployment, provided you give appropriate attribution to Mundo AI.

## Acknowledgments

This release builds on conventions established by prior work in egocentric and embodied AI. Joint nomenclature follows EgoDex (Apple, 2025). The precision-anchor pretraining paradigm was demonstrated by EgoScale (2026), which scaled VLA training by mixing EgoDex with ~20K hours of broader egocentric data. The annotation framework is informed by Zawalski et al. (2024), *Robotic Control via Embodied Chain-of-Thought Reasoning*. Hierarchical task structure draws inspiration from Ego4D Goal-Step (Song et al., NeurIPS 2023). Format conventions follow LeRobot (Hugging Face, 2024).

## Citation

```bibtex
@misc{mundo2026kinematic,
  title  = {Mundo AI: Egocentric Kinematic Dexterity with Dense Hierarchical Task Annotations},
  author = {{Mundo AI}},
  year   = {2026},
  note   = {Sample release v0.1 (LeRobot)},
  url    = {https://huggingface.co/datasets/thomasmundo/MundoAI-Robotics-Sample}
}
```

## Access and Contact

This is a sample release. More details on capture methodology, annotation pipeline, and additional task categories are in DOCS.md and available on request.

For collaboration interest or technical questions: thomas@mundoai.world