---
license: cc-by-nc-4.0
task_categories:
- robotics
- keypoint-detection
tags:
- embodied-ai
- hand-tracking
- ecot
- vla
- human-demonstrations
- apple-vision-pro
pretty_name: AVP Hand Tracking & ECoT Annotations Sample
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: "data/**/*.jsonl"
---
# Mundo AI — Egocentric Kinematic Dexterity with Dense ECoT Language Annotations
<p align="center">
<img src="assets/data_visual_image.jpg" alt="Mundo AI — 27-joint hand skeleton, fisheye RGB, and three-level ECoT caption with failure-recovery reflection" width="620"/>
</p>
<p align="center">
<em>Real frame from the release: two-hand tracking during a grip-slip failure. The reflection trace shows the operator's recovery reasoning.</em>
</p>
<p align="center">
<sub><em>Overlays are 2D projections through the fisheye model; underlying 3D kinematics are precise.</em></sub>
</p>
<p align="center">
<a href="https://www.youtube.com/watch?v=wMbZuU-tzW4">Watch the full walkthrough →</a>
</p>
## 1. Overview
This release contains office-scale samples of high-fidelity, egocentric human interaction data. Each trajectory pairs wide-FOV RGB video with synchronized 3D hand-tracking, head-pose data, and dense Embodied Chain-of-Thought (ECoT) language annotations with a strong **emphasis on failure recovery and course-correction**.
The dataset is designed to address a specific gap in embodied AI training: while in-the-wild egocentric video is abundant, most of it lacks the 3D kinematic fidelity required to learn precise, dexterous manipulation. High-precision kinematic datasets of this type are intended to pair with and anchor broader in-the-wild footage during pretraining of Vision-Language-Action (VLA) models.
Relative to prior egocentric kinematic datasets, this release adds:
* **Dense ECoT Annotations** — timestamped, hierarchical breakdowns of global tasks into subtasks and action segments.
* **Explicit Failure & Reflection Traces** — schema fields capturing tactical diagnostics and recovery reasoning, enabling models to learn troubleshooting rather than pure imitation.
* **Wide-FOV RGB capture** paired with high-frequency kinematic tracking — providing rich environmental context alongside precise hand poses.
To our knowledge, the combination of precise kinematic tracking with ECoT reasoning and explicit failure-recovery annotation is not present in any other public dataset.
### What's Included in This Sample
| Modality | Description |
|:---|:---|
| **Egocentric RGB** | Wide-FOV fisheye video, head-mounted, captured during natural task execution |
| **Hand Kinematics** | Per-frame 27-joint poses for both hands in SE(3); confidence flags on tracking loss |
| **Head & Camera Pose** | SE(3) trajectories for the headset and rendering camera in ARKit world space |
| **Task Coverage** | Office-scale manipulation: pick-and-place, tool use, object reorganization, surface interaction |
| **ECoT Annotations** | Three-level hierarchy — global task, subtasks (manipulation/stationary/locomotion), action segments — with outcome labels at each level |
| **Failure & Recovery Traces** | Grip slips, mis-grasps, object drops, misaligned approaches; paired with operator-perspective reflection text describing the recovery strategy |
| **Bimanual Sequences** | Trajectories with coordinated two-hand interactions (stabilizing one object while manipulating another) |
| **Contact Annotations** | Per-action contact-state flags indicating when hands are in load-bearing contact with objects |
### Positioning Against Related Datasets
| Dataset | Hand Kinematics | Hierarchical Reasoning | Failure Recovery | Wide-FOV Egocentric |
|:---|:---|:---|:---|:---|
| **Mundo (this release)** | **✓** 27-joint AVP-native (incl. forearm) | **✓** task → subtask → action-segment; reasoning on failures | **✓** tactical diagnostics | **✓** 155° fisheye |
| EgoDex | ✓ 25-joint AVP-native | — single task description per recording | — | — ~100° AVP-native |
| Ego4D | — post-hoc only | partial — goal/step/substep on subset | — | — headset-native |
| DROID | — teleop joints | — flat instructions | partial — binary outcome only | — third-person |
| Open X-Embodiment | varies | — no standard | — | varies |
*Mundo's subtask layer matches Ego4D's substep and EgoDex's task-description granularity, enabling annotation alignment when combining datasets. The action-segment layer extends hierarchy downward for VLA action-chunk training.*
---
## 2. Quickstart
This is a gated dataset — authenticate with your Hugging Face Access Token when cloning.
```bash
git clone https://huggingface.co/datasets/thomasmundo/Sample-Robotics-Dataset
cd Sample-Robotics-Dataset
pip install -r requirements.txt
python3 tools/visualize.py \
    --video data/movingitems/0.mp4 \
    --hdf5 data/movingitems/0.hdf5 \
    --json data/movingitems/0.json \
    --output data/movingitems/0_visualize.mp4
```
The output MP4 contains the rendered timeline with spatial overlays.
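If you prefer the Python client over git, a minimal sketch using `huggingface_hub` (assuming you have already authenticated, e.g. via `huggingface-cli login`):
```python
# Sketch: download the sample with the huggingface_hub client instead of git.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="thomasmundo/Sample-Robotics-Dataset",
    repo_type="dataset",
)
print(f"Dataset files available under: {local_dir}")
```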
---
## 3. File Layout
Each trajectory produces a triad of temporally aligned files sharing an ID:
```text
data/<task>/
├── 0.mp4 # 1920x1920 fisheye RGB, 30 FPS
├── 0.hdf5 # SE(3) pose arrays, confidences, camera calibration (float32)
└── 0.json    # ECoT annotations + operator metadata
```
Top-level HDF5 groups: `timestamps`, `camera/`, `transforms/`, `confidences/`.
Transforms are `(N, 4, 4)` homogeneous matrices in ARKit world space (right-handed, +Y up).
Full field reference and loader examples are provided in `dataloaders/pytorch_dataset.py`.
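As a quick orientation before reaching for the reference loader, here is a minimal inspection sketch with `h5py`; only the top-level group names above are taken from this README, and the layout below them is discovered at runtime rather than assumed:
```python
# Sketch: inspect one trajectory's HDF5 file and split a pose into rotation + translation.
import h5py
import numpy as np

with h5py.File("data/movingitems/0.hdf5", "r") as f:
    # List every dataset in the file with its shape and dtype.
    f.visititems(lambda name, obj: print(name, obj.shape, obj.dtype)
                 if isinstance(obj, h5py.Dataset) else None)

    # Example: read one pose array under `transforms/` (layout beyond the group name
    # is an assumption) and decompose the first (4, 4) homogeneous matrix.
    transforms = f["transforms"]
    name = next(k for k in transforms if isinstance(transforms[k], h5py.Dataset))
    poses = np.asarray(transforms[name])   # expected shape (N, 4, 4), float32
    T0 = poses[0]                          # ARKit world space, right-handed, +Y up
    rotation, translation = T0[:3, :3], T0[:3, 3]
    print(name, rotation.shape, translation)
```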
---
## 4. ECoT Schema (summary)
The JSON sidecar contains a three-level hierarchy — global task, subtasks, and action segments — with outcome labels at each level and embedded reflection traces that distinguish successful execution from failure-recovery reasoning. Schema details are documented in the sample files themselves.
A full example JSON is included at `data/movingitems/0.json`.
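Since the schema is documented in the sample files rather than here, the sketch below only inspects nesting structure and makes no assumptions about specific field names:
```python
# Sketch: print the nesting structure of the ECoT sidecar without assuming field names.
import json

def outline(node, indent=0, max_depth=4):
    """Recursively print keys and list lengths to reveal the task -> subtask -> segment nesting."""
    pad = "  " * indent
    if isinstance(node, dict):
        for key, value in node.items():
            print(f"{pad}{key}: {type(value).__name__}")
            if indent < max_depth:
                outline(value, indent + 1, max_depth)
    elif isinstance(node, list):
        print(f"{pad}[list of {len(node)}]")
        if node and indent < max_depth:
            outline(node[0], indent + 1, max_depth)

with open("data/movingitems/0.json") as f:
    outline(json.load(f))
```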
---
## Known Limitations
This is a sample release intended to demonstrate the capture methodology and annotation schema. Researchers should be aware of the following constraints:
- **Single operator.** All trajectories in this release were captured by one operator; inter-operator variation is not represented.
- **Single environment.** Capture is restricted to indoor office-scale settings; lighting, clutter, and surface variation reflect that domain.
- **English-only annotations.** ECoT text is authored in English; multilingual reasoning is not yet supported.
- **No third-person auxiliary cameras.** Only the headset-mounted egocentric stream is provided; external viewpoints are out of scope.
- **Hand-tracking confidence drops under occlusion.** When hands leave the AVP tracking volume or are heavily occluded, the schema reports tracking loss explicitly via NaN; downstream models should respect these flags.
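A minimal masking sketch for the last point, assuming poses are loaded as `(N, 4, 4)` arrays as in Section 3:
```python
# Sketch: drop frames where tracking was lost (reported as NaN in the pose arrays).
import numpy as np

def valid_frame_mask(poses: np.ndarray) -> np.ndarray:
    """poses: (N, 4, 4) homogeneous transforms; returns a boolean mask of fully finite frames."""
    return np.isfinite(poses).all(axis=(1, 2))

# Usage: keep only frames with valid tracking before training on a trajectory.
# mask = valid_frame_mask(poses)
# clean_poses = poses[mask]
```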
## License
This dataset is released under **CC-BY-NC-4.0**. You are free to use, adapt, and redistribute the data for non-commercial research purposes with attribution. Commercial use — including training models intended for commercial deployment — requires a separate license. Please reach out if your intended use falls in this category.
## Acknowledgments
This release builds on conventions established by prior work in egocentric and embodied AI. Joint nomenclature follows EgoDex (Apple, 2025). The ECoT annotation framework is informed by Zawalski et al. (2024), *Robotic Control via Embodied Chain-of-Thought Reasoning*. Hierarchical task structure draws inspiration from Ego4D Goal-Step (Song et al., NeurIPS 2023).
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{mundo2026kinematic,
  title  = {Mundo AI: Egocentric Kinematic Dexterity with Dense ECoT Language Annotations},
  author = {{Mundo AI}},
  year   = {2026},
  note   = {Sample release v0.1},
  url    = {https://huggingface.co/datasets/thomasmundo/Sample-Robotics-Dataset}
}
```
## Access & Contact
This is a sample release; further detail on the capture methodology and intended use of the data is provided in DOCS.md once access is granted.
Researchers interested in our capture methodology, annotation pipeline, or potential collaborations are welcome to reach out: thomas@mundoai.world.