---
license: cdla-permissive-2.0
tags:
- humanoid
- robotics
- motion-tracking
- teleoperation
- reinforcement-learning
- motion-retargeting
pretty_name: MOSAIC Dataset
---
|
|
|
|
|
# MOSAIC Dataset |
|
|
|
|
|
This repository releases the **MOSAIC** multi-source motion dataset introduced in the following paper:
|
|
|
|
|
**[MOSAIC: Bridging the Sim-to-Real Gap in Generalist Humanoid Motion Tracking and Teleoperation with Rapid Residual Adaptation](https://arxiv.org/abs/2602.08594)** |
|
|
|
|
|
The dataset is organized into: |
|
|
- **Human motions** stored in an **[AMASS-style format](https://amass.is.tue.mpg.de/)** |
|
|
- **Unitree G1 motions** retargeted from human motions and converted to **NPZ** for training/visualization |
|
|
|
|
|
It includes motions from: |
|
|
1. **Optical MoCap (Vicon)** |
|
|
2. **Inertial MoCap (IO-AI)** |
|
|
3. **Generated motions from GENMO** |
|
|
|
|
|
Additionally, we provide a small amount of **Stage-II adaptor training data**: |
|
|
- ~30 min **PICO VR** data |
|
|
- ~30 min **Noitom** inertial mocap data |
|
|
|
|
|
We also release the **GENMO prompts** used for motion generation; the same prompts served as guidance during **optical MoCap capture**.
|
|
|
|
|
--- |
|
|
|
|
|
## Data Formats & Processing Pipeline |
|
|
|
|
|
### 1) Human motions (AMASS-style) |
|
|
Human motion files follow an AMASS-style parameterization/layout for convenient reuse with common SMPL/AMASS tooling. |
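As a quick sanity check, an AMASS-style NPZ can be inspected with NumPy. The sketch below builds a tiny synthetic file in memory so it runs stand-alone; the key names and shapes used here (`poses`, `trans`, `betas`, `mocap_framerate`) are typical AMASS fields, not a guaranteed schema for every file in this dataset.

```python
import io

import numpy as np

# Build a tiny synthetic AMASS-style file in memory so the example is
# self-contained; swap in a real path from human/ to inspect actual data.
buf = io.BytesIO()
np.savez(
    buf,
    poses=np.zeros((120, 165)),      # per-frame pose parameters (illustrative shape)
    trans=np.zeros((120, 3)),        # per-frame root translation
    betas=np.zeros(16),              # body shape coefficients
    mocap_framerate=np.array(30.0),  # capture rate in Hz
)
buf.seek(0)

data = np.load(buf, allow_pickle=True)
for key in data.files:
    print(key, getattr(data[key], "shape", None))

num_frames = data["poses"].shape[0]
duration_s = num_frames / float(data["mocap_framerate"])
print(f"{num_frames} frames, {duration_s:.1f} s")
```

To inspect a real sequence, replace `buf` with a path to any `*.npz` under `human/` and check which keys it actually contains before indexing them.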
|
|
|
|
|
### 2) G1 motions (retargeted + converted to NPZ) |
|
|
We retarget human motions to **Unitree G1** using: |
|
|
- **[GMR](https://github.com/YanjieZe/GMR)** for retargeting (human → humanoid robot) |
|
|
- **[BeyondMimic](https://github.com/HybridRobotics/whole_body_tracking)** for converting to NPZ and visualization |
|
|
|
|
|
Concretely: |
|
|
`Human (AMASS-style) → GMR retargeting → BeyondMimic conversion → *.npz` |
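For a quick inventory of the converted motions, a small helper can walk a folder of NPZ files and report per-file frame counts. The `joint_pos` key below is an assumption about the converted format, with a fallback to the first stored array; consult the training repository for the authoritative schema.

```python
from pathlib import Path

import numpy as np


def summarize_motions(root: str) -> list[tuple[str, int]]:
    """Return (filename, frame count) for every NPZ under `root`.

    "joint_pos" is an assumed key for per-frame joint positions; if a file
    does not contain it, the first stored array is used instead.
    """
    rows = []
    for f in sorted(Path(root).rglob("*.npz")):
        with np.load(f, allow_pickle=True) as data:
            key = "joint_pos" if "joint_pos" in data.files else data.files[0]
            rows.append((f.name, len(data[key])))
    return rows
```

For example, `summarize_motions("MOSAIC_Dataset/G1/optical_mocap")` would list each retargeted sequence with its length in frames.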
|
|
|
|
|
--- |
|
|
|
|
|
## Folder Structure |
|
|
|
|
|
<details> |
|
|
<summary><b>Click to expand tree</b></summary> |
|
|
|
|
|
```text
MOSAIC_Dataset/
├── human/
│   ├── optical_mocap/
│   │   ├── human_DA*_stageii.npz
│   │   └── ...
│   ├── inertial_mocap/
│   │   ├── mocap_data/            # raw IO-AI recordings
│   │   │   ├── *.mcap             # recorded streams (incl. TF)
│   │   │   └── ...
│   │   ├── amass_data/            # converted AMASS-style files
│   │   │   ├── *.npz
│   │   │   └── ...
│   │   └── mocap_to_amass.py      # helper script for conversion
│   └── generated_genmo/
│       ├── walk/
│       │   ├── 000/
│       │   │   └── hmr4d_results.pt
│       │   └── ...
│       ├── dance/
│       ├── daily_activaties/
│       └── ...
│
├── G1/
│   ├── optical_mocap/
│   │   ├── g1_DA*_stageii.npz
│   │   └── ...
│   ├── inertial_mocap/
│   │   ├── g1_io_*.npz
│   │   └── ...
│   ├── generated_genmo/
│   │   ├── g1_genmo_*.npz
│   │   └── ...
│   └── adaptor_data/
│       ├── pico_VR/
│       │   ├── mocap_motion_*_smoothed.npz
│       │   └── ...
│       └── noitom_inertial_mocap/
│           ├── mocap_motion_*_smoothed_*.npz
│           └── ...
│
└── motion_prompt/
    ├── motion_prompt_*_.pdf
    └── ...
```
|
|
</details> |
|
|
|
|
|
--- |
|
|
|
|
|
## How to Download |
|
|
|
|
|
### Option A: Download the whole dataset (recommended) |
|
|
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="BAAI-Humanoid/MOSAIC_Dataset",
    repo_type="dataset",
)
print("Downloaded to:", local_dir)
```
|
|
|
|
|
### Option B: Download a subset (faster when you only need part of the dataset)
|
|
Example: only download prompts and one subset folder. |
|
|
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="BAAI-Humanoid/MOSAIC_Dataset",
    repo_type="dataset",
    allow_patterns=[
        "motion_prompt/*",
        "G1/adaptor_data/*",
        # add more patterns as needed, e.g.:
        # "human/optical_mocap/*",
        # "G1/optical_mocap/*",
    ],
)
print("Downloaded to:", local_dir)
```
|
|
### Option C: CLI download |
|
|
```bash
huggingface-cli download BAAI-Humanoid/MOSAIC_Dataset \
  --repo-type dataset \
  --local-dir MOSAIC_Dataset
```
|
|
|
|
|
--- |
|
|
|
|
|
## Visualization |
|
|
You can visualize the G1 NPZ motions using: |
|
|
|
|
|
- our [MOSAIC training repository](https://github.com/BAAI-Humanoid/MOSAIC), or |
|
|
|
|
|
- [BeyondMimic](https://github.com/HybridRobotics/whole_body_tracking) |
|
|
|
|
|
Tip: If you only want to inspect a few sequences quickly, download a small subset via `allow_patterns` and visualize those NPZ files first.
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
If you use this dataset, please cite our paper: |
|
|
```bibtex
@article{sun2026mosaic,
  title   = {MOSAIC: Bridging the Sim-to-Real Gap in Generalist Humanoid Motion Tracking and Teleoperation with Rapid Residual Adaptation},
  author  = {Zhenguo Sun and Bo-Sheng Huang and Yibo Peng and Xukun Li and Jingyu Ma and Yu Sun and Zhe Li and Haojun Jiang and Biao Gao and Zhenshan Bing and Xinlong Wang and Alois Knoll},
  journal = {arXiv preprint arXiv:2602.08594},
  year    = {2026}
}
```
|
|
|
|
|
--- |
|
|
|
|
|
## License |
|
|
This dataset is released under CDLA-Permissive-2.0. |
|
|
|
|
|
**Important**: External tools/formats referenced in this dataset card (AMASS schema, GENMO, GMR, BeyondMimic, IO-AI tooling) each have their own licenses/terms—please follow the original terms when using those resources. |
|
|
|
|
|
--- |
|
|
|
|
|
## Acknowledgements / References |
|
|
- [AMASS](https://amass.is.tue.mpg.de/) |
|
|
|
|
|
- [GMR retargeting](https://github.com/YanjieZe/GMR) |
|
|
|
|
|
- [BeyondMimic](https://github.com/HybridRobotics/whole_body_tracking) |
|
|
|
|
|
- [GENMO](https://github.com/NVlabs/GENMO) |
|
|
|
|
|
- [IO-AI mocap tools](https://github.com/ioai-tech/io_dev_tools_ros2/tree/master/io_mocap) |
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## Contact / Issues |
|
|
If you find problems with the dataset structure, conversion scripts, or visualization, please open an issue in the relevant code repository or reach out via the project's contact channels.
|
|
|