---
license: cc-by-nc-sa-4.0
task_categories:
- robotics
tags:
- hsr
- airoa
- rebake
---
# HSR Household Service Robot Teleoperation Dataset
## Dataset Overview
[![Static Badge](https://img.shields.io/badge/Baked%20with-rebake%F0%9F%8D%9E-blue?style=plastic&logoSize=200)](https://github.com/airoa-org/rebake)
![Episodes](https://img.shields.io/badge/Episodes-23,762-blue)
![Duration](https://img.shields.io/badge/Duration-87_hours-green)
This dataset contains **23,762 episodes** of household service robot teleoperation data collected using Toyota's Human Support Robot (HSR) platform. The dataset focuses on primitive action (PA) household manipulation tasks performed through human teleoperation, providing high-quality demonstrations for robot learning research.
### Key Statistics
- **Total Episodes**: 23,762
- **Total Frames**: 9,422,911 (87 hours)
- **Task Success Rate**: 97.4% (23,162 successful episodes)
- **Average Episode Length**: 396.55 frames (13.2 seconds)
- **Dataset Version**: 1.1
- **Last Updated**: November 2025
- **Total Size**: ~85 GB
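As a quick sanity check, the headline statistics above are mutually consistent at the documented 30 fps; a minimal sketch (the numbers are copied from the list above):

```python
FPS = 30  # documented framerate (frames per second)

def frames_to_seconds(frames):
    """Convert a frame count to seconds at the dataset framerate."""
    return frames / FPS

# 9,422,911 total frames -> roughly 87 hours
total_hours = frames_to_seconds(9_422_911) / 3600
# 396.55 frames average episode length -> roughly 13.2 seconds
avg_episode_seconds = frames_to_seconds(396.55)
print(round(total_hours, 1), round(avg_episode_seconds, 1))  # → 87.2 13.2
```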
## Task Distribution
The dataset covers 7 primary household task categories:
| Task Category | Episodes | Percentage | Description |
|---------------|----------|------------|-------------|
| **Cloth Manipulation** | 7,573 | 31.9 | Opening towel stand, hanging/folding towels |
| **Coffee Making** | 5,581 | 23.5 | Complete coffee preparation workflow |
| **Dishwashing** | 4,470 | 18.8 | Loading/unloading dishwasher operations |
| **Toast Baking** | 3,166 | 13.3 | Bread preparation and toaster operations |
| **Desk Lamp Control** | 1,270 | 5.3 | Button press and chain pull light control |
| **Slipper Organization** | 1,006 | 4.2 | Arranging slippers in rack |
| **Other Tasks** | 696 | 2.9 | Miscellaneous household tasks |
### Complete Short Horizon Task List
The dataset contains 7 distinct short horizon tasks that combine multiple primitive actions:
| Rank | Short Horizon Task | Episodes | Percentage | Task Category |
|------|--------------------|----------|------------|---------------|
| 1 | **Open the towel stand and hang the towel.** | 7,573 | 31.9 | Cloth Manipulation |
| 2 | **Make coffee** | 5,581 | 23.5 | Coffee Making |
| 3 | **Washing dishes in the dishwasher** | 4,470 | 18.8 | Dishwashing |
| 4 | **Bake a toast** | 3,166 | 13.3 | Toast Baking |
| 5 | **Press the button to turn the desk lamp on and off** | 1,270 | 5.3 | Desk Lamp Control |
| 6 | **Stand the slippers in the slipper rack** | 1,006 | 4.2 | Slipper Organization |
| 7 | **Pull the chain to turn the desk lamp on or off** | 696 | 2.9 | Desk Lamp Control |
### Top Individual Primitive Tasks
1. **Open the towel stand.** (1,297 episodes)
2. **Put the towel into the basket.** (1,295 episodes)
3. **Grab the towel hanging on the towel stand.** (1,292 episodes)
4. **Hang the towel on the towel stand.** (1,276 episodes)
5. **Grab the towel in the basket.** (1,218 episodes)
### Basic Skill Distribution
Analysis of primitive actions reveals the fundamental manipulation skills required:
| Rank | Basic Skill | Occurrences | Percentage | Description |
|------|--------------|-------------|------------|-------------|
| 1 | **Grab** | 3,252 | 13.7% | Grasping and holding objects |
| 2 | **Place** | 3,197 | 13.5% | Placing objects in specific locations |
| 3 | **Open** | 3,182 | 13.4% | Opening containers, doors, stands |
| 4 | **Close** | 1,897 | 8.0% | Closing containers, doors, lids |
| 5 | **Put** | 1,852 | 7.8% | Putting objects into containers |
| 6 | **Press** | 1,542 | 6.5% | Pressing buttons and controls |
| 7 | **Pick** | 1,525 | 6.4% | Picking up objects from surfaces |
| 8 | **Pull** | 1,464 | 6.2% | Pulling chains, handles, drawers |
| 9 | **Hang** | 1,276 | 5.4% | Hanging objects on stands/hooks |
| 10 | **Fold** | 1,195 | 5.0% | Folding towel stands and objects |
| 11 | **Push** | 1,010 | 4.3% | Pushing buttons, trays, objects |
| 12 | **Take** | 747 | 3.1% | Taking objects from locations |
| 13 | **Approach** | 426 | 1.8% | Moving toward target locations |
| 14 | **Move** | 421 | 1.8% | Moving away from target locations |
| 15 | **Insert** | 343 | 1.4% | Inserting objects into slots |
| 16 | **Remove** | 226 | 1.0% | Removing objects from containers |
| 17 | **Run** | 207 | 0.9% | Running or starting target devices |
**Total Primitive Actions**: 23,762 across all episodes
The distribution shows a balanced representation of fundamental manipulation skills, with emphasis on:
- **Container manipulation** (Open/Close): 21.4% of all actions
- **Object placement** (Place/Put): 21.2% of all actions
- **Object acquisition** (Grab/Pick/Take): 23.2% of all actions
- **Force application** (Pull/Push/Press): 16.9% of all actions
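The basic-skill counts above can be derived by taking the leading verb of each primitive-action description; a minimal sketch (the sample strings are examples quoted in this README, and reading them from the `primitive_action` field of `episodes.jsonl` is an assumption about the layout):

```python
from collections import Counter

# Sketch: derive the basic-skill distribution from the leading verb of
# each primitive-action string. The sample actions below are examples
# from this README; real ones come from episodes.jsonl.
actions = [
    "Grab the towel in the basket.",
    "Open the towel stand.",
    "Hang the towel on the towel stand.",
    "Grab the towel hanging on the towel stand.",
]
skill_counts = Counter(action.split()[0] for action in actions)
print(skill_counts.most_common(1))  # → [('Grab', 2)]
```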
## Data Structure
### Video Data
- **Camera Setup**: Dual RGB cameras (hand-mounted + head-mounted)
- **Resolution**: 640×480 pixels
- **Framerate**: 30 frames per second
- **Format**: MP4 files
- **Camera Calibration**: Included for both cameras with distortion parameters
### Metadata (`episodes.jsonl`)
Each episode contains comprehensive metadata:
```json
{
  "episode_index": 0,
  "tasks": ["Pull the chain to turn off the light."],
  "length": 729,
  "uuid": "a699601f-41e5-4678-865d-d9de37a010ad",
  "task_type": "PA",
  "task_success": true,
  "short_horizon_task": "Pull the chain to turn the desk lamp on or off",
  "primitive_action": ["Action sequence"],
  "label": "Operator001",
  "hsr_id": "robot003",
  "location_name": "location001",
  "calib": {...},
  "version": "1.0",
  "git_hash": "v4.0.0"
}
```
#### Key Metadata Fields
- **episode_index**: Unique episode identifier (0-25468)
- **tasks**: List of specific tasks performed in episode
- **length**: Number of timesteps in episode
- **task_success**: Boolean indicating task completion success
- **short_horizon_task**: High-level task description
- **primitive_action**: Detailed action sequence breakdown
- **calib**: Camera calibration parameters for head and hand cameras
- **uuid**: Unique identifier of the high-level episode
- **label**: Anonymized operator identifier
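A minimal sketch of consuming `episodes.jsonl` and filtering to successful episodes of one short-horizon task (the field names follow the schema above; the sample records below are made up for illustration):

```python
import json

def successful_episodes(jsonl_lines, short_horizon_task=None):
    """Yield successful episode records, optionally for one short-horizon task."""
    for line in jsonl_lines:
        episode = json.loads(line)
        if not episode.get("task_success"):
            continue
        if short_horizon_task and episode.get("short_horizon_task") != short_horizon_task:
            continue
        yield episode

# Illustrative records following the documented schema (values are made up);
# in practice, iterate over open("episodes.jsonl") instead.
sample = [
    '{"episode_index": 0, "task_success": true, "short_horizon_task": "Make coffee", "length": 400}',
    '{"episode_index": 1, "task_success": false, "short_horizon_task": "Make coffee", "length": 120}',
]
kept = list(successful_episodes(sample, "Make coffee"))
print([ep["episode_index"] for ep in kept])  # → [0]
```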
## Hardware Configuration
### Robots
- **Platform**: Toyota Human Support Robot (HSR)
- **Count**: 8 robots (anonymized as robot001-robot008)
- **Distribution**:
- robot002: 8,685 episodes (34.1%)
- robot001: 4,286 episodes (16.8%)
- robot005: 4,132 episodes (16.2%)
- robot004: 3,068 episodes (12.0%)
- Others: <10% each
### Human Operators
- **Count**: 19 teleoperators (anonymized as Operator001-Operator019)
- **Interface**: HSR leader teleoperation system
- **Primary Contributors**:
- Operator015: 12,408 episodes (48.7%)
- Operator009: 3,071 episodes (12.1%)
- Operator003: 2,175 episodes (8.5%)
### Environment
- **Location**: Single laboratory environment (anonymized as location001)
- **Setup**: Controlled household environment with kitchen appliances and furniture
## Data Anonymization
All personally identifiable information has been systematically anonymized using the mapping system defined in `anonymization_mappings.json`:
### Anonymization Mappings
- **Human Operators**: 19 operators → Operator001-Operator019
- **Robot IDs**: 8 HSR units → robot001-robot008
- **Locations**: 1 lab environment → location001
- **Git Hashes**: Development commits → semantic versions (v1.0.0-v12.0.0)
## Technical Specifications
### Camera Calibration
Both cameras include complete calibration parameters:
- **Intrinsic Matrix (K)**: 3×3 camera matrix
- **Distortion Coefficients (D)**: Radial and tangential distortion
- **Projection Matrix (P)**: 3×4 projection matrix
- **Rectification Matrix (R)**: 3×3 rectification matrix
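As an illustration of how the intrinsic matrix K is used, a pinhole projection sketch (the K values below are hypothetical placeholders for a 640×480 camera; the real per-camera matrices live in each episode's `calib` field, and lens distortion is ignored here):

```python
# Hypothetical intrinsics for a 640x480 camera (placeholder values; the
# real matrices are stored per episode in the "calib" metadata field).
K = [
    [525.0,   0.0, 320.0],  # fx,  0, cx
    [  0.0, 525.0, 240.0],  #  0, fy, cy
    [  0.0,   0.0,   1.0],
]

def project(x, y, z, K):
    """Project a camera-frame 3D point to pixel coordinates (no distortion)."""
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return u, v

# A point on the optical axis lands on the principal point (cx, cy).
print(project(0.0, 0.0, 1.0, K))  # → (320.0, 240.0)
```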
## Usage Guidelines
### Research Applications
- **Robot Learning**: Imitation learning from teleoperation demonstrations
- **Computer Vision**: Multi-view manipulation task understanding
- **Task Planning**: Hierarchical task decomposition analysis
- **Human-Robot Interaction**: Teleoperation interface studies
## Quality Metrics
- **Task Success Rate**: 97.4% overall success rate
- **Episode Length Distribution**: 120-856 frames (avg: 399.73)
- **Data Completeness**: All episodes have corresponding hand and head camera videos
- **Annotation Quality**: Rich task decomposition with primitive action sequences
## Limitations
- **Environment Scope**: Single laboratory setting may limit generalization
- **Task Diversity**: Focus on specific household tasks (7 main categories)
- **Operator Variance**: Uneven distribution across human operators
- **Temporal Scope**: Data collected during specific development phases
## How to Download
Because the dataset contains many files, downloads through the Hugging Face API can easily hit the [rate limit](https://huggingface.co/docs/hub/rate-limits).
```shell
$ hf download airoa-org/airoa-moma --repo-type dataset
...
We had to rate limit you, you hit the quota of 1000 api requests per 5 minutes period. Upgrade to a PRO user or Team/Enterprise organization account (https://hf.co/pricing) to get higher limits. See https://huggingface.co/docs/hub/rate-limits
```
We therefore recommend downloading with Git over SSH.
First, follow [the official instructions](https://huggingface.co/docs/hub/security-git-ssh) and
register your public SSH key with Hugging Face.
```shell
cd ~/.ssh
ssh-keygen -t ed25519 -C "your_email@example.com" -f <key file>
```
Second, install [Git LFS](https://git-lfs.com/), which manages large files such as videos.
After installing it, run `git lfs install` once per user.
Then set up an SSH agent; otherwise you will be prompted for your SSH key passphrase for every file.
A good reference is [the instructions in the GitHub Docs](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).
```shell
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/<key file>
```
Additionally, add an entry to `~/.ssh/config`:
```text
Host hf.co
  User git
  IdentityFile ~/.ssh/<key file>
```
Finally, you can clone:
```shell
git clone git@hf.co:datasets/airoa-org/airoa-moma.git
cd airoa-moma
git lfs pull
```
Depending on your Git configuration, `git lfs pull` may run automatically during `git clone`.
If you haven't pulled the LFS-managed files yet, they are just pointer text files.
```shell
$ file videos/chunk-000/observation.image.hand/episode_000000.mp4
videos/chunk-000/observation.image.hand/episode_000000.mp4: ASCII text
```
```shell
$ cat videos/chunk-000/observation.image.hand/episode_000000.mp4
version https://git-lfs.github.com/spec/v1
oid sha256:48277551133b1587c4c02cec6ee41f9a925565cf4d8aa9d0931f1d997d39c0a6
size 8488109
```
Once you have pulled the LFS files, they become regular files.
```shell
$ file videos/chunk-000/observation.image.hand/episode_000000.mp4
videos/chunk-000/observation.image.hand/episode_000000.mp4: ISO Media, MP4 Base Media v1 [ISO 14496-12:2003]
```
## Change Log
- v1.1: Filter out suspect episodes
  - Episodes that are too short (< 1.0 s) or too long (> 60.0 s)
  - Large jumps (> 1.0) in any dimension of `observation.state`
  - Delays longer than 0.3 s in any of `observation.state`,
    `observation.image.head`, or `observation.image.hand`
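The v1.1 filter criteria above can be sketched as a single predicate (the argument names and the frames-to-seconds conversion at the documented 30 fps are assumptions; the actual filtering code is not part of this dataset card):

```python
FPS = 30  # documented framerate

def is_suspect(length_frames, max_state_jump, max_delay_s):
    """Return True if an episode matches any v1.1 exclusion criterion.

    Arguments (assumed per-episode summaries):
      length_frames  - number of timesteps ("length" in episodes.jsonl)
      max_state_jump - largest per-dimension step in observation.state
      max_delay_s    - worst observed delay across state and both cameras
    """
    duration_s = length_frames / FPS
    too_short_or_long = duration_s < 1.0 or duration_s > 60.0
    return too_short_or_long or max_state_jump > 1.0 or max_delay_s > 0.3

print(is_suspect(15, 0.0, 0.0))   # 0.5 s episode → True
print(is_suspect(400, 0.2, 0.1))  # ordinary episode → False
```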
## Citation
If you use this dataset in your research, please cite:
```
@article{airoa-moma-2025,
  author  = {Ryosuke Takanami and Petr Khrapchenkov and Shu Morikuni and Jumpei Arima and Yuta Takaba and Shunsuke Maeda and Takuya Okubo and Genki Sano and Satoshi Sekioka and Aoi Kadoya and Motonari Kambara and Naoya Nishiura and Haruto Suzuki and Takanori Yoshimoto and Koya Sakamoto and Shinnosuke Ono and Yo Ko and Daichi Yashima and Aoi Horo and Tomohiro Motoda and Kensuke Chiyoma and Hiroshi Ito and Koki Fukuda and Akihito Goto and Kazumi Morinaga and Yuya Ikeda and Riko Kawada and Masaki Yoshikawa and Norio Kosuge and Yuki Noguchi and Kei Ota and Tatsuya Matsushima and Yusuke Iwasawa and Yutaka Matsuo and Tetsuya Ogata},
  title   = {AIRoA MoMa Dataset: A Large-Scale Hierarchical Dataset for Mobile Manipulation},
  journal = {arXiv preprint},
  year    = {2025}
}
```