Add dataset card and paper link for MOTIF
#2
by nielsr (HF Staff), opened
README.md ADDED
@@ -0,0 +1,53 @@
---
task_categories:
- robotics
tags:
- lerobot
- cross-embodiment
---

# MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer

This repository contains a minimal real-world dataset provided to reproduce the interleaved task setting described in the paper [MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer](https://huggingface.co/papers/2602.13764).

[**GitHub**](https://github.com/buduz/MOTIF) | [**Paper**](https://huggingface.co/papers/2602.13764)

## Dataset Description

MOTIF is a framework for few-shot cross-embodiment robotic transfer. It learns reusable **action motifs** (embodiment-agnostic spatiotemporal patterns) that enable efficient policy generalization across different robot embodiments.

This example dataset includes:
- **Embodiments**: ARX5 and Piper.
- **Tasks**: Two distinct tasks across the two embodiments.
- **Format**: The dataset adheres to the [LeRobot](https://github.com/huggingface/lerobot) data format and includes a `modality.json` for detailed modality and annotation definitions (compatible with GR00T N1); a sketch for inspecting this file follows the list.
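In the GR00T N1 convention, `modality.json` maps each named sub-modality to an index range in the flat state and action vectors. The snippet below is a minimal sketch for inspecting the shipped file; the commented layout is hypothetical, since the actual keys depend on the modalities recorded in this dataset.

```python
import json

# Assumes the dataset has been downloaded to ./demo_data
# (see the Usage section below).
with open("demo_data/modality.json") as f:
    modality = json.load(f)

# A GR00T-N1-style file typically groups entries like (hypothetical):
#   {"state":  {"arm_joints": {"start": 0, "end": 6},
#               "gripper":    {"start": 6, "end": 7}}, ...}
for group, entries in modality.items():
    print(group, "->", list(entries))
```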
## Usage

### Download the Dataset

You can download the dataset locally using the `huggingface-cli`:

```bash
huggingface-cli download \
    --repo-type dataset Crossingz/ARX5_Piper_Few_shot_Example \
    --local-dir ./demo_data
```
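The same snapshot can also be fetched programmatically with the `huggingface_hub` Python API (a minimal sketch; install with `pip install huggingface_hub` if needed):

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot to ./demo_data; equivalent to the
# CLI command above.
snapshot_download(
    repo_id="Crossingz/ARX5_Piper_Few_shot_Example",
    repo_type="dataset",
    local_dir="./demo_data",
)
```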
### Kinematic Trajectory Canonicalization

To enable embodiment-agnostic motif learning, raw end-effector trajectories must be canonicalized into a shared reference frame. You can use the processing script provided in the [official repository](https://github.com/buduz/MOTIF):

```bash
python data/process/trajectory_canonicalization.py \
    --dataset_path ./demo_data \
    --save_path ./demo_data_processed
```
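For intuition about what canonicalization does, the sketch below re-expresses every end-effector pose relative to the trajectory's initial frame, so the robot's absolute base placement drops out. This is an illustrative assumption about the transform, not the repository's implementation; the actual script may use a different pose parameterization or reference frame.

```python
import numpy as np

def canonicalize(poses: np.ndarray) -> np.ndarray:
    """Re-express a trajectory of 4x4 homogeneous end-effector poses
    relative to its first frame: T_canon[t] = inv(T[0]) @ T[t].
    Illustrative sketch only, not the repository's script."""
    T0_inv = np.linalg.inv(poses[0])
    return np.einsum("ij,tjk->tik", T0_inv, poses)

# Toy example: a 2-step trajectory that translates 10 cm along x.
traj = np.stack([np.eye(4), np.eye(4)])
traj[1, 0, 3] = 0.1
canon = canonicalize(traj)
assert np.allclose(canon[0], np.eye(4))  # first frame becomes identity
```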
## Citation

If you find this dataset or the MOTIF framework useful, please consider citing:

```bibtex
@article{zhi2025motif,
  title={MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer},
  author={Zhi, Heng and Tan, Wentao and Zhu, Lei and Li, Fengling and Li, Jingjing and Yang, Guoli and Shen, Heng Tao},
  journal={arXiv preprint arXiv:2602.13764},
  year={2025}
}
```