---
task_categories:
- robotics
tags:
- lerobot
- cross-embodiment
---
# MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer
This repository contains a minimal real-world dataset for reproducing the interleaved task setting described in the paper *MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer*.
## Dataset Description
MOTIF is a framework for few-shot cross-embodiment robotic transfer. It learns reusable action motifs—embodiment-agnostic spatiotemporal patterns—that enable efficient policy generalization across different robot embodiments.
This example dataset includes:
- Embodiments: ARX5 and Piper.
- Tasks: Two distinct tasks across embodiments.
- Format: The dataset adheres to the LeRobot data format and includes a `modality.json` file with detailed modality and annotation definitions (compatible with GR00T N1).
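As a minimal sketch of what a modality definition file can look like, the snippet below writes and reads back a small JSON spec mapping modality keys to array slices. The field names (`eef_pose`, `eef_delta`, `front_cam`) are hypothetical illustrations, not the actual schema shipped in this dataset.

```python
import json
import os
import tempfile

# Hypothetical modality definition; the real modality.json in this
# dataset may use different keys and a different structure.
modality = {
    "state": {"eef_pose": {"start": 0, "end": 7}},      # xyz + quaternion
    "action": {"eef_delta": {"start": 0, "end": 6}},    # delta pose command
    "video": {"front_cam": {"original_key": "observation.images.front"}},
}

# Round-trip the spec through disk, as a loader would.
path = os.path.join(tempfile.mkdtemp(), "modality.json")
with open(path, "w") as f:
    json.dump(modality, f, indent=2)

with open(path) as f:
    spec = json.load(f)

# Recover the width of the state slice from the spec.
state = spec["state"]["eef_pose"]
state_dims = state["end"] - state["start"]
print(state_dims)  # 7
```

A training pipeline would use such slice definitions to pick the correct columns out of each embodiment's flat state and action arrays.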
## Usage

### Download the Dataset
You can download the dataset locally using `huggingface-cli`:
```bash
huggingface-cli download \
  --repo-type dataset Crossingz/ARX5_Piper_Few_shot_Example \
  --local-dir ./demo_data
```
### Kinematic Trajectory Canonicalization
To enable embodiment-agnostic motif learning, raw end-effector trajectories must be canonicalized into a shared reference frame. You can use the processing script provided in the official repository:
```bash
python data/process/trajectory_canonicalization.py \
  --dataset_path ./demo_data \
  --save_path ./demo_data_processed
```
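The official script implements the full canonicalization procedure. As a minimal illustration of the underlying idea only, the sketch below expresses each end-effector position relative to the trajectory's first frame, so that trajectories from different embodiments share a common origin (the real procedure presumably also handles orientation and embodiment-specific state):

```python
import numpy as np

def canonicalize_positions(traj: np.ndarray) -> np.ndarray:
    """Express a (T, 3) end-effector position trajectory relative to
    its first frame. Translation only; a full implementation would
    also rotate into a shared orientation frame."""
    return traj - traj[0]

# Toy trajectory: three end-effector positions in the robot base frame.
traj = np.array([[0.30, 0.10, 0.20],
                 [0.40, 0.10, 0.25],
                 [0.50, 0.20, 0.30]])
canon = canonicalize_positions(traj)
print(canon[0])  # [0. 0. 0.]  -- every trajectory now starts at the origin
```

After this step, two robots performing the same motion from different base poses produce numerically comparable trajectories, which is what makes embodiment-agnostic motif learning possible.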
## Citation
If you find this dataset or the MOTIF framework useful, please consider citing:
```bibtex
@article{zhi2025motif,
  title={MOTIF: Learning Action Motifs for Few-shot Cross-Embodiment Transfer},
  author={Zhi, Heng and Tan, Wentao and Zhu, Lei and Li, Fengling and Li, Jingjing and Yang, Guoli and Shen, Heng Tao},
  journal={arXiv preprint arXiv:2602.13764},
  year={2025}
}
```