---
# Optional: dataset task categories (not required, but recommended)
task_categories:
- robotics  # example: robotics; replace with your own category (e.g., nlp, cv)
# Optional: dataset language (not required)
language:
- en  # example: English; replace with zh, fr, etc.
# Required: core gated-access configuration - the terms users must accept
extra_gated_prompt: "You agree not to use this dataset for commercial purposes, but only for academic research."
# Required: custom questionnaire fields collected from users (core)
extra_gated_fields:
  # Custom field 1: company/organization (example)
  Company/Organization:
    type: text  # field type: text (text box), country (country selector), select (dropdown), etc.
    description: "Example entries: ETH Zurich, Boston Dynamics, independent researcher"  # field hint shown to users
  # Custom field 2: country (example)
  Country:
    type: country  # fixed type; Hugging Face auto-generates the country dropdown
    description: "Example entries: Germany, China, United States"
  # Custom field 3: intended use (example)
  Intended use:
    type: text
    description: "Example entries: imitation learning, policy generalization, bimanual manipulation research"
  # More custom fields can be appended, e.g.:
  # Research Team:
  #   type: text
  #   description: "Your research team name (e.g., Robotics Lab at XXX University)"
license: mit
datasets:
- your-huggingface-org/MOVE
---
# Dataset Card for MOVE Real-World Manipulation Dataset
## MOVE: Motion-Based Variability Enhancement for Spatial Generalization in Robotic Manipulation
Jointly Released by:
> ### 🎓 Tsinghua University
> ### 🤖 Beijing Academy of Artificial Intelligence (BAAI)
This Hugging Face Dataset Card describes the **Real-World Robotic Manipulation Dataset** collected using the **MOVE (Motion-Based Variability Enhancement)** paradigm, as presented in the paper "MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation."
The core value of the MOVE paradigm is the injection of **dynamic feature augmentation** into the environment (objects and camera) during expert demonstrations. This process captures a richer variety of spatial configurations within a single trajectory, significantly improving policy performance, especially in terms of **spatial generalization** to unseen locations and **data efficiency**.
## 💾 Dataset Structure
This dataset focuses on the **Real-World Pick-and-Place** task. All data was collected using the **Piper robotic arm** teleoperated via a **Pika device** under dynamic environment configurations.
### Data Fields
Each trajectory contains a sequence of timesteps, recording the essential observations and state information required for robotic policy learning.
| Field Key | Data Type | Description |
| :--- | :--- | :--- |
| `timestep_id` | `int` | Sequential timestep index within the trajectory. |
| **`camera/color/Camera`** | `PIL.Image` / `ndarray` | **RGB Image Observation**. Due to the MOVE paradigm, the image captures dynamically changing object, target, and camera viewpoints. |
| **`arm/jointStatePosition/joint_single`** | `array[float]` | **Robot Joint State/Action**. The joint positions of the Piper robotic arm, representing the robot's state or executed action at this timestep. |
| **`arm/jointStatePosition/master`** | `array[float]` | **Master Device State**. The joint or position states of the Pika teleoperation master device, recording the human operator's intent. |
> **Note:** This dataset strictly includes only the three key data streams listed above. It does not include explicit 3D coordinates, world-frame camera poses, or other calculated metadata.
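If you need trajectory-level structure, it has to be reconstructed from the flat stream of timesteps. Below is a minimal sketch, assuming (the card does not state this, so verify against the data) that `timestep_id` restarts at 0 at the start of each new trajectory:

```python
from datasets import load_dataset
import numpy as np

# Hedged sketch: regroup the flat timestep stream into trajectories,
# assuming `timestep_id` resets to 0 at every new episode.
ds = load_dataset("your-huggingface-org/MOVE", "real_world_35k", split="train")

trajectories, current = [], []
for step in ds:
    if step["timestep_id"] == 0 and current:  # a new episode begins
        trajectories.append(current)
        current = []
    current.append({
        "image": step["camera/color/Camera"],                   # RGB observation
        "joints": np.asarray(step["arm/jointStatePosition/joint_single"]),
        "master": np.asarray(step["arm/jointStatePosition/master"]),
    })
if current:
    trajectories.append(current)

print(f"Recovered {len(trajectories)} trajectories")
```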
### Data Splits
The real-world dataset is split by the total number of environment interaction steps (timesteps), enabling data-efficiency evaluation:
| Split Name | Task | Total Timesteps | Description |
| :--- | :--- | :--- | :--- |
| `real_world_35k` | Real-World Pick-and-Place (e.g., Orange to Tray) | **35,000** | A challenging, low-data scenario for testing spatial generalization capability. |
| `real_world_75k` | Real-World Pick-and-Place | **75,000** | Used for performance scaling and efficiency comparison against static baselines. |
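For quick inspection, either subset can be streamed rather than downloaded in full. A minimal sketch, assuming the standard `datasets` streaming API applies to this repository:

```python
from datasets import load_dataset

# Hedged sketch: stream the larger subset to peek at a few timesteps
# without downloading all 75,000 of them.
ds = load_dataset("your-huggingface-org/MOVE", "real_world_75k",
                  split="train", streaming=True)
for step in ds.take(3):
    print(step["timestep_id"])
```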
## 🚀 Key Advantages
* **High Spatial Generalization:** Policies trained on this dynamically augmented data demonstrate superior success rates when tested on spatially randomized, unseen configurations.
* **Superior Data Efficiency:** MOVE datasets enable policies to achieve competitive performance with a significantly lower total number of timesteps compared to datasets collected using the traditional static approach.
## 🎯 Usage
This dataset is well suited for training robust real-world robotic manipulation policies, particularly visuomotor policies that must generalize across spatial configurations.
### Loading the Data
```python
from datasets import load_dataset
# Load the 75k timestep real-world subset
dataset = load_dataset("your-huggingface-org/MOVE", "real_world_75k")
# Access image and state data
first_example = dataset["train"][0]
image = first_example["camera/color/Camera"]
robot_state = first_example["arm/jointStatePosition/joint_single"]
```
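Because the dataset is gated (see the front-matter configuration above), the `load_dataset` call requires that you have accepted the terms on the dataset page and are authenticated. A minimal sketch of the login step and a basic conversion of the loaded fields, assuming the standard `huggingface_hub` flow and that the image field is a PIL image as listed above:

```python
from huggingface_hub import login
import numpy as np

# Authenticate before load_dataset if you hit a gated-repo error;
# alternatively, set the HF_TOKEN environment variable.
login()  # prompts for a token with read access

# Convert the observation into arrays for a vision-based policy pipeline
image_array = np.asarray(image)   # H x W x 3 uint8, assuming an RGB PIL image
joints = np.asarray(robot_state)  # Piper joint positions at this timestep
print(image_array.shape, joints.shape)
```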
## Citation
If you find this work helpful, please consider citing our paper:
```
@misc{wang2025movesimplemotionbaseddata,
  title={MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation},
  author={Huanqian Wang and Chi Bene Chen and Yang Yue and Danhua Tao and Tong Guo and Shaoxuan Xie and Denghang Huang and Shiji Song and Guocai Yao and Gao Huang},
  year={2025},
  eprint={2512.04813},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2512.04813},
}
```