---
license: mit
tags:
- robotics
- reinforcement-learning
- imitation-learning
- robomimic
- mujoco
- d4rl
language:
- en
task_categories:
- robotics
size_categories:
- 1M<n<10M
---
# DMPO Demonstration Datasets

Pre-processed demonstration datasets for **DMPO: Dispersive MeanFlow Policy Optimization**.
## Overview
This repository contains pre-processed demonstration data for pre-training DMPO policies. Each task folder includes trajectory data together with the normalization statistics used to standardize observations and actions.
## Dataset Structure
```
gym/
├── hopper-medium-v2/
├── walker2d-medium-v2/
├── ant-medium-expert-v2/
├── Humanoid-medium-v3/
├── kitchen-complete-v0/
├── kitchen-mixed-v0/
└── kitchen-partial-v0/
robomimic/
├── lift-img/
├── can-img/
├── square-img/
└── transport-img/
```
Each task folder contains:
- `train.npz` - Training trajectories
- `normalization.npz` - Observation and action normalization statistics
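As a rough sketch of the load pattern, both archives can be read with NumPy. Note that the array keys used below (`observations`, `actions`, `obs_mean`, `obs_std`) are assumptions, not documented here; inspect `.files` on the real archives to find the actual keys. Stand-in files are created first so the snippet is self-contained:

```python
import numpy as np

# Create stand-in archives so this sketch runs end to end.
# The key names are ASSUMPTIONS; check `np.load(...).files` on the
# real train.npz / normalization.npz for the actual layout.
np.savez("train.npz",
         observations=np.random.randn(100, 11).astype(np.float32),
         actions=np.random.randn(100, 3).astype(np.float32))
np.savez("normalization.npz",
         obs_mean=np.zeros(11, dtype=np.float32),
         obs_std=np.ones(11, dtype=np.float32))

data = np.load("train.npz")
stats = np.load("normalization.npz")
print(data.files)  # list the arrays stored in the archive

# Standardize observations with the provided statistics.
obs = (data["observations"] - stats["obs_mean"]) / stats["obs_std"]
print(obs.shape)  # (100, 11)
```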
## Usage

Use the `hf://` prefix in config files to auto-download:
```yaml
train_dataset_path: hf://gym/hopper-medium-v2/train.npz
normalization_path: hf://gym/hopper-medium-v2/normalization.npz
```
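How the `hf://` prefix is resolved is not documented here, so the helper below is a hypothetical sketch: it strips the prefix to recover the in-repo file path, which could then be passed to `huggingface_hub.hf_hub_download` (together with this dataset's repo id, which is not stated on this card) to fetch the file:

```python
def strip_hf_prefix(path: str) -> str:
    """Return the in-repo file path for an hf:// config value.

    Hypothetical helper: the real DMPO loader may resolve hf:// paths
    differently (e.g. via huggingface_hub.hf_hub_download with this
    dataset's repo id and repo_type="dataset").
    """
    prefix = "hf://"
    if not path.startswith(prefix):
        raise ValueError(f"not an hf:// path: {path}")
    return path[len(prefix):]

print(strip_hf_prefix("hf://gym/hopper-medium-v2/train.npz"))
# -> gym/hopper-medium-v2/train.npz
```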
## Data Sources

- **Gym tasks**: Derived from D4RL datasets
- **Robomimic tasks**: Derived from Robomimic proficient-human demonstrations
## Citation
```bibtex
@misc{zou2026stepenoughdispersivemeanflow,
  title={One Step Is Enough: Dispersive MeanFlow Policy Optimization},
  author={Guowei Zou and Haitao Wang and Hejun Wu and Yukun Qian and Yuhang Wang and Weibing Li},
  year={2026},
  eprint={2601.20701},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2601.20701},
}
```
## License

MIT License