One Step Is Enough: Dispersive MeanFlow Policy Optimization
This repository contains pre-processed demonstration datasets for pre-training DMPO (Dispersive MeanFlow Policy Optimization) policies. Each dataset includes trajectory data and normalization statistics.
```
gym/
├── hopper-medium-v2/
├── walker2d-medium-v2/
├── ant-medium-expert-v2/
├── Humanoid-medium-v3/
├── kitchen-complete-v0/
├── kitchen-mixed-v0/
└── kitchen-partial-v0/
robomimic/
├── lift-img/
├── can-img/
├── square-img/
└── transport-img/
```
Each task folder contains:

- `train.npz` - Training trajectories
- `normalization.npz` - Observation and action normalization statistics

Use the `hf://` prefix in config files to auto-download:

```yaml
train_dataset_path: hf://gym/hopper-medium-v2/train.npz
normalization_path: hf://gym/hopper-medium-v2/normalization.npz
```
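As a minimal sketch of what the `hf://` prefix implies, the snippet below splits such a path into a repository ID and a file path inside the repo; the `repo_id` default and the helper name are hypothetical, not part of the DMPO codebase, and the actual config loader may resolve paths differently.

```python
def resolve_hf_path(path: str, repo_id: str = "user/dmpo-demos"):
    """Split an hf:// path into (repo_id, relative file path).

    Returns None for plain local paths, which can be used as-is.
    The default repo_id is a placeholder for illustration only.
    """
    prefix = "hf://"
    if not path.startswith(prefix):
        return None
    return repo_id, path[len(prefix):]

# With huggingface_hub installed, the resolved file could then be fetched via:
#   from huggingface_hub import hf_hub_download
#   local = hf_hub_download(repo_id=repo_id, filename=rel, repo_type="dataset")
```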
If you use this dataset, please cite:

```bibtex
@misc{zou2026stepenoughdispersivemeanflow,
  title={One Step Is Enough: Dispersive MeanFlow Policy Optimization},
  author={Guowei Zou and Haitao Wang and Hejun Wu and Yukun Qian and Yuhang Wang and Weibing Li},
  year={2026},
  eprint={2601.20701},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2601.20701},
}
```
MIT License