---
license: mit
tags:
  - robotics
  - reinforcement-learning
  - imitation-learning
  - robomimic
  - mujoco
  - d4rl
language:
  - en
task_categories:
  - robotics
size_categories:
  - 1M<n<10M
---

# DMPO Demonstration Datasets

Pre-processed demonstration datasets for **DMPO: Dispersive MeanFlow Policy Optimization**.

[![Paper](https://img.shields.io/badge/arXiv-2601.20701-B31B1B)](http://arxiv.org/abs/2601.20701)
[![Code](https://img.shields.io/badge/GitHub-dmpo--release-blue)](https://github.com/Guowei-Zou/dmpo-release)
[![Project Page](https://img.shields.io/badge/Project-Page-4285F4)](https://guowei-zou.github.io/dmpo-page/)

## Overview

This repository contains pre-processed demonstration data for pre-training DMPO policies. Each dataset includes trajectory data and normalization statistics.

## Dataset Structure

```
gym/
├── hopper-medium-v2/
├── walker2d-medium-v2/
├── ant-medium-expert-v2/
├── Humanoid-medium-v3/
├── kitchen-complete-v0/
├── kitchen-mixed-v0/
└── kitchen-partial-v0/

robomimic/
├── lift-img/
├── can-img/
├── square-img/
└── transport-img/
```

Each task folder contains:
- `train.npz` - Training trajectories
- `normalization.npz` - Observation and action normalization statistics
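
Once downloaded, both files can be inspected with NumPy. The sketch below is self-contained: it builds a tiny synthetic stand-in for `train.npz` in memory, since the array key names (`observations`, `actions`) are illustrative assumptions, not confirmed names from this repository. Check the real keys with `data.files`.

```python
import io
import numpy as np

# Synthetic stand-in for a downloaded train.npz; the real file comes from
# this repository. Key names here are hypothetical examples.
buf = io.BytesIO()
np.savez(
    buf,
    observations=np.random.randn(8, 11),      # 8 steps of an 11-dim observation
    actions=np.random.uniform(-1, 1, (8, 3)), # 8 steps of a 3-dim action
)
buf.seek(0)

data = np.load(buf)
print(data.files)  # list the array keys actually stored in the archive
obs, act = data["observations"], data["actions"]
print(obs.shape, act.shape)
```

The same `np.load` call works on `normalization.npz`; apply its statistics to observations and actions before feeding them to the policy.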

## Usage

Reference files with the `hf://` prefix in config files and they are downloaded automatically:

```yaml
train_dataset_path: hf://gym/hopper-medium-v2/train.npz
normalization_path: hf://gym/hopper-medium-v2/normalization.npz
```
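
As a rough sketch, an `hf://` path can be resolved to a local file with `huggingface_hub`. This is a hypothetical illustration, not the actual DMPO loader, and `REPO_ID` is a placeholder to be replaced with this dataset's real repo id.

```python
HF_PREFIX = "hf://"
REPO_ID = "your-org/dmpo-demos"  # placeholder, not the real repo id


def split_hf_path(path: str) -> str:
    """Strip the hf:// prefix, returning the file path inside the repo."""
    if not path.startswith(HF_PREFIX):
        raise ValueError(f"not an hf:// path: {path}")
    return path[len(HF_PREFIX):]


def resolve(path: str) -> str:
    """Download the file from the dataset repo and return its local path."""
    from huggingface_hub import hf_hub_download  # deferred optional import
    return hf_hub_download(
        repo_id=REPO_ID,
        filename=split_hf_path(path),
        repo_type="dataset",
    )


print(split_hf_path("hf://gym/hopper-medium-v2/train.npz"))
# gym/hopper-medium-v2/train.npz
```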

## Data Sources

- **Gym tasks**: Derived from [D4RL](https://github.com/Farama-Foundation/D4RL) datasets
- **Robomimic tasks**: Derived from [Robomimic](https://github.com/ARISE-Initiative/robomimic) proficient-human demonstrations

## Citation

```bibtex
@misc{zou2026stepenoughdispersivemeanflow,
      title={One Step Is Enough: Dispersive MeanFlow Policy Optimization},
      author={Guowei Zou and Haitao Wang and Hejun Wu and Yukun Qian and Yuhang Wang and Weibing Li},
      year={2026},
      eprint={2601.20701},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.20701},
}
```

## License

MIT License