Add dataset card and metadata
#1
by nielsr (HF Staff) - opened
README.md
ADDED
---
task_categories:
- robotics
---

# From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning (DICE-RL)

This repository contains the datasets used in the paper [From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning](https://huggingface.co/papers/2603.10263).

[**Project Website**](https://zhanyisun.github.io/dice.rl.2026/) | [**GitHub Repository**](https://github.com/zhanyisun/dice-rl)

## Dataset Description

Distribution Contractive Reinforcement Learning (DICE-RL) is a framework that uses reinforcement learning (RL) to refine pretrained generative robot policies. This repository hosts the data used for pretraining Behavior Cloning (BC) policies and finetuning them with DICE-RL across various Robomimic environments.

The datasets cover both:
- **Low-dimensional (state-based)** observations.
- **Image-based (pixel-based)** observations.

### Data Splits
- `ph_pretrain`: Datasets used for pretraining the BC policies for broad behavioral coverage.
- `ph_finetune`: Datasets used for DICE-RL finetuning. These trajectories are truncated to have exactly one success at the end, to ensure consistent value learning (see the sketch below).

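To make the `ph_finetune` convention concrete, here is a minimal sketch of such a truncation. It assumes each trajectory is a dict of per-step arrays with a boolean `success` flag; that field name is hypothetical, and the released files are already truncated, so this is purely illustrative.

```python
import numpy as np

def truncate_at_first_success(traj):
    """Cut a trajectory at its first success step (inclusive).

    NOTE: `success` is a hypothetical field name used for illustration;
    the actual schema of `train.npy` is defined by the training code.
    """
    success = np.asarray(traj["success"], dtype=bool)
    if not success.any():
        return None  # never succeeds: such a trajectory would be dropped
    end = int(np.argmax(success)) + 1  # index of the first True, inclusive
    return {key: np.asarray(val)[:end] for key, val in traj.items()}
```
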
## Dataset Structure

The datasets are provided in NumPy format (`.npy`/`.npz` files). Once downloaded, they follow this structure:

```
data_dir/
└── robomimic
    ├── {env_name}-low-dim
    │   ├── ph_pretrain
    │   └── ph_finetune
    └── {env_name}-img
        ├── ph_pretrain
        └── ph_finetune
```

Each folder contains the following files (a loading sketch is shown below):
- `train.npy`: The trajectory data.
- `normalization.npz`: Statistics used for data normalization.

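A minimal loading sketch, assuming `train.npy` was saved with `np.save` (trajectories stored as Python objects need `allow_pickle=True`); the folder name `lift-low-dim` is just one example of the `{env_name}-low-dim` pattern:

```python
import numpy as np

# Example path following the layout above; swap in the env/split you need.
folder = "data_dir/robomimic/lift-low-dim/ph_pretrain"

# Trajectory data; object arrays require allow_pickle=True to load.
train = np.load(f"{folder}/train.npy", allow_pickle=True)

# Normalization statistics, stored as named arrays in an .npz archive.
stats = np.load(f"{folder}/normalization.npz")
print(stats.files)  # list which statistics the archive provides
```
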
## Sample Usage

To download the datasets as intended by the authors, you can use the script provided in the [official repository](https://github.com/zhanyisun/dice-rl):

```console
bash script/download_hf.sh
```

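Alternatively, a snapshot of the repository can be fetched directly with `huggingface_hub`. This is a sketch, not the authors' documented path: `<repo_id>` is a placeholder for this dataset's Hub ID, and `local_dir` can be whatever your setup expects.

```python
from huggingface_hub import snapshot_download

# <repo_id> is a placeholder; substitute this dataset's actual Hub ID.
snapshot_download(
    repo_id="<repo_id>",
    repo_type="dataset",   # dataset repo, not a model repo
    local_dir="data_dir",  # mirrors the layout shown above
)
```
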
## Citation

```bibtex
@article{sun2026prior,
  title={From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning},
  author={Sun, Zhanyi and Song, Shuran},
  journal={arXiv preprint arXiv:2603.10263},
  year={2026}
}
```