Update dataset card: Add paper, project page, code, sample usage, citation, and tags (#2)
by nielsr (HF Staff), opened

README.md CHANGED
````diff
@@ -4,19 +4,27 @@ task_categories:
 - robotics
 tags:
 - LeRobot
+- manipulation
+- vision-to-action
+- flow-matching
+- av-aloha
+- robomimic
+language:
+- en
 configs:
 - config_name: default
   data_files: data/*/*.parquet
 ---
 
-This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
+This dataset was created using [LeRobot](https://github.com/huggingface/lerobot) and contains demonstrations related to the paper [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231).
 
 ## Dataset Description
 
-
-- **
-- **
+This dataset is part of the VITA project, a noise-free and conditioning-free policy learning framework that directly maps visual representations to latent actions using flow matching. It provides data for robotic manipulation tasks from benchmarks like ALOHA and Robomimic.
+
+- **Homepage:** [https://ucd-dare.github.io/VITA/](https://ucd-dare.github.io/VITA/)
+- **Paper:** [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231)
+- **Code:** [https://github.com/ucd-dare/VITA](https://github.com/ucd-dare/VITA)
 - **License:** apache-2.0
 
 ## Dataset Structure
@@ -182,11 +190,30 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
 }
 ```
 
+## Sample Usage
+
+This dataset is designed to be used with the [VITA framework](https://github.com/ucd-dare/VITA), which utilizes [LeRobot](https://github.com/huggingface/lerobot) for data handling. The VITA GitHub repository provides a `convert.py` script to preprocess Hugging Face datasets into an optimized offline Zarr format for faster training.
+
+To preprocess this dataset for use with VITA, first set up the VITA environment as described in its [GitHub README's Getting Started section](https://github.com/ucd-dare/VITA#getting-started).
+
+Once the environment is set up, you can convert this Hugging Face dataset (e.g., `iantc104/av_aloha_sim_hook_package` as an example repository ID from the VITA project) to the offline Zarr format by running the `convert.py` script:
+
+```bash
+cd VITA/gym-av-aloha/scripts
+python convert.py -r iantc104/av_aloha_sim_hook_package
+```
+
+The processed dataset will be stored in `./VITA/gym-av-aloha/outputs`, ready for training with VITA's scripts.
 
 ## Citation
 
 **BibTeX:**
 
 ```bibtex
-
+@article{gao2025vita,
+  title={VITA: Vision-to-Action Flow Matching Policy},
+  author={Gao, Dechen and Zhao, Boqi and Lee, Andrew and Chuang, Ian and Zhou, Hanchu and Wang, Hang and Zhao, Zhe and Zhang, Junshan and Soltani, Iman},
+  journal={arXiv preprint arXiv:2507.13231},
+  year={2025}
+}
 ```
````