Update dataset card: add paper, project page, code, sample usage, flow-matching tag, and citation
#2 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -4,19 +4,21 @@ task_categories:
 - robotics
 tags:
 - LeRobot
+- flow-matching
 configs:
 - config_name: default
   data_files: data/*/*.parquet
 ---
 
-This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
+This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). It contains data associated with the paper [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231).
 
 ## Dataset Description
 
+This dataset is associated with the paper [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231). VITA introduces a noise-free and conditioning-free policy learning framework that directly maps visual representations to latent actions using flow matching. This dataset comprises the data used for evaluating VITA on 8 simulation and 2 real-world tasks from ALOHA and Robomimic.
 
-
-- **
-- **
+- **Homepage:** [https://ucd-dare.github.io/VITA/](https://ucd-dare.github.io/VITA/)
+- **Paper:** [https://huggingface.co/papers/2507.13231](https://huggingface.co/papers/2507.13231)
+- **Code:** [https://github.com/ucd-dare/VITA](https://github.com/ucd-dare/VITA)
 - **License:** apache-2.0
 
 ## Dataset Structure
@@ -119,7 +121,7 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
 "video.channels": 3,
 "has_audio": false
 }
-}
+},
 "timestamp": {
 "dtype": "float32",
 "shape": [
@@ -159,11 +161,58 @@ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot)
 }
 ```
 
-
+## Sample Usage
 
+The datasets are designed to be used with the VITA codebase, which extends [LeRobot](https://github.com/huggingface/lerobot) for optimized preprocessing and training.
+
+First, set up the VITA environment as described in the [GitHub repository](https://github.com/ucd-dare/VITA):
+```bash
+git clone git@github.com:ucd-dare/VITA.git
+cd VITA
+conda create --name vita python==3.10
+conda activate vita
+conda install cmake
+pip install -e .
+pip install -r requirements.txt
+# Install LeRobot dependencies
+cd lerobot
+pip install -e .
+# Install ffmpeg for dataset processing
+conda install -c conda-forge ffmpeg
+```
+
+Set the dataset storage path:
+```bash
+echo 'export FLARE_DATASETS_DIR=<PATH_TO_VITA>/gym-av-aloha/outputs' >> ~/.bashrc
+# Reload bashrc
+source ~/.bashrc
+conda activate vita
+```
 
-
+You can list available datasets (hosted on Hugging Face) using the conversion script:
+```bash
+cd gym-av-aloha/scripts
+python convert.py --ls
+```
+
+To convert a Hugging Face dataset to the optimized offline Zarr format for faster training (this may take >10 minutes), for example:
+```bash
+python convert.py -r iantc104/av_aloha_sim_hook_package
+```
+Converted datasets will be stored in the path specified by `FLARE_DATASETS_DIR`.
+
+To train a policy on a task (e.g., `hook_package`) with the VITA framework:
+```bash
+python flare/train.py policy=vita task=hook_package session=test
+```
+
+## Citation
 
 ```bibtex
-
+@article{gao2025vita,
+  title={VITA: Vision-to-Action Flow Matching Policy},
+  author={Gao, Dechen and Zhao, Boqi and Lee, Andrew and Chuang, Ian and Zhou, Hanchu and Wang, Hang and Zhao, Zhe and Zhang, Junshan and Soltani, Iman},
+  journal={arXiv preprint arXiv:2507.13231},
+  year={2025}
+}
 ```
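As background for the `flow-matching` tag added above: the card describes VITA as mapping visual representations to latent actions via flow matching. The sketch below shows only the generic flow-matching regression target (linear interpolation path and constant conditional velocity), not VITA's actual implementation; the variable names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic flow-matching training pair (illustrative, not VITA's code):
# for a source sample x0 (e.g. an encoded observation) and a target
# sample x1 (e.g. a latent action), the linear path is
#   x_t = (1 - t) * x0 + t * x1
# and the conditional velocity target is constant: v = x1 - x0.
def flow_matching_pair(x0, x1, t):
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

x0 = rng.standard_normal(4)  # stand-in for a visual representation
x1 = rng.standard_normal(4)  # stand-in for a latent action
x_t, v = flow_matching_pair(x0, x1, 0.5)

# A model f(x_t, t) would be regressed onto v with an MSE loss;
# at t = 0.5 the interpolant is the midpoint of x0 and x1.
assert np.allclose(x_t, 0.5 * (x0 + x1))
assert np.allclose(v, x1 - x0)
```

At inference, integrating the learned velocity field from t=0 to t=1 transports the source representation to an action sample.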
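The `configs` block in the card points the default config at `data/*/*.parquet`, i.e. one parquet file per episode chunk. The snippet below is a minimal, self-contained sketch of reading files in that layout with pandas; it fabricates a tiny stand-in file first, and the column names are illustrative assumptions rather than the dataset's real schema.

```python
import glob
import os
import tempfile

import pandas as pd

# Fabricate a tiny stand-in episode file under the data/*/*.parquet layout
# described by the card's `configs` block (columns are illustrative).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "data", "chunk-000"), exist_ok=True)
pd.DataFrame({
    "timestamp": [0.0, 0.033, 0.066],  # float32 timestamps, per the card
    "episode_index": [0, 0, 0],
}).to_parquet(os.path.join(root, "data", "chunk-000", "episode_000000.parquet"))

# Read every episode chunk back with the same glob pattern.
files = sorted(glob.glob(os.path.join(root, "data", "*", "*.parquet")))
frames = pd.concat(map(pd.read_parquet, files), ignore_index=True)
print(len(frames))  # 3
```

For real use, the same pattern applies to the downloaded dataset directory, or the Hub can materialize the config directly.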