---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- flow-matching
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
This dataset contains demonstration data for robotic tasks, used by VITA: Vision-to-Action Flow Matching Policy. VITA is a noise-free, conditioning-free policy-learning framework that directly maps visual representations to latent actions using flow matching. It treats latent visual representations as the source of the flow, eliminating the need for a separate noise distribution or conditioning mechanism. VITA is evaluated on 8 simulation and 2 real-world tasks from ALOHA and Robomimic, outperforming or matching state-of-the-art generative policies while achieving faster inference.
- **Homepage:** https://ucd-dare.github.io/VITA/
- **Paper:** https://huggingface.co/papers/2507.13231
- **Code:** https://github.com/ucd-dare/VITA
- **License:** Apache 2.0
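The core idea behind VITA's flow matching can be illustrated with a minimal numpy sketch. The dimensions are hypothetical, and the ground-truth velocity stands in for the learned network; in VITA the source of the flow is a latent visual representation rather than Gaussian noise, so no conditioning input is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared latent dimension: the flow's source (visual latent)
# and target (latent action) must live in the same space.
d = 8
z_vis = rng.normal(size=d)   # latent visual representation (flow source)
a_lat = rng.normal(size=d)   # latent action (flow target)

# Along the straight interpolation path
#   x_t = (1 - t) * z_vis + t * a_lat,
# the velocity is constant; this is what the network is trained to predict.
target_velocity = a_lat - z_vis

# At inference, integrate the (here: ground-truth) velocity field from the
# visual latent toward the action latent with simple Euler steps.
n_steps = 10
x = z_vis.copy()
for _ in range(n_steps):
    x = x + (1.0 / n_steps) * target_velocity

assert np.allclose(x, a_lat)  # the flow transports z_vis exactly onto a_lat
```

Because the straight-path velocity is constant, Euler integration recovers the target exactly in this toy setting; with a learned velocity field the result is approximate.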
## Sample Usage
To get started with VITA and use this dataset, follow these steps to set up the environment, preprocess datasets, and train a policy, as described in the VITA GitHub repository.
First, clone the repository and set up the conda environment:
```bash
git clone git@github.com:ucd-dare/VITA.git
cd VITA
conda create --name vita python=3.10
conda activate vita
conda install cmake
pip install -e .
pip install -r requirements.txt

# Install LeRobot dependencies
cd lerobot
pip install -e .
cd ..

# Install ffmpeg for dataset processing
conda install -c conda-forge ffmpeg
```
Set the dataset storage path (replace <PATH_TO_VITA> with the absolute path to your cloned VITA directory):
```bash
echo 'export FLARE_DATASETS_DIR=<PATH_TO_VITA>/gym-av-aloha/outputs' >> ~/.bashrc
source ~/.bashrc
conda activate vita
```
Install benchmark dependencies for AV-ALOHA and/or Robomimic as needed:
```bash
# For AV-ALOHA
cd gym-av-aloha
pip install -e .
cd ..

# For Robomimic
cd gym-robomimic
pip install -e .
cd ..
```
To convert a Hugging Face dataset to the offline zarr format (e.g., `iantc104/av_aloha_sim_hook_package`):

```bash
cd gym-av-aloha/scripts
python convert.py -r iantc104/av_aloha_sim_hook_package
```

The converted datasets are stored in `./gym-av-aloha/outputs`.
To train a VITA policy, use the `flare/train.py` script:

```bash
python flare/train.py policy=vita task=hook_package session=test
```
You can customize training with various flags, for example:
```bash
# Use a specific GPU
python flare/train.py policy=vita task=hook_package session=test device=cuda:2

# Change online validation frequency and episodes
python flare/train.py policy=vita task=hook_package session=test \
    val.val_online_freq=2000 val.eval_n_episodes=10
```
## Dataset Structure

The dataset follows the LeRobot v2.0 format; its `meta/info.json` contains:

```json
{
    "codebase_version": "v2.0",
    "robot_type": "unknown",
    "total_episodes": 206,
    "total_frames": 25650,
    "total_tasks": 1,
    "total_videos": 206,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 10,
    "splits": {
        "train": "0:206"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.image": {
            "dtype": "video",
            "shape": [96, 96, 3],
            "names": ["height", "width", "channel"],
            "video_info": {
                "video.fps": 10.0,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [2],
            "names": {
                "motors": ["motor_0", "motor_1"]
            }
        },
        "action": {
            "dtype": "float32",
            "shape": [2],
            "names": {
                "motors": ["motor_0", "motor_1"]
            }
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "next.reward": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "next.done": {
            "dtype": "bool",
            "shape": [1],
            "names": null
        },
        "next.success": {
            "dtype": "bool",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
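As a sketch of how a consumer resolves concrete episode files from the `data_path` template above (`episode_file` is a helper introduced here for illustration; episodes are grouped into chunks of `chunks_size`, so with 206 episodes and a chunk size of 1000 everything lands in chunk 0):

```python
chunks_size = 1000
total_episodes = 206
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

def episode_file(episode_index: int) -> str:
    """Resolve the parquet path for one episode from the info.json template."""
    return data_path.format(
        episode_chunk=episode_index // chunks_size,
        episode_index=episode_index,
    )

# The "train" split "0:206" covers episode indices 0 through 205.
assert episode_file(0) == "data/chunk-000/episode_000000.parquet"
assert episode_file(205) == "data/chunk-000/episode_000205.parquet"
```

The same `{episode_chunk:03d}` / `{episode_index:06d}` scheme applies to `video_path`, with an extra `{video_key}` segment (here, `observation.image`).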
## Citation

**BibTeX:**

```bibtex
@article{gao2025vita,
  title={VITA: Vision-to-Action Flow Matching Policy},
  author={Gao, Dechen and Zhao, Boqi and Lee, Andrew and Chuang, Ian and Zhou, Hanchu and Wang, Hang and Zhao, Zhe and Zhang, Junshan and Soltani, Iman},
  journal={arXiv preprint arXiv:2507.13231},
  year={2025}
}
```