---
license: apache-2.0
task_categories:
- robotics
language:
- en
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using LeRobot.
## Dataset Description

This dataset provides the robotic trajectories and observations used in the paper [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231). VITA introduces a noise-free, conditioning-free policy learning framework that directly maps visual representations to latent actions via flow matching, enabling faster inference for robotic manipulation tasks. The datasets follow the LeRobot Hugging Face format and are additionally optimized into an offline zarr format for faster training.
- Homepage: https://ucd-dare.github.io/VITA/
- Paper: https://huggingface.co/papers/2507.13231
- Code: https://github.com/ucd-dare/VITA
- License: apache-2.0
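As background, the vision-to-action flow described above can be illustrated with a minimal numpy sketch. This is purely illustrative: the latent dimension, variable names, and the straight-line interpolation schedule are assumptions for exposition, not the exact VITA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-dim latents: a vision latent z0 is transported
# directly to an action latent z1 (no Gaussian noise source).
z0 = rng.standard_normal(16)   # vision latent (flow source)
z1 = rng.standard_normal(16)   # action latent (flow target)

t = 0.3                        # interpolation time in [0, 1]
x_t = (1.0 - t) * z0 + t * z1  # point on the straight path at time t

# For a linear path the target velocity is constant: d x_t / d t = z1 - z0.
# A velocity network v_theta(x_t, t) would regress this target.
v_target = z1 - z0

# Sanity check: integrating the constant velocity from t to 1 lands on z1.
assert np.allclose(x_t + (1.0 - t) * v_target, z1)
```

Sampling an action then amounts to starting from the vision latent and integrating the learned velocity field from t=0 to t=1.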
## Dataset Structure
```json
{
    "codebase_version": "v2.1",
    "robot_type": null,
    "total_episodes": 175,
    "total_frames": 26266,
    "total_tasks": 1,
    "total_videos": 350,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 20,
    "splits": {
        "train": "0:175"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "action.delta": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "action.absolute": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [43],
            "names": null
        },
        "observation.environment_state": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "observation.images.agentview_image": {
            "dtype": "video",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.fps": 20.0,
                "video.height": 256,
                "video.width": 256,
                "video.channels": 3,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "observation.images.robot0_eye_in_hand_image": {
            "dtype": "video",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.fps": 20.0,
                "video.height": 256,
                "video.width": 256,
                "video.channels": 3,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
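The `data_path` and `video_path` templates above determine where each episode lives on disk. A small stdlib-only sketch, using the chunk size from the metadata (the default `video_key` below is one of the two camera keys listed in `features`):

```python
# Resolve on-disk paths from an episode index using the path templates
# and chunk size given in the dataset metadata.
CHUNKS_SIZE = 1000
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

def episode_paths(episode_index: int,
                  video_key: str = "observation.images.agentview_image"):
    # Episodes are grouped 1000 per chunk; with 175 episodes, all are in chunk 0.
    chunk = episode_index // CHUNKS_SIZE
    data = DATA_PATH.format(episode_chunk=chunk, episode_index=episode_index)
    video = VIDEO_PATH.format(episode_chunk=chunk, video_key=video_key,
                              episode_index=episode_index)
    return data, video

data, video = episode_paths(42)
print(data)   # data/chunk-000/episode_000042.parquet
print(video)  # videos/chunk-000/observation.images.agentview_image/episode_000042.mp4
```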
## Sample Usage

This dataset is designed to be used with the VITA codebase, which extends LeRobot. Below are examples for converting datasets to an optimized zarr format and for training a VITA policy.

First, ensure the VITA repository is cloned and set up, and that the `FLARE_DATASETS_DIR` environment variable is set as described in the VITA GitHub repository.
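For reference, setting the environment variable might look like the following; the path is a placeholder, so point it at your own dataset directory:

```shell
# Placeholder path: replace with wherever your converted datasets should live.
export FLARE_DATASETS_DIR="$HOME/flare_datasets"
echo "$FLARE_DATASETS_DIR"
```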
### Dataset Preprocessing

To list available datasets:

```bash
cd gym-av-aloha/scripts
python convert.py --ls
```

To convert a Hugging Face dataset to an offline zarr format (e.g., `av_aloha_sim_hook_package`):

```bash
python convert.py -r iantc104/av_aloha_sim_hook_package
```
### Training a VITA Policy

Once the dataset is converted, you can train a VITA policy using the `flare` module from the VITA codebase:

```bash
python flare/train.py policy=vita task=hook_package session=test
```

You can override default configurations as needed:

```bash
# Example: use a specific GPU
python flare/train.py policy=vita task=hook_package session=test device=cuda:2

# Example: change online validation frequency and episodes
python flare/train.py policy=vita task=hook_package session=test \
    val.val_online_freq=2000 val.eval_n_episodes=10

# Example: run an ablation
python flare/train.py policy=vita task=hook_package session=ablate \
    policy.vita.decode_flow_latents=False wandb.notes=ablation
```
## Citation

**BibTeX:**

```bibtex
@article{gao2025vita,
  title={VITA: Vision-to-Action Flow Matching Policy},
  author={Gao, Dechen and Zhao, Boqi and Lee, Andrew and Chuang, Ian and Zhou, Hanchu and Wang, Hang and Zhao, Zhe and Zhang, Junshan and Soltani, Iman},
  journal={arXiv preprint arXiv:2507.13231},
  year={2025}
}
```