---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- flow-matching
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). It contains data associated with the paper [VITA: Vision-to-Action Flow Matching Policy](https://huggingface.co/papers/2507.13231).
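
Since the dataset follows the LeRobot v2.1 format, it can also be loaded directly with `LeRobotDataset` (a minimal sketch; `<repo_id>` stands in for this dataset's Hub id, and the import path assumes a recent `lerobot` release):

```python
# Minimal sketch, assuming `lerobot` is installed and <repo_id> is replaced
# with this dataset's Hugging Face Hub id.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("<repo_id>")
print(dataset.num_episodes, dataset.num_frames)  # 192 episodes, 22305 frames

sample = dataset[0]  # dict of tensors: action, observation.state, images, ...
print(sample.keys())
```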

## Dataset Description

VITA introduces a noise-free and conditioning-free policy learning framework that directly maps visual representations to latent actions using flow matching. This dataset comprises the data used to evaluate VITA on 8 simulation and 2 real-world tasks from ALOHA and Robomimic.

- **Homepage:** [https://ucd-dare.github.io/VITA/](https://ucd-dare.github.io/VITA/)
- **Paper:** [https://huggingface.co/papers/2507.13231](https://huggingface.co/papers/2507.13231)
- **Code:** [https://github.com/ucd-dare/VITA](https://github.com/ucd-dare/VITA)
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v2.1",
    "robot_type": null,
    "total_episodes": 192,
    "total_frames": 22305,
    "total_tasks": 1,
    "total_videos": 384,
    "total_chunks": 1,
    "chunks_size": 1000,
    "fps": 20,
    "splits": {
        "train": "0:192"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "action.delta": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "action.absolute": {
            "dtype": "float32",
            "shape": [7],
            "names": null
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [43],
            "names": null
        },
        "observation.environment_state": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "observation.images.agentview_image": {
            "dtype": "video",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 256,
                "video.width": 256,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 20,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.robot0_eye_in_hand_image": {
            "dtype": "video",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 256,
                "video.width": 256,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 20,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
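
The per-episode parquet files follow the `data_path` template above, so a single episode can be inspected directly (a minimal sketch, assuming `pandas` and `pyarrow` are installed):

```python
# Minimal sketch: build one episode path from the `data_path` template in
# meta/info.json and inspect its columns. Assumes pandas + pyarrow.
import pandas as pd

path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet".format(
    episode_chunk=0, episode_index=0
)
df = pd.read_parquet(path)
print(df.columns.tolist())        # action, action.delta, observation.state, ...
print(len(df["action"].iloc[0]))  # 7, matching the feature spec above
```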

## Sample Usage

The datasets are designed to be used with the VITA codebase, which extends [LeRobot](https://github.com/huggingface/lerobot) for optimized preprocessing and training.

First, set up the VITA environment as described in the [GitHub repository](https://github.com/ucd-dare/VITA):

```bash
git clone git@github.com:ucd-dare/VITA.git
cd VITA
conda create --name vita python=3.10
conda activate vita
conda install cmake
pip install -e .
pip install -r requirements.txt

# Install LeRobot dependencies
cd lerobot
pip install -e .

# Install ffmpeg for dataset processing
conda install -c conda-forge ffmpeg
```

Set the dataset storage path:

```bash
echo 'export FLARE_DATASETS_DIR=<PATH_TO_VITA>/gym-av-aloha/outputs' >> ~/.bashrc
# Reload bashrc
source ~/.bashrc
conda activate vita
```
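
After reloading, you can confirm the variable is visible to Python before converting (a quick sanity-check sketch):

```python
# Sanity-check sketch: verify the dataset root is set before running convert.py.
import os

root = os.environ.get("FLARE_DATASETS_DIR")
assert root is not None, "FLARE_DATASETS_DIR is not set; re-run the export above"
print(f"Converted datasets will be stored under: {root}")
```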

You can list available datasets (hosted on Hugging Face) using the conversion script:

```bash
cd gym-av-aloha/scripts
python convert.py --ls
```

To convert a Hugging Face dataset to the optimized offline Zarr format for faster training (this may take more than 10 minutes), run, for example:

```bash
python convert.py -r iantc104/av_aloha_sim_hook_package
```

Converted datasets are stored under the path specified by `FLARE_DATASETS_DIR`.
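
To spot-check a converted dataset, you can open the resulting Zarr store (a sketch under assumptions: the exact layout written by `convert.py` is not documented here, and the subdirectory name below mirrors the repo id from the example above):

```python
# Sketch, not the documented API: open a converted store under
# FLARE_DATASETS_DIR and list its contents. The subdirectory name is an
# assumption based on the repo id used in the conversion example.
import os
import zarr

store_path = os.path.join(os.environ["FLARE_DATASETS_DIR"], "av_aloha_sim_hook_package")
root = zarr.open(store_path, mode="r")
print(root.tree())  # groups/arrays written by convert.py
```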

To train a policy on a task (e.g., `hook_package`) with the VITA framework:

```bash
python flare/train.py policy=vita task=hook_package session=test
```

## Citation

```bibtex
@article{gao2025vita,
  title={VITA: Vision-to-Action Flow Matching Policy},
  author={Gao, Dechen and Zhao, Boqi and Lee, Andrew and Chuang, Ian and Zhou, Hanchu and Wang, Hang and Zhao, Zhe and Zhang, Junshan and Soltani, Iman},
  journal={arXiv preprint arXiv:2507.13231},
  year={2025}
}
```