---
license: apache-2.0
task_categories:
- robotics
tags:
- vision-language-action
- vlm
- zero-shot
---
This repository contains the OTTER dataset, used in the paper *OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction*.
Project Page: https://ottervla.github.io/
Code: https://github.com/Max-Fu/otter
Dataset Download:
The OTTER dataset is hosted on Hugging Face in TFDS format and supports pre-training on Open X-Embodiment. You can download the datasets with the Hugging Face CLI:
```bash
# first install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# download the datasets
mkdir -p dataset
pushd dataset
huggingface-cli download mlfu7/icrt_pour --repo-type dataset --local-dir .
huggingface-cli download mlfu7/icrt_drawer --repo-type dataset --local-dir .
huggingface-cli download mlfu7/icrt_poke --repo-type dataset --local-dir .
huggingface-cli download mlfu7/icrt_pickplace_1 --repo-type dataset --local-dir .
huggingface-cli download mlfu7/icrt_stack_mul_tfds --repo-type dataset --local-dir .
huggingface-cli download mlfu7/icrt_pickplace --repo-type dataset --local-dir .
popd
```
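Since the datasets are in TFDS format, they can be inspected locally after download. The sketch below is illustrative, not part of the official tooling: it assumes the files landed in `./dataset` as in the commands above and that `tensorflow_datasets` is installed; the helper `find_tfds_builder_dirs` is a hypothetical name, and the exact directory layout inside each repo may differ.

```python
# Sketch: locate and open downloaded TFDS builder directories (assumptions:
# downloads live under ./dataset; tensorflow_datasets is available).
from pathlib import Path


def find_tfds_builder_dirs(root: str) -> list:
    """Return directories under `root` containing TFDS metadata
    (a `dataset_info.json` file), i.e. loadable builder dirs."""
    return sorted(p.parent for p in Path(root).rglob("dataset_info.json"))


if __name__ == "__main__":
    for builder_dir in find_tfds_builder_dirs("dataset"):
        import tensorflow_datasets as tfds  # heavy import, kept local

        builder = tfds.builder_from_directory(str(builder_dir))
        ds = builder.as_dataset(split="train")
        print(builder_dir, builder.info.features)
```

`tfds.builder_from_directory` reads the on-disk metadata directly, so no dataset registration is needed for local inspection.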
A converted LeRobot version of the dataset is also available for fine-tuning Pi0 (here), and fine-tuning scripts are provided here.