Please provide smaller subsets of vlabench_primitive_ft_dataset

#2
by jwchang417 - opened

Hi, and thank you for releasing the vlabench_primitive_ft_dataset!

The dataset is incredibly valuable for fine-tuning VLA models. However, the current release format of 77 split archive files (~800 GB in total) makes it very challenging to use for those of us with limited disk space (e.g., a 1 TB SSD).

The full dataset contains 10 primitive tasks with 500 episodes each. It would be extremely helpful if you could also provide smaller, pre-divided subsets, such as:

  • 100 episodes per task, or
  • per-task individual datasets (e.g., vlabench_primitive_place_red_cube), or
  • a lightweight variant like vlabench_primitive_mini.

This would allow researchers and developers to:

  • Test pipelines without requiring huge disk space,
  • Validate integration and preprocessing without processing the full dataset,
  • Work within constrained academic or personal compute environments.

A vlabench_primitive_mini (e.g., ~5-10 GB total) with a fixed number of HDF5 episodes per task would go a long way toward making this dataset more accessible and flexible.
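As a stopgap until official subsets exist, one possible workaround is to download only some of the split archives via `huggingface_hub`'s `allow_patterns` filter. This is a hedged sketch: the `*.partNNN` naming and the ~equal archive sizes are assumptions (check the actual file listing on the Hub), and only the pattern-building helper is shown running; the download call itself is illustrated in comments.

```python
# Hypothetical helper: pick glob patterns for the first k split archives
# that fit in a given disk budget, assuming ~equal-size parts.
# File naming ("*.partNNN") is an ASSUMPTION -- verify against the repo.
def part_patterns(budget_gb: float, total_gb: float = 800.0, n_parts: int = 77) -> list[str]:
    per_part = total_gb / n_parts               # ~10.4 GB per archive on average
    k = max(1, min(n_parts, int(budget_gb / per_part)))
    return [f"*.part{i:03d}" for i in range(1, k + 1)]

patterns = part_patterns(budget_gb=50)          # e.g., a 50 GB disk budget

# With huggingface_hub installed, the subset could then be fetched as:
# from huggingface_hub import snapshot_download
# snapshot_download(
#     repo_id="VLABench/vlabench_primitive_ft_dataset",
#     repo_type="dataset",
#     allow_patterns=patterns,   # only matching files are downloaded
#     local_dir="./vlabench_subset",
# )
```

Note this only reduces the download footprint; whether a partial set of split archives can be extracted on its own depends on how the archives were split.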

Thank you again for your excellent work on VLABench!

VLABench org

Sorry for the late reply, I seem to have missed the notification email from HF lol.
Thank you very much for your suggestions! I recently uploaded the fine-tuning dataset in RLDS format to the HF repository. After resizing and JPEG encoding, the dataset is only 34 GB, and it can be loaded directly by existing data frameworks that support the RLDS format. The dataset ID is 'VLABench/vlabench_primitive_rlds_resize224'. In the next few weeks, I will upload some additional datasets and model checkpoints. Stay tuned!
