Associated paper: LIBERO-Plus: In-depth Robustness Analysis of Vision-Language-Action Models (arXiv:2510.13626).
This dataset provides primitive annotations for robotic manipulation demonstrations used to train and evaluate the NS-VLA framework. Each demonstration trajectory is segmented and labeled with structured manipulation primitives.
| Primitive | Description | Frequency |
|---|---|---|
| `pick` | Grasp a target object | 44.4% |
| `place_in` | Place object inside a container | 25.0% |
| `place_on` | Place object on a surface | 16.7% |
| `close` | Close an appliance (e.g., microwave) | 5.6% |
| `place_rel` | Place relative to another object | — |
| `turn_on` | Activate an appliance (e.g., stove) | — |
| `open` | Open an appliance door | — |
NS-VLA-Dataset/
├── libero/
│   ├── spatial/   # LIBERO-Spatial task annotations
│   ├── object/    # LIBERO-Object task annotations
│   ├── goal/      # LIBERO-Goal task annotations
│   └── long/      # LIBERO-Long task annotations
├── calvin/
│   └── ABC_D/     # CALVIN ABC→D split annotations
└── metadata.json  # Dataset statistics and splits
Each annotation file is a JSON with the following structure:
{
"task_id": "libero_spatial_01",
"instruction": "put the white mug on the left plate",
"primitives": [
{"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
{"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"}, "start": 46, "end": 102}
],
"total_steps": 102
}
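As a sanity check on this schema, the snippet below parses the example annotation above and verifies that the primitive segments are contiguous and cover the full trajectory. The `check_segmentation` helper is illustrative, not part of the dataset's tooling; treat it as a sketch until the files are released.

```python
import json

# Example annotation following the schema documented above (values taken
# from this card's sample; field names are as documented).
annotation = json.loads("""
{
  "task_id": "libero_spatial_01",
  "instruction": "put the white mug on the left plate",
  "primitives": [
    {"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
    {"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"},
     "start": 46, "end": 102}
  ],
  "total_steps": 102
}
""")

def check_segmentation(ann):
    """Return True if primitive segments start at frame 0, are contiguous
    (each segment starts one frame after the previous one ends), and the
    last segment ends at total_steps."""
    prims = ann["primitives"]
    if prims[0]["start"] != 0:
        return False
    for prev, cur in zip(prims, prims[1:]):
        if cur["start"] != prev["end"] + 1:
            return False
    return prims[-1]["end"] == ann["total_steps"]

ops = [p["op"] for p in annotation["primitives"]]
print(ops, check_segmentation(annotation))
```

The same check generalizes to any annotation file in the dataset, since every demonstration is segmented into non-overlapping, frame-indexed primitives.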
⚠️ Note: Dataset files will be released upon paper acceptance. Please check back soon.
Once released, the dataset can be loaded with the Hugging Face `datasets` library:

from datasets import load_dataset

dataset = load_dataset("Zuzuzzy/NS-VLA-Dataset")
@article{zhu2026nsvla,
title={NS-VLA: Towards Neuro-Symbolic Vision-Language-Action Models},
author={Zhu, Ziyue and Wu, Shangyang and Zhao, Shuai and Zhao, Zhiqiu and Li, Shengjie and Wang, Yi and Li, Fang and Luo, Haoran},
journal={arXiv preprint arXiv:XXXX.XXXXX},
year={2026}
}
This dataset is released under the Apache 2.0 License.