NS-VLA Dataset: Primitive-Annotated Robotic Manipulation Data

arXiv GitHub Project Page

Dataset Description

This dataset provides primitive annotations for robotic manipulation demonstrations used to train and evaluate the NS-VLA framework. Each demonstration trajectory is segmented and labeled with structured manipulation primitives.

Primitive Vocabulary

Primitive   Description                               Frequency
pick        Grasp a target object                     44.4%
place_in    Place object inside a container           25.0%
place_on    Place object on a surface                 16.7%
close       Close an appliance (e.g., microwave)      5.6%
place_rel   Place relative to another object          —
turn_on     Activate an appliance (e.g., stove)       —
open        Open an appliance door                    —
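
A downstream consumer can check that an annotation only draws on this vocabulary. A minimal sketch in Python (the set literal is copied from the table above; the helper name is our own, not part of the released tooling):

```python
# Vocabulary copied from the primitive table; helper name is hypothetical.
PRIMITIVES = {"pick", "place_in", "place_on", "place_rel", "close", "turn_on", "open"}

def unknown_ops(primitive_seq):
    """Return the ops in a primitive sequence that are not in the vocabulary."""
    return [p["op"] for p in primitive_seq if p["op"] not in PRIMITIVES]

seq = [{"op": "pick"}, {"op": "place_on"}, {"op": "stack"}]
print(unknown_ops(seq))  # ['stack']
```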

Dataset Structure

NS-VLA-Dataset/
├── libero/
│   ├── spatial/          # LIBERO-Spatial task annotations
│   ├── object/           # LIBERO-Object task annotations
│   ├── goal/             # LIBERO-Goal task annotations
│   └── long/             # LIBERO-Long task annotations
├── calvin/
│   └── ABC_D/            # CALVIN ABC→D split annotations
└── metadata.json         # Dataset statistics and splits

Annotation Format

Each annotation file is a JSON object with the following structure:

{
  "task_id": "libero_spatial_01",
  "instruction": "put the white mug on the left plate",
  "primitives": [
    {"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
    {"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"}, "start": 46, "end": 102}
  ],
  "total_steps": 102
}
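
The start/end fields delimit the trajectory steps covered by each primitive. A small sketch of parsing such a record, assuming the indices are inclusive (the sample dict simply restates the example above):

```python
# Hypothetical annotation record matching the schema shown above.
annotation = {
    "task_id": "libero_spatial_01",
    "instruction": "put the white mug on the left plate",
    "primitives": [
        {"op": "pick", "args": {"object": "white_mug"}, "start": 0, "end": 45},
        {"op": "place_on", "args": {"object": "white_mug", "support": "left_plate"},
         "start": 46, "end": 102},
    ],
    "total_steps": 102,
}

def segment_lengths(ann):
    """Per-primitive segment length, assuming inclusive start/end indices."""
    return {p["op"]: p["end"] - p["start"] + 1 for p in ann["primitives"]}

print(segment_lengths(annotation))  # {'pick': 46, 'place_on': 57}
```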

Usage

⚠️ Note: Dataset files will be released upon paper acceptance. Please check back soon.

from datasets import load_dataset

dataset = load_dataset("Zuzuzzy/NS-VLA-Dataset")
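
Once the files are available, records can be filtered by primitive, e.g. to select every demonstration containing a pick. A plain-Python sketch of that filter (field names follow the annotation schema; the demo records and helper name are illustrative, not real dataset entries):

```python
# Sketch: select demonstrations whose primitive sequence contains a given op.
# Field names ("primitives", "op", "task_id") follow the annotation schema;
# the records below are illustrative placeholders.
def filter_by_primitive(annotations, op):
    """Return the annotations that contain primitive `op`."""
    return [a for a in annotations if any(p["op"] == op for p in a["primitives"])]

demos = [
    {"task_id": "libero_spatial_01",
     "primitives": [{"op": "pick"}, {"op": "place_on"}]},
    {"task_id": "libero_goal_03",
     "primitives": [{"op": "open"}, {"op": "place_in"}, {"op": "close"}]},
]

print([a["task_id"] for a in filter_by_primitive(demos, "pick")])
# ['libero_spatial_01']
```

The same predicate can be passed to `Dataset.filter` from the `datasets` library once the splits are published.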

Associated Benchmarks

  • LIBERO — Language-conditioned robotic manipulation
  • LIBERO-Plus — Robustness evaluation with perturbations
  • CALVIN — Long-horizon language-conditioned manipulation

Citation

@article{zhu2026nsvla,
  title={NS-VLA: Towards Neuro-Symbolic Vision-Language-Action Models},
  author={Zhu, Ziyue and Wu, Shangyang and Zhao, Shuai and Zhao, Zhiqiu and Li, Shengjie and Wang, Yi and Li, Fang and Luo, Haoran},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2026}
}

License

This dataset is released under the Apache 2.0 License.
