# LIBERO-R Datasets

<p align="center">
<a href="https://yilin-wu98.github.io/steering-reasoning-vla/"><img src="https://img.shields.io/badge/Project-Page-blue" alt="Project Page"></a>
<a href="https://www.arxiv.org/abs/2510.16281"><img src="https://img.shields.io/badge/arXiv-2510.16281-b31b1b" alt="arXiv"></a>
<a href="https://github.com/NVlabs/actalign"><img src="https://img.shields.io/badge/Code-Instructions-purple" alt="Code"></a>
<a href="https://github.com/NVlabs/Libero-10-r"><img src="https://img.shields.io/badge/Reasoning-Benchmark-green" alt="Reasoning Benchmark"></a>
</p>

This repository contains chain-of-thought (CoT) textual reasoning annotations for the [LIBERO-100](https://libero-project.github.io/) robot manipulation benchmark datasets.

For details on how the data was labeled, see the paper: [Yilin Wu, Anqi Li, Tucker Hermans, Fabio Ramos, Andrea Bajcsy, Claudia Pérez-D'Arpino: Do What You Say: Steering Vision-Language-Action Models via Runtime Reasoning-Action Alignment Verification, arXiv 2025](https://www.arxiv.org/abs/2510.16281)
## Overview

The annotations provide step-by-step reasoning traces that decompose robot manipulation tasks into:

- **Plan**: A numbered list of sub-tasks to complete the instruction
- **What I have done**: Progress tracking of completed sub-tasks
- **Now I need to do**: The immediate next action to take
## Repository Structure

```
libero-r-datasets/
├── libero-10-r/              # LIBERO-10 dataset with reasoning annotations
│   ├── cot_simple.json       # Chain-of-thought annotations
│   ├── data/chunk-000/       # Episode parquet files
│   ├── meta/                 # Dataset metadata (episodes.jsonl, info.json, stats.json, tasks.jsonl)
│   └── README.md             # Original dataset README
│
├── libero-100-basket-r/      # Basket-related subset of LIBERO-100
│   ├── cot_simple.json       # Chain-of-thought annotations
│   ├── data/chunk-000/       # Episode parquet files
│   └── meta/                 # Dataset metadata
│
├── libero-100-r/             # Full LIBERO-100 dataset with reasoning annotations
│   ├── cot_simple.json       # Chain-of-thought annotations
│   ├── data/chunk-000..004/  # Episode parquet files
│   ├── meta/                 # Dataset metadata
│   └── README.md             # Original dataset README
│
└── README.md                 # This file
```
## Annotation Format

The `cot_simple.json` files contain reasoning annotations indexed by episode number. Each episode has multiple segments corresponding to different phases of task execution.

### Example Annotation

For an episode with the instruction *"put the white mug on the left plate and put the yellow and white mug on the right plate"*:

**Segment at task start (steps 0-12):**
```
Plan: 1. pick up the white mug
2. place the white mug on the left plate
3. pick up the yellow and white mug
4. place the yellow and white mug on the right plate
What I have done: Nothing.
Now I need to do: pick up the white mug
```

**Segment after first sub-task (steps 91-101):**

```
Plan: 1. pick up the white mug
2. place the white mug on the left plate
3. pick up the yellow and white mug
4. place the yellow and white mug on the right plate
What I have done: 1. pick up the white mug
Now I need to do: place the white mug on the left plate
```

**Segment at task completion (steps 291+):**

```
Plan: 1. pick up the white mug
2. place the white mug on the left plate
3. pick up the yellow and white mug
4. place the yellow and white mug on the right plate
What I have done: 1. pick up the white mug
2. place the white mug on the left plate
3. pick up the yellow and white mug
4. place the yellow and white mug on the right plate
Now I need to do: Nothing. Task complete.
```
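The three sections in each trace follow a fixed textual pattern, so they can be split apart mechanically. A small sketch (the function name `parse_reasoning` is ours, not part of the dataset tooling):

```python
import re

def parse_reasoning(text):
    """Split a reasoning trace into its three sections, using the
    'Plan:' / 'What I have done:' / 'Now I need to do:' markers
    shown in the examples above."""
    m = re.search(
        r"Plan:\s*(?P<plan>.*?)\s*"
        r"What I have done:\s*(?P<done>.*?)\s*"
        r"Now I need to do:\s*(?P<todo>.*)",
        text,
        re.S,
    )
    return {k: v.strip() for k, v in m.groupdict().items()}
```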
### JSON Structure

```json
{
  "episode_id": {
    "episode_start_interval": [start_step, end_step],
    "segments": [
      {
        "start_step": 0,
        "end_step": 12,
        "content": "Current reasoning state...",
        "updated_content": "Updated reasoning after this segment...",
        "updated_content_w_instruction": "Full text including instruction..."
      }
    ]
  }
}
```
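Given this structure, the segment covering a particular control step can be looked up directly from the `[start_step, end_step]` intervals. A minimal sketch, assuming `cot_simple.json` has been parsed into a dict matching the schema above (the helper name `segment_for_step` and the string episode key are our assumptions):

```python
import json

def segment_for_step(annotations, episode_id, step):
    """Return the reasoning segment whose [start_step, end_step]
    interval contains `step`, or None if no segment covers it."""
    for seg in annotations[episode_id]["segments"]:
        if seg["start_step"] <= step <= seg["end_step"]:
            return seg
    return None

# Typical usage (paths and keys are illustrative):
# with open("libero-10-r/cot_simple.json") as f:
#     annotations = json.load(f)
# seg = segment_for_step(annotations, "0", step=5)
# print(seg["content"])
```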
## Citation

If you use these annotations, please cite:

```bibtex
@article{wu2025saysteeringvisionlanguageactionmodels,
  title={Do What You Say: Steering Vision-Language-Action Models via Runtime Reasoning-Action Alignment Verification},
  author={Yilin Wu and Anqi Li and Tucker Hermans and Fabio Ramos and Andrea Bajcsy and Claudia P\'{e}rez-D'Arpino},
  year={2025},
  eprint={2510.16281},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2510.16281},
}
```