
LIBERO-R Datasets


This repository contains chain-of-thought (CoT) textual reasoning annotations for the LIBERO-100 robot manipulation benchmark datasets.

For details on the data labeling process, read the paper: Yilin Wu, Anqi Li, Tucker Hermans, Fabio Ramos, Andrea Bajcsy, Claudia Pérez-D'Arpino, "Do What You Say: Steering Vision-Language-Action Models via Runtime Reasoning-Action Alignment Verification," arXiv, 2025.

Overview

The annotations provide step-by-step reasoning traces that decompose robot manipulation tasks into:

  • Plan: A numbered list of sub-tasks to complete the instruction
  • What I have done: Progress tracking of completed sub-tasks
  • Now I need to do: The immediate next action to take

Repository Structure

libero-r-datasets/
├── libero-10-r/                    # LIBERO-10 dataset with reasoning annotations
│   ├── cot_simple.json             # Chain-of-thought annotations
│   ├── data/chunk-000/             # Episode parquet files
│   ├── meta/                       # Dataset metadata (episodes.jsonl, info.json, stats.json, tasks.jsonl)
│   └── README.md                   # Original dataset README
│
├── libero-100-basket-r/            # Basket-related subset of LIBERO-100
│   ├── cot_simple.json             # Chain-of-thought annotations
│   ├── data/chunk-000/             # Episode parquet files
│   └── meta/                       # Dataset metadata
│
├── libero-100-r/                   # Full LIBERO-100 dataset with reasoning annotations
│   ├── cot_simple.json             # Chain-of-thought annotations
│   ├── data/chunk-000..004/        # Episode parquet files 
│   ├── meta/                       # Dataset metadata
│   └── README.md                   # Original dataset README
│
└── README.md                       # This file
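
A minimal sketch of loading the episode data, assuming the LeRobot-style layout shown above. The exact parquet file name (episode_000000.parquet) and the metadata fields printed here are assumptions; list data/chunk-000/ and inspect meta/info.json in your local copy to confirm.

import json
from pathlib import Path

import pandas as pd  # requires pyarrow or fastparquet for read_parquet

root = Path("libero-10-r")

# Dataset-level metadata (feature names, fps, episode counts, ...)
info = json.loads((root / "meta" / "info.json").read_text())
print(info)

# Per-episode metadata: one JSON object per line
with (root / "meta" / "episodes.jsonl").open() as f:
    episodes = [json.loads(line) for line in f]
print(episodes[0])

# Load one episode's step data from its parquet file
# (file name below is illustrative -- check the chunk directory)
ep = pd.read_parquet(root / "data" / "chunk-000" / "episode_000000.parquet")
print(ep.columns.tolist())
print(len(ep), "steps in this episode")
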

Annotation Format

The cot_simple.json files contain reasoning annotations indexed by episode number. Each episode has multiple segments corresponding to different phases of task execution.

Example Annotation

For an episode with instruction: "put the white mug on the left plate and put the yellow and white mug on the right plate"

Segment at task start (steps 0-12):

Plan: 1. pick up the white mug
      2. place the white mug on the left plate
      3. pick up the yellow and white mug
      4. place the yellow and white mug on the right plate
What I have done: Nothing.
Now I need to do: pick up the white mug

Segment after first sub-task (steps 91-101):

Plan: 1. pick up the white mug
      2. place the white mug on the left plate
      3. pick up the yellow and white mug
      4. place the yellow and white mug on the right plate
What I have done: 1. pick up the white mug
Now I need to do: place the white mug on the left plate

Segment at task completion (steps 291+):

Plan: 1. pick up the white mug
      2. place the white mug on the left plate
      3. pick up the yellow and white mug
      4. place the yellow and white mug on the right plate
What I have done: 1. pick up the white mug
                  2. place the white mug on the left plate
                  3. pick up the yellow and white mug
                  4. place the yellow and white mug on the right plate
Now I need to do: Nothing. Task complete.

JSON Structure

{
  "episode_id": {
    "episode_start_interval": [start_step, end_step],
    "segments": [
      {
        "start_step": 0,
        "end_step": 12,
        "content": "Current reasoning state...",
        "updated_content": "Updated reasoning after this segment...",
        "updated_content_w_instruction": "Full text including instruction..."
      }
    ]
  }
}
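
A minimal sketch of looking up the reasoning annotation that covers a given step of an episode, using the field names from the structure above. The episode key "0" and the step index passed at the end are illustrative values, and whether episode keys are stored as strings is an assumption; inspect your cot_simple.json to confirm.

import json
from pathlib import Path

cot = json.loads(Path("libero-10-r/cot_simple.json").read_text())

def reasoning_at(episode_id: str, step: int) -> str | None:
    """Return the reasoning text for the segment containing `step`."""
    episode = cot[episode_id]
    for segment in episode["segments"]:
        if segment["start_step"] <= step <= segment["end_step"]:
            return segment["content"]
    return None  # step falls outside any annotated segment

# Example: reasoning state at step 5 of episode "0" (illustrative key)
print(reasoning_at("0", 5))
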

Citation

If you use these annotations, please cite:

@article{wu2025saysteeringvisionlanguageactionmodels,
      title={Do What You Say: Steering Vision-Language-Action Models via Runtime Reasoning-Action Alignment Verification}, 
      author={Yilin Wu and Anqi Li and Tucker Hermans and Fabio Ramos and Andrea Bajcsy and Claudia P\'{e}rez-D'Arpino},
      year={2025},
      eprint={2510.16281},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2510.16281}, 
}