---
license: cc-by-4.0
task_categories:
  - robotics
tags:
  - LeRobot
configs:
  - config_name: default
    data_files: data/*/*.parquet
---

# LIBERO-X

This dataset was created using LeRobot (commit 12f5263).

## Dataset Description

Stay tuned for the full release!

LIBERO-X introduces finer-grained, task-level extensions that expose models to diverse task formulations and workspace configurations. It comprises 2,520 demonstrations, 600 tasks, and 100 scenes, supporting broad generalization across diverse scenarios, and features:

- **Multi-Task Scene Design**: Each scene averages 6 distinct tasks, a significant increase over the original LIBERO dataset's average of 2.6 tasks per scene, enabling more complex and realistic multi-objective learning.

- **Attribute-Conditioned Manipulation**: Actions are explicitly conditioned on fine-grained object properties (e.g., size, color, texture) rather than broad categories alone.

- **Spatial Relationship Reasoning**: Tasks go beyond target localization, requiring reasoning about spatial relationships among objects, including left/right, front/back, and near/far.

- **Human Demonstration Collection**: All trajectories were collected by human operators via VR teleoperation using a Meta Quest 3.
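As a quick sanity check of the headline numbers above (a sketch using only the figures stated in this card, not values read from the dataset itself):

```python
# Figures quoted in the description: 2,520 demonstrations, 600 tasks, 100 scenes.
total_demos = 2520
total_tasks = 600
total_scenes = 100

tasks_per_scene = total_tasks / total_scenes  # average tasks per scene
demos_per_task = total_demos / total_tasks    # average demonstrations per task

print(tasks_per_scene)  # 6.0 -- matches the "averages 6 distinct tasks" claim
print(demos_per_task)   # 4.2 demonstrations per task on average
```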

## Dataset Structure

`meta/info.json`:

```json
{
    "codebase_version": "v2.1",
    "robot_type": "panda",
    "total_episodes": 2520,
    "total_frames": 889277,
    "total_tasks": 428,
    "total_videos": 0,
    "total_chunks": 3,
    "chunks_size": 1000,
    "fps": 10,
    "splits": {
        "train": "0:2520"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "image": {
            "dtype": "image",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"]
        },
        "wrist_image": {
            "dtype": "image",
            "shape": [256, 256, 3],
            "names": ["height", "width", "channel"]
        },
        "state": {
            "dtype": "float32",
            "shape": [8],
            "names": ["state"]
        },
        "actions": {
            "dtype": "float32",
            "shape": [7],
            "names": ["actions"]
        },
        "timestamp": {"dtype": "float32", "shape": [1], "names": null},
        "frame_index": {"dtype": "int64", "shape": [1], "names": null},
        "episode_index": {"dtype": "int64", "shape": [1], "names": null},
        "index": {"dtype": "int64", "shape": [1], "names": null},
        "task_index": {"dtype": "int64", "shape": [1], "names": null}
    }
}
```
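A sketch of how these fields fit together, assuming the usual LeRobot v2.1 chunking rule `episode_chunk = episode_index // chunks_size` (consistent with `total_chunks: 3` for 2,520 episodes at 1,000 per chunk):

```python
# Resolve an episode's parquet path from the meta/info.json fields above.
chunks_size = 1000
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"

def episode_parquet_path(episode_index: int) -> str:
    """Relative parquet path for a given episode, per the data_path template."""
    episode_chunk = episode_index // chunks_size
    return data_path.format(episode_chunk=episode_chunk,
                            episode_index=episode_index)

print(episode_parquet_path(0))     # data/chunk-000/episode_000000.parquet
print(episode_parquet_path(2519))  # data/chunk-002/episode_002519.parquet

# Rough scale: 889,277 frames over 2,520 episodes at 10 fps works out to
# ~353 frames (about 35 s) per episode on average.
avg_frames = 889277 / 2520
print(round(avg_frames, 1))        # 352.9
```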

## Citation

```bibtex
@article{wang2026libero,
  title={LIBERO-X: Robustness Litmus for Vision-Language-Action Models},
  author={Wang, Guodong and Zhang, Chenkai and Liu, Qingjie and Zhang, Jinjin and Cai, Jiancheng and Liu, Junjie and Liu, Xinmin},
  journal={arXiv preprint arXiv:2602.06556},
  year={2026}
}
```