---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- LIBERO
- manipulation
- robotics-simulation
configs:
- config_name: default
  data_files: data/*.parquet
---
# LIBERO_OBJECT Dataset
This is a subset of the `lerobot/libero` dataset containing only the LIBERO_OBJECT tasks.

**Note:** This dataset contains ~25% of the original `lerobot/libero` dataset (10 of 40 tasks), making it:
- Faster to download (~500MB instead of ~2GB)
- Quicker to train on
- More suitable for quick experimentation
## Dataset Description

LIBERO_OBJECT is part of the LIBERO benchmark, a simulation benchmark for robot learning. It consists of 10 long-horizon manipulation tasks that require:
- Multi-step reasoning
- Object manipulation
- Scene understanding
- Tool use
## Tasks Included
This dataset contains tasks 0-9 from the original dataset:
- Put the white mug on the left plate and put the chocolate pudding in the top drawer
- Put the white mug on the plate and put the chocolate pudding in the top drawer
- Put the yellow and white mug in the microwave and put the moka pot on the stove
- Turn on the stove and put the moka pot on it
- Put both the alphabet soup and the cream cheese in the basket
- Put both the alphabet soup and the tomato sauce in the basket
- Put both moka pots on the stove
- Put both the cream cheese box and the butter in the bottom drawer of the cabinet
- Put the black bowl in the bottom drawer of the cabinet
- Pick up the book and place it in the back compartment of the cabinet
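Since `task_index` values map positionally onto the list above, a small lookup table can translate indices into language instructions. This mapping is an assumption based on the ordering shown here, not an official artifact of the dataset:

```python
# Hypothetical index-to-instruction mapping, assuming task_index follows
# the order of the task list above (indices 0-9).
TASK_DESCRIPTIONS = {
    0: "Put the white mug on the left plate and put the chocolate pudding in the top drawer",
    1: "Put the white mug on the plate and put the chocolate pudding in the top drawer",
    2: "Put the yellow and white mug in the microwave and put the moka pot on the stove",
    3: "Turn on the stove and put the moka pot on it",
    4: "Put both the alphabet soup and the cream cheese in the basket",
    5: "Put both the alphabet soup and the tomato sauce in the basket",
    6: "Put both moka pots on the stove",
    7: "Put both the cream cheese box and the butter in the bottom drawer of the cabinet",
    8: "Put the black bowl in the bottom drawer of the cabinet",
    9: "Pick up the book and place it in the back compartment of the cabinet",
}

def describe(task_index: int) -> str:
    """Return the language instruction for a given task_index."""
    return TASK_DESCRIPTIONS[task_index]
```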
## Dataset Structure

- **Total Episodes:** 379
- **Total Frames:** 101,469
- **Total Tasks:** 10
- **Robot:** Franka Panda
- **Resolution:** 256x256 RGB
- **FPS:** 10
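From the totals above, episodes average roughly 268 frames, or about 27 seconds of interaction at 10 FPS:

```python
# Derived from the dataset statistics listed above.
TOTAL_EPISODES = 379
TOTAL_FRAMES = 101_469
FPS = 10

frames_per_episode = TOTAL_FRAMES / TOTAL_EPISODES  # ~267.7 frames
seconds_per_episode = frames_per_episode / FPS      # ~26.8 seconds
```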
## Features
- `observation.state`: robot joint state (8 DoF)
- `observation.images.image`: main camera view (video)
- `observation.images.image2`: secondary camera view (video)
- `action`: 7-DoF robot actions
- `task_index`: task ID (0-9 for LIBERO_OBJECT)
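The feature shapes above can be sketched as a synthetic per-frame sample. This is purely illustrative (real samples come from the dataset loaders below, and their exact dtypes may differ):

```python
import numpy as np

# Illustrative per-frame sample matching the documented feature shapes:
# 8-DoF state, 7-DoF action, two 256x256 RGB camera views, and a task index.
sample = {
    "observation.state": np.zeros(8, dtype=np.float32),              # joint state
    "observation.images.image": np.zeros((256, 256, 3), np.uint8),   # main camera
    "observation.images.image2": np.zeros((256, 256, 3), np.uint8),  # secondary camera
    "action": np.zeros(7, dtype=np.float32),                         # robot action
    "task_index": 0,                                                 # 0-9
}
```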
## Usage

### Using LeRobot (Recommended for Robotics)
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset with video support
ds = LeRobotDataset("ases200q2/libero_object")

# Access frames with decoded video
for i in range(len(ds)):
    sample = ds[i]
    # sample['observation.images.image'] - decoded camera frame
    # sample['observation.state'] - robot state
    # sample['action'] - action
```
### Using Hugging Face Datasets
```python
from datasets import load_dataset

# Load the dataset
ds = load_dataset("ases200q2/libero_object")
train_data = ds["train"]

# Access individual frames
for frame in train_data:
    state = frame["observation.state"]  # robot joint state
    action = frame["action"]            # robot action

# Note: videos are stored externally as MP4 files;
# use LeRobotDataset for automatic video loading.
```
### For Training with LeRobot
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from torch.utils.data import DataLoader

# Load dataset
dataset = LeRobotDataset("ases200q2/libero_object")

# Create dataloader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Training loop
for batch in dataloader:
    # batch contains observations and actions
    pass
```
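The loop above can be fleshed out into a behavior-cloning step. The sketch below uses a synthetic batch shaped like the dataset's state and action features; the tiny MLP and MSE objective are illustrative assumptions, not a recommended policy architecture:

```python
import torch
import torch.nn as nn

# Tiny policy mapping the 8-DoF state to a 7-DoF action (illustrative only).
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 7))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Synthetic batch standing in for one dataloader batch.
batch = {
    "observation.state": torch.randn(32, 8),
    "action": torch.randn(32, 7),
}

# One behavior-cloning step: regress predicted actions onto recorded ones.
pred = policy(batch["observation.state"])
loss = nn.functional.mse_loss(pred, batch["action"])
optimizer.zero_grad()
loss.backward()
optimizer.step()
```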
## Citation
If you use this dataset, please cite the original LIBERO paper:
```bibtex
@article{liu2023libero,
  title={LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
  author={Liu, Bo and Zhu, Yifeng and Gao, Chongkai and Feng, Yihao and Liu, Qiang and Zhu, Yuke and Stone, Peter},
  journal={arXiv preprint arXiv:2306.03310},
  year={2023}
}
```
## License
Apache 2.0