---
license: apache-2.0
task_categories:
  - robotics
tags:
  - LeRobot
configs:
  - config_name: default
    data_files: data/*/*.parquet
---

This dataset was created using LeRobot.

# Dataset Card for Smol-LIBERO

## Dataset Summary

Smol-LIBERO is a compact version of the LIBERO benchmark, built to make experimentation fast and accessible.
At just 1.79 GB (compared to ~34 GB for the full LIBERO), it contains fewer trajectories and cameras while keeping the same multimodal structure.

Each sample includes:

- Images from two fixed cameras
- Two types of robot state (end-effector pose + gripper, and full 7-DoF joint positions)
- Actions (7-DoF joint commands)

This setup is especially useful for comparing low-dimensional state inputs with high-dimensional visual inputs, or combining them in multimodal training.
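To make the size gap between the two input types concrete, here is an illustrative sketch (not part of the dataset API) using the shapes documented in the Data Fields section of this card:

```python
import numpy as np

# Illustrative stand-ins with the shapes documented on this card.
image = np.zeros((256, 256, 3), dtype=np.uint8)  # one camera frame
state = np.zeros(8, dtype=np.float32)            # eef pose + gripper

# Low-dimensional input: 8 values per step. Visual input: 256*256*3 = 196,608
# values per camera. A simple multimodal baseline flattens, normalizes, and
# concatenates the two:
fused = np.concatenate([image.reshape(-1).astype(np.float32) / 255.0, state])
print(fused.shape)  # (196616,)
```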


## Dataset Structure

### Data Fields

- `observation.images.image`: 256×256×3 RGB image (camera 1)
- `observation.images.image2`: 256×256×3 RGB image (camera 2)
- `observation.state` (8 floats): end-effector Cartesian pose + gripper
  `[x, y, z, roll, pitch, yaw, gripper, gripper]`
- `observation.state.joint` (7 floats): full joint angles
  `[joint_1, …, joint_7]`
- `action` (7 floats): target joint commands
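A minimal schema check against the field names and sizes listed above. `sample` is a hand-built stand-in; with the real dataset it would be `ds[0]` after loading (and images may arrive as PIL images rather than NumPy arrays):

```python
import numpy as np

# Stand-in sample mirroring the documented schema (not pulled from the data).
sample = {
    "observation.images.image":  np.zeros((256, 256, 3), dtype=np.uint8),
    "observation.images.image2": np.zeros((256, 256, 3), dtype=np.uint8),
    "observation.state":         np.zeros(8, dtype=np.float32),
    "observation.state.joint":   np.zeros(7, dtype=np.float32),
    "action":                    np.zeros(7, dtype=np.float32),
}

assert sample["observation.images.image"].shape == (256, 256, 3)   # camera 1
assert sample["observation.images.image2"].shape == (256, 256, 3)  # camera 2
assert sample["observation.state"].shape == (8,)   # eef pose + gripper
assert sample["observation.state.joint"].shape == (7,)  # joint angles
assert sample["action"].shape == (7,)              # joint commands
print("schema OK")
```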

## Why is it smaller than LIBERO?

- Fewer trajectories/tasks → subset of the full benchmark
- Only two camera views → reduced visual redundancy
- Reduced total frames → shorter episodes or lower FPS

Together, these reductions bring Smol-LIBERO down to 1.79 GB, versus ~34 GB for the full benchmark.


## Intended Uses

- Quick prototyping and debugging
- Comparing joint-space vs. Cartesian state inputs
- Training small VLA baselines before scaling to LIBERO

## Limitations

- Smaller task and visual diversity compared to LIBERO
- Only two fixed camera views
- May not fully represent generalization behavior on larger benchmarks

## Citation

BibTeX:

[More Information Needed]