**LIBERO** is a benchmark designed to study **lifelong robot learning**. The idea is that robots won't just be pretrained once in a factory; they'll need to keep learning and adapting with their human users over time. This ongoing adaptation is called **lifelong learning in decision making (LLDM)**, and it's a key step toward building robots that become truly personalized helpers.

- 📄 [LIBERO paper](https://arxiv.org/abs/2306.03310)
- 💻 [Original LIBERO repo](https://github.com/Lifelong-Robot-Learning/LIBERO)

To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on **knowledge transfer**: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other's work.

LIBERO includes **five task suites**:

- **LIBERO-Spatial (`libero_spatial`)** – tasks that require reasoning about spatial relations.
- **LIBERO-Object (`libero_object`)** – tasks centered on manipulating different objects.
- **LIBERO-Goal (`libero_goal`)** – goal-conditioned tasks where the robot must adapt to changing targets.
- **LIBERO-90 (`libero_90`)** – 90 short-horizon tasks from the LIBERO-100 collection.
- **LIBERO-Long (`libero_10`)** – 10 long-horizon tasks from the LIBERO-100 collection.

Together, these suites cover **130 tasks**, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.

At **LeRobot**, we ported [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) into our framework and used it mainly to **evaluate [SmolVLA](https://huggingface.co/docs/lerobot/en/smolvla)**, our lightweight Vision-Language-Action model.

LIBERO is now part of our **multi-eval simulation support**, meaning you can benchmark your policies either on a **single suite of tasks** or across **multiple suites at once** with a single flag.

To install LIBERO, after following the official LeRobot installation instructions, run:

```bash
pip install -e ".[libero]"
```

Evaluate a policy on one LIBERO suite:

```bash
# The policy and evaluation sizes below are illustrative; adjust to your setup.
lerobot-eval \
    --policy.path=lerobot/smolvla_base \
    --env.type=libero \
    --env.task=libero_spatial \
    --eval.batch_size=2 \
    --eval.n_episodes=10
```

- `--policy.path` – a local checkpoint directory or a Hugging Face Hub model id.
- `--env.type` – set to `libero` to select the LIBERO environments.
- `--env.task` – the suite to evaluate (`libero_spatial`, `libero_object`, `libero_goal`, `libero_90`, or `libero_10`).

Benchmark a policy across multiple suites at once:

```bash
lerobot-eval \
    --policy.path=lerobot/smolvla_base \
    --env.type=libero \
    --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
    --eval.batch_size=2 \
    --eval.n_episodes=10
```

- Pass a comma-separated list to `--env.task` to evaluate on several suites in a single run.

When using LIBERO through LeRobot, policies interact with the environment via **observations** and **actions**:

- **Observations**
  - `observation.state` – proprioceptive features (agent state).
  - `observation.images.image` – main camera view (`agentview_image`).
  - `observation.images.image2` – wrist camera view (`robot0_eye_in_hand_image`).

⚠️ **Note:** LeRobot enforces the `.images.*` prefix for all multi-modal visual features. Make sure your policy config's `input_features` and your dataset metadata keys follow this naming convention during evaluation.

If your data contains different keys, you must rename the observations to match what the policy expects, since the key names are encoded inside the normalization statistics layer. This will be fixed by the upcoming Pipeline PR.
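
As a sketch, such a renaming step can be as simple as a key map applied to each observation. The raw names below are the LIBERO defaults listed above; the mapping itself is a hypothetical helper you would adapt to your own data:

```python
# Hypothetical helper: map raw LIBERO observation keys to the LeRobot convention.
KEY_MAP = {
    "agentview_image": "observation.images.image",
    "robot0_eye_in_hand_image": "observation.images.image2",
    "state": "observation.state",
}

def rename_observation(raw_obs: dict) -> dict:
    """Return a copy of raw_obs with keys renamed; unknown keys pass through."""
    return {KEY_MAP.get(key, key): value for key, value in raw_obs.items()}

obs = rename_observation({"agentview_image": None, "state": None})
assert set(obs) == {"observation.images.image", "observation.state"}
```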

- **Actions**
  - Continuous control values in a `Box(-1, 1, shape=(7,))` space.
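
Putting the two together, a single control step looks roughly like this. The image resolution and the random stand-in policy are illustrative assumptions, not LIBERO defaults:

```python
import numpy as np

# Observation keys follow the LeRobot convention documented above;
# the 256x256 image resolution is an illustrative assumption.
observation = {
    "observation.state": np.zeros(8, dtype=np.float32),
    "observation.images.image": np.zeros((256, 256, 3), dtype=np.uint8),
    "observation.images.image2": np.zeros((256, 256, 3), dtype=np.uint8),
}

def random_policy(obs: dict) -> np.ndarray:
    """Stand-in for a real policy: returns a 7-dim action in the Box(-1, 1) space."""
    return np.clip(np.random.randn(7).astype(np.float32), -1.0, 1.0)

action = random_policy(observation)
assert action.shape == (7,)
assert np.all(action >= -1.0) and np.all(action <= 1.0)
```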

We also provide a notebook for quick testing:

## Training with LIBERO

When training on LIBERO tasks, make sure your dataset's parquet and metadata keys follow the LeRobot naming convention.

The environment expects:

- `observation.state` – 8-dim agent state
- `observation.images.image` – main camera (`agentview_image`)
- `observation.images.image2` – wrist camera (`robot0_eye_in_hand_image`)
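
Before launching training, it can be worth sanity-checking your feature keys against this convention. A small stand-alone sketch, where the `features` dict is a hypothetical stand-in for your dataset metadata:

```python
# Expected feature keys per the LeRobot convention described above.
EXPECTED = {
    "observation.state",
    "observation.images.image",
    "observation.images.image2",
    "action",
}

def check_features(features: dict) -> list:
    """Return the expected keys missing from a dataset's feature metadata."""
    return sorted(EXPECTED - set(features))

# Hypothetical metadata with a mis-named camera key:
features = {
    "observation.state": {},
    "agentview_image": {},  # should be "observation.images.image"
    "action": {},
}
missing = check_features(features)
```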

⚠️ Cleaning the dataset upfront is **cleaner and more efficient** than remapping keys inside the code.

To avoid potential mismatches and key errors, we provide a **preprocessed LIBERO dataset** that is fully compatible with the current LeRobot codebase and requires no additional manipulation:

🔗 [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero)

For reference, here is the **original dataset** published by Physical Intelligence:

🔗 [physical-intelligence/libero](https://huggingface.co/datasets/physical-intelligence/libero)

You can then launch training with, for example:

```bash
# Policy, steps, and batch size below are illustrative; adjust to your setup.
lerobot-train \
    --policy.path=lerobot/smolvla_base \
    --dataset.repo_id=HuggingFaceVLA/libero \
    --env.type=libero \
    --env.task=libero_10 \
    --output_dir=./outputs/train/smolvla_libero \
    --batch_size=64 \
    --steps=100000
```

LeRobot uses MuJoCo for simulation. You need to set the rendering backend before training or evaluation:

- `export MUJOCO_GL=egl` – for headless servers (e.g. HPC, cloud)
- `export MUJOCO_GL=osmesa` – CPU-only software rendering fallback
- `export MUJOCO_GL=glfw` – for local machines with a display
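
If you prefer setting the backend from Python rather than the shell, do it before MuJoCo is first imported, since `MUJOCO_GL` is read at import time. A minimal sketch:

```python
import os

# Must run before `import mujoco` (or anything that imports it, e.g. the env).
# "egl" targets headless GPU rendering; setdefault keeps any value already
# exported in the shell.
os.environ.setdefault("MUJOCO_GL", "egl")
```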

We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (`pi05_libero`) and finetune it for an additional 6k steps in bfloat16, with a batch size of 256, on 8 H100 GPUs using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).

The finetuned model can be found here:

- **π₀.₅ LIBERO**: [lerobot/pi05_libero_finetuned](https://huggingface.co/lerobot/pi05_libero_finetuned)

We then evaluate the finetuned model using the LeRobot LIBERO implementation by running the following command:

```bash
# Episode count and batch size are illustrative; adjust to your setup.
lerobot-eval \
    --policy.path=lerobot/pi05_libero_finetuned \
    --env.type=libero \
    --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
    --eval.batch_size=1 \
    --eval.n_episodes=50
```

**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.

We obtain the following results on the LIBERO benchmark:

| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
| --- | --- | --- | --- | --- | --- |
| **π₀.₅** | 97.0 | 99.0 | 98.0 | 96.0 | **97.5** |

These results are consistent with the original [results](https://github.com/Physical-Intelligence/openpi/tree/main/examples/libero) reported by Physical Intelligence:

| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
| --- | --- | --- | --- | --- | --- |
| **π₀.₅** | 98.8 | 98.2 | 98.0 | 92.4 | **96.85** |