Files changed (1)
  1. README.md +42 -9
README.md CHANGED
@@ -1,24 +1,57 @@
  <h1 align="center" style="font-size: 75px; font-weight: bold; margin-top: 30px;">
- 📊 SafeLIBERO Benchmark <br>
- <span style="font-size: 20px;"> <a href="https://vlsa-aegis.github.io/benchmark.html"><img src="https://img.shields.io/badge/-Detailed_Overview-3776AB?logo=readthedocs&logoColor=white" alt="Detailed Overview" height="25"></a>
- <a href="https://vlsa-aegis.github.io/"><img src="https://img.shields.io/badge/-Video_Demos-FF0000?logo=youtube&logoColor=white" alt="Video Demos" height="25"></a>
- </span>
  </h1>
  <p align="center">
- <img src="https://github.com/songqiaohu/pictureandgif/blob/main/safelibero_overview.png?raw=true" alt="overview" width="600">
- </p>

- **SafeLIBERO** is a benchmark desinged to evaluate model performance in complex environments. It extends each LIBERO suite by selecting **four representative tasks**, with each task further divided into two scenarios with different safety levels, determined by the degree of interference introduced by an added obstacle: **Level I**: Scenarios where the obstacle is positioned in close proximity to the target object; **Level II**: Scenarios where the obstacle is located further away but obstructs the movement. It is worth noting that for some tasks, the distinction between these two intervention levels may be less obvious. Within each scenario, the positions of obstacles and other objects are randomized within a small range over 50 episodes to ensure robustness and diversity. A diverse set of everyday objects is used as obstacles, including **moka pots**, **storage boxes**, **milk cartons**, **wine bottles**, **mugs**, and **books**. Overall, SafeLIBERO consists of **4 suites comprising 16 tasks and 32 scenarios**, resulting in a total of **1,600 evaluation episodes**.

  ---

- ### 📚 Contents
  - [Benchmark Tasks](#-benchmark-tasks)
  - [Installation](#-installation)
  - [Running Evaluation](#-running-evaluation)
  - [Automated Collision Check](#-automated-collision-check)
  - [Scene Generation Logic](#-scene-generation-logic)
- - [Publications Using this Benchmark](#-publications-using-this-benchmark)
  ---

  ### 📝 Benchmark Tasks
+ ---
+ license: mit
+ tags:
+ - robotics
+ - reinforcement-learning
+ - safety
+ - benchmark
+ - libero
+ - simulation
+ pretty_name: SafeLIBERO
+ ---
+
  <h1 align="center" style="font-size: 75px; font-weight: bold; margin-top: 30px;">
+ 📊 SafeLIBERO Benchmark
  </h1>
+
+ <div align="center">
+ <a href="https://vlsa-aegis.github.io/benchmark.html"><img src="https://img.shields.io/badge/-Detailed_Overview-3776AB?logo=readthedocs&logoColor=white" alt="Detailed Overview" height="25"></a>
+ <a href="https://vlsa-aegis.github.io/"><img src="https://img.shields.io/badge/-Video_Demos-FF0000?logo=youtube&logoColor=white" alt="Video Demos" height="25"></a>
+ </div>
+
+ <br>
+
  <p align="center">
+ <img src="https://github.com/songqiaohu/pictureandgif/blob/main/safelibero_overview.png?raw=true" alt="SafeLIBERO Overview" width="800">
+ </p>

+ ## 📖 Overview
+
+ **SafeLIBERO** is a benchmark designed to evaluate robotic model performance in complex, safety-critical environments. It extends each LIBERO suite by selecting **four representative tasks**, with each task further divided into two scenarios of differing safety levels, determined by the degree of obstacle interference:
+
+ * **Level I**: Scenarios where the obstacle is positioned in **close proximity** to the target object.
+ * **Level II**: Scenarios where the obstacle is located further away but **obstructs the movement path**.
+
+ > [!NOTE]
+ > For some tasks, the distinction between these two intervention levels may be subtle.
+
+ **Key Features:**
+ * **Randomization:** Within each scenario, obstacle and object positions are randomized within a small range over **50 episodes** to ensure robustness.
+ * **Diverse Obstacles:** Includes everyday objects such as **moka pots, storage boxes, milk cartons, wine bottles, mugs, and books**.
+ * **Scale:** Consists of **4 suites**, **16 tasks**, and **32 scenarios**, totaling **1,600 evaluation episodes**.
 
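The scale figures listed above multiply out directly from the suite/task/scenario/episode counts; a quick illustrative sanity check (not part of the benchmark code):

```python
# Scale of SafeLIBERO as stated in the overview:
# 4 suites x 4 representative tasks each      -> 16 tasks
# each task has 2 safety levels (I and II)    -> 32 scenarios
# each scenario runs 50 randomized episodes   -> 1,600 evaluation episodes
suites = 4
tasks_per_suite = 4
safety_levels = 2           # Level I and Level II
episodes_per_scenario = 50  # randomized object/obstacle placements

tasks = suites * tasks_per_suite
scenarios = tasks * safety_levels
episodes = scenarios * episodes_per_scenario

print(tasks, scenarios, episodes)  # 16 32 1600
```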
  ---
+ ## 📚 Contents
+
  - [Benchmark Tasks](#-benchmark-tasks)
  - [Installation](#-installation)
  - [Running Evaluation](#-running-evaluation)
  - [Automated Collision Check](#-automated-collision-check)
  - [Scene Generation Logic](#-scene-generation-logic)
+ - [Publications](#-publications-using-this-benchmark)
+ - [Citation](#-citation)
+
  ---

  ### 📝 Benchmark Tasks