william-yue committed · Commit 4844da9 · verified · Parent(s): 6560265

Update README.md
---
library_name: opentau
tags:
- robotics
- vla
- pi05
- libero
- manipulation
- flow-matching
- pytorch
base_model: williamyue/pi05_base
license: apache-2.0
datasets:
- physical-intelligence/libero
repo_url: https://github.com/TensorAuto/OpenTau
---

# pi0.5-libero-finetune

A **pi0.5 (π₀.₅)** Vision-Language-Action (VLA) model, finetuned on the **LIBERO** robotic manipulation benchmark. This model is designed to follow natural language instructions to perform manipulation tasks in a tabletop environment, leveraging the OpenTau framework.

**For full documentation, evaluation results, and inference code, please visit the repository:**
<br>
👉 **[https://github.com/TensorAuto/OpenTau](https://github.com/TensorAuto/OpenTau)**

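As a rough illustration of how such a model is driven at inference time, the sketch below shows an observation → instruction → action-chunk loop. Every name here (`StubPolicy`, `predict_action_chunk`, the 7-dim action space, the 224×224 frame size) is hypothetical and is **not** the OpenTau API; see the repository above for the real inference code.

```python
# Hypothetical control loop for a language-conditioned VLA policy.
# None of these names come from OpenTau; this only illustrates the
# observation -> instruction -> action-chunk cycle described in this card.
import numpy as np

class StubPolicy:
    """Stand-in for the finetuned pi0.5 checkpoint (hypothetical interface)."""
    def predict_action_chunk(self, image, instruction):
        # Real model: the VLM encodes image + text, the action expert
        # decodes a 50-step chunk of continuous actions.
        return np.zeros((50, 7))  # 50 timesteps x action dim (7 is assumed)

policy = StubPolicy()
image = np.zeros((224, 224, 3), dtype=np.uint8)  # camera frame (size assumed)
chunk = policy.predict_action_chunk(image, "put the bowl on the plate")
for action in chunk[:10]:  # often only a prefix is executed before replanning
    pass  # send `action` to the robot controller here
print(chunk.shape)  # (50, 7)
```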
---

## Model Details

### Description
- **Model Type:** Vision-Language-Action (VLA) Model
- **Base Architecture:** π₀.₅ (pi0.5) by Physical Intelligence
- **Backbone:** PaliGemma-2B (VLM) + Gemma-300M (Action Expert)
- **Training Data:** LIBERO (Lifelong Robot Learning) Benchmark
- **Framework:** OpenTau

### Architecture
The pi0.5 architecture uses a flow-matching-based policy designed for open-world generalization. It combines a Vision-Language Model (VLM) for high-level semantic understanding with a smaller "action expert" model that generates continuous joint trajectories (50-step action chunks) via flow matching.
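The sampling side of flow matching can be sketched as Euler integration of a learned velocity field from noise to an action chunk. This is a minimal toy, not the OpenTau implementation: `VelocityNet` stands in for the real action expert (which also conditions on VLM features), and the 7-dim action space and step count are assumptions.

```python
# Toy sketch of flow-matching action sampling (not the OpenTau code):
# starting from Gaussian noise, integrate da/dt = v(a, t) from t=0 to t=1
# to produce a 50-step chunk of continuous actions.
import torch

CHUNK_LEN, ACTION_DIM = 50, 7  # 50-step chunks per the card; 7-DoF is assumed

class VelocityNet(torch.nn.Module):
    """Toy velocity field v(a, t); the real expert also takes VLM features."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(ACTION_DIM + 1, ACTION_DIM)

    def forward(self, a, t):
        t_feat = torch.full_like(a[..., :1], t)  # broadcast time to each step
        return self.net(torch.cat([a, t_feat], dim=-1))

@torch.no_grad()
def sample_action_chunk(model, n_steps=10):
    a = torch.randn(1, CHUNK_LEN, ACTION_DIM)  # start from pure noise
    dt = 1.0 / n_steps
    for i in range(n_steps):  # forward Euler integration of the ODE
        a = a + dt * model(a, i * dt)
    return a  # (1, 50, ACTION_DIM) action chunk

chunk = sample_action_chunk(VelocityNet())
print(chunk.shape)  # torch.Size([1, 50, 7])
```

A trained model's velocity field would have been fit so that this integration transports noise samples onto the demonstration action distribution; with the untrained toy network the output is meaningless but shape-correct.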

---

## Training and Evaluation

### Dataset
This model was finetuned on the **LIBERO** benchmark dataset. The LIBERO suite consists of human-teleoperated demonstrations for tabletop manipulation, covering:
- **Spatial Generalization** (libero_spatial)
- **Object Generalization** (libero_object)
- **Goal Generalization** (libero_goal)
- **Long-Horizon Tasks** (libero_100)

### Results
For detailed usage instructions, success rates, baseline comparisons, and evaluation protocols, please refer to the [OpenTau GitHub Repository](https://github.com/TensorAuto/OpenTau).