---
library_name: opentau
tags:
- robotics
- vla
- pi05
- robocasa
- manipulation
- flow-matching
- pytorch
base_model: williamyue/pi05_base
license: apache-2.0
datasets:
- robocasa/NavigateKitchen
repo_url: https://github.com/TensorAuto/OpenTau
---

# RoboCasa NavigateKitchen

A **pi0.5 (π₀.₅)** Vision-Language-Action (VLA) model finetuned on the **RoboCasa** robotic manipulation and navigation benchmark using the **OpenTau** training framework. The model follows natural-language instructions to perform navigation tasks in a simulated kitchen environment.

**For full documentation, evaluation results, and inference code, please visit the repository:**
<br>
👉 **[https://github.com/TensorAuto/OpenTau](https://github.com/TensorAuto/OpenTau)**

---

## Model Details

### Description
- **Model Type:** Vision-Language-Action (VLA) model
- **Base Architecture:** π₀.₅ (pi0.5) by Physical Intelligence
- **Backbone:** PaliGemma-3B (VLM) + Gemma-300M (action expert)
- **Training Data:** RoboCasa benchmark
- **Framework:** OpenTau

### Architecture
The pi0.5 architecture uses a flow-matching-based policy designed for open-world generalization. It combines a Vision-Language Model (VLM) for high-level semantic understanding with a smaller "action expert" that generates continuous joint trajectories (10-step action chunks) via flow matching.

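To make the flow-matching step concrete, here is a minimal sketch of how a chunk of actions is produced by integrating a learned velocity field from noise. This is an illustrative toy, not the OpenTau API: the `velocity_field` stand-in, the 7-DoF action dimension, and the context shape are all assumptions.

```python
import numpy as np

# Hypothetical sketch of flow-matching action generation (NOT OpenTau code).
# A learned velocity field v(a, t, ctx) is integrated from t=0 (Gaussian
# noise) to t=1 to produce a chunk of H = 10 continuous actions.

H, ACTION_DIM = 10, 7   # 10-step action chunk; 7-DoF action space (assumed)
N_STEPS = 10            # Euler integration steps

def velocity_field(actions, t, context):
    # Stand-in for the learned action expert: a fixed pull toward a
    # context-derived target, just to make the integration loop runnable.
    target = context[:, :actions.shape[1]]
    return target - actions

def sample_action_chunk(context, n_steps=N_STEPS):
    """Integrate the flow ODE da/dt = v(a, t, ctx) from noise to actions."""
    rng = np.random.default_rng(0)
    actions = rng.standard_normal((H, ACTION_DIM))  # a_0 ~ N(0, I)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        actions = actions + dt * velocity_field(actions, i * dt, context)
    return actions

ctx = np.zeros((H, 16))   # dummy placeholder for the VLM context embedding
chunk = sample_action_chunk(ctx)
print(chunk.shape)        # (10, 7)
```

In the real model the velocity field is the Gemma-300M action expert conditioned on the VLM's representation of the image and instruction; only the outer integration loop follows this shape.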
---

## Training and Evaluation

### Dataset
This model was finetuned on the **RoboCasa** benchmark. The RoboCasa suite consists of human-teleoperated and MimicGen-generated demonstrations for manipulation and navigation; this model covers:
- **NavigateKitchen** (Atomic)

### Results
Trained on 100 human demonstrations, this model achieves a **97%** success rate on the NavigateKitchen task.
For detailed usage instructions, success rates, baseline comparisons, and evaluation protocols, please refer to the [OpenTau GitHub Repository](https://github.com/TensorAuto/OpenTau).
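
Success rates like the one above come from rollout-based evaluation. A minimal sketch of such a loop, assuming a Gym-style `reset`/`step` interface; `DummyEnv` and the trivial policy are toy stand-ins, not OpenTau or RoboCasa APIs:

```python
# Hypothetical success-rate evaluation loop (NOT the OpenTau protocol).

def evaluate(policy, env, n_episodes=100, max_steps=500):
    """Run rollouts and return the fraction that end in success."""
    successes = 0
    for _ in range(n_episodes):
        obs, done, steps = env.reset(), False, 0
        success = False
        while not done and steps < max_steps:
            obs, done, success = env.step(policy(obs))
            steps += 1
        successes += int(success)
    return successes / n_episodes

class DummyEnv:
    """Toy navigation task: reach position `goal` by moving forward."""
    def __init__(self, goal=5):
        self.goal, self.pos = goal, 0
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += action
        done = self.pos >= self.goal
        return self.pos, done, done   # obs, done, success

rate = evaluate(lambda obs: 1, DummyEnv(), n_episodes=10, max_steps=20)
print(rate)  # 1.0
```

The actual evaluation protocol (episode counts, time limits, success criteria) is defined in the OpenTau repository linked above.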