william-yue committed on
Commit bcda19c · verified · 1 Parent(s): dbc69a4

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
```diff
@@ -35,7 +35,7 @@ A **pi0.5 (π₀.₅)** Vision-Language-Action (VLA) model, finetuned on the **L
 - **Framework:** OpenTau
 
 ### Architecture
-The pi0.5 architecture uses a flow-matching-based policy designed for open-world generalization. It combines a Visual Language Model (VLM) for high-level semantic understanding with a smaller "action expert" model that generates continuous joint trajectories (50-step action chunks) via flow matching.
+The pi0.5 architecture uses a flow-matching-based policy designed for open-world generalization. It combines a Visual Language Model (VLM) for high-level semantic understanding with a smaller "action expert" model that generates continuous joint trajectories (10-step action chunks) via flow matching.
 
 ---
 
@@ -46,7 +46,7 @@ This model was finetuned on the **LIBERO** benchmark dataset. The LIBERO suite c
 - **Spatial Generalization** (libero_spatial)
 - **Object Generalization** (libero_object)
 - **Goal Generalization** (libero_goal)
-- **Long-Horizon Tasks** (libero_100)
+- **Long-Horizon Tasks** (libero_10)
 
 ### Results
 For detailed usage instructions, success rates, baseline comparisons, and evaluation protocols, please refer to the [OpenTau GitHub Repository](https://github.com/TensorAuto/OpenTau).
```
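The corrected "10-step action chunks" refers to a chunk of 10 consecutive actions that the flow-matching action expert denoises in one inference pass. As a rough illustration only (not the OpenTau or pi0.5 API — all function and variable names here are hypothetical), a flow-matching policy integrates a learned velocity field from Gaussian noise to an action chunk via Euler steps:

```python
import numpy as np

CHUNK_LEN = 10    # actions per chunk, matching the corrected README
ACTION_DIM = 7    # e.g. joint targets for a 7-DoF arm (assumption)

def velocity_field(actions, t, obs_embedding):
    """Stand-in for the action expert's learned velocity prediction.

    A real model would condition on VLM features and the flow time t;
    here we simply pull the sample toward a fixed target derived from
    the observation so the sketch stays runnable.
    """
    target = np.tile(obs_embedding[:ACTION_DIM], (CHUNK_LEN, 1))
    return target - actions

def sample_action_chunk(obs_embedding, num_integration_steps=10, seed=0):
    """Denoise a (CHUNK_LEN, ACTION_DIM) chunk from noise with Euler steps."""
    rng = np.random.default_rng(seed)
    actions = rng.standard_normal((CHUNK_LEN, ACTION_DIM))  # t = 0: pure noise
    dt = 1.0 / num_integration_steps
    for i in range(num_integration_steps):
        t = i * dt
        actions = actions + dt * velocity_field(actions, t, obs_embedding)
    return actions  # t = 1: the executable action chunk

obs = np.linspace(0.0, 1.0, 16)  # fake observation embedding
chunk = sample_action_chunk(obs)
print(chunk.shape)  # (10, 7)
```

The robot would execute these 10 actions (or a prefix of them) before querying the policy again, which is what makes chunked flow-matching policies cheaper to run than per-step action prediction.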