YuankaiLuo committed · commit d153a76 · verified · 1 parent: 18fbd48

Update README.md

Files changed (1): README.md (+2 −5)
README.md CHANGED
@@ -17,19 +17,16 @@ base_model:
 
 **Paper:** *Luo et al., 2026, “SimVLA: A Simple VLA Baseline for Robotic Manipulation”* ([arXiv:2602.18224](https://arxiv.org/pdf/2602.18224))
 
+## Overview
 
 ![image](https://cdn-uploads.huggingface.co/production/uploads/68a33c8b55e8a89ca25b2988/T1gf_onZ6_L0vNQ-5Zd_0.png)
 
-## Overview
-
 Vision-Language-Action (VLA) models have emerged as a promising paradigm for general-purpose robotic manipulation, leveraging large-scale pre-training to achieve strong performance. The field has rapidly evolved with additional spatial priors and diverse architectural innovations. However, these advancements are often accompanied by varying training recipes and implementation details, which can make it challenging to disentangle the precise source of empirical gains.
 
 In this work, we introduce SimVLA, a streamlined baseline designed to establish a transparent reference point for VLA research. By strictly decoupling perception from control—using a standard vision-language backbone and a lightweight action head—and standardizing critical training dynamics, we demonstrate that a minimal design can achieve state-of-the-art performance. Despite having only 0.5B parameters, SimVLA outperforms multi-billion-parameter models on standard simulation benchmarks without robot pretraining. SimVLA also reaches on-par real-robot performance compared to π0.5. Our results establish SimVLA as a robust, reproducible baseline that enables clear attribution of empirical gains to future architectural innovations.
 
 **Project website:** [https://frontierrobo.github.io/SimVLA](https://frontierrobo.github.io/SimVLA/)
 
-
----
 ## Citation
 ```bibtex
 @misc{luo2026simvlasimplevlabaseline,
@@ -42,7 +39,7 @@ In this work, we introduce SimVLA, a streamlined baseline designed to establish
   url={https://arxiv.org/abs/2602.18224},
 }
 ```
----
+
 ## Links
 
 - **Paper:** [arXiv:2602.18224](https://arxiv.org/pdf/2602.18224)