# A Pragmatic VLA Foundation Model
<p align="center">
<img src="assets/Teaser.png" width="100%">
</p>
**LingBot-VLA** is built with a focus on being **pragmatic**:
- **Large-scale Pre-training Data**: 20,000 hours of real-world data from 9 popular dual-arm robot configurations.
- **Strong Performance**: Achieves clear superiority over competitors on both simulation and real-world benchmarks.
- **Training Efficiency**: Delivers a 1.5–2.8× speedup over existing VLA-oriented codebases (depending on the underlying VLM base model).
---
## Model Sources
- Repository: https://github.com/robbyant/lingbot-vla
- Paper: A Pragmatic VLA Foundation Model
- Project Page: https://technology.robbyant.com/lingbot-vla
## Related Models
| Model Name | Hugging Face | ModelScope | Description |
| :--- | :---: | :---: | :---: |
| LingBot-VLA-4B | [🤗 lingbot-vla-4b](https://huggingface.co/robbyant/lingbot-vla-4b) | [🤖 lingbot-vla-4b](https://modelscope.cn/models/Robbyant/lingbot-vla-4b) | LingBot-VLA *w/o* depth |
| LingBot-VLA-4B-Depth | [🤗 lingbot-vla-4b-depth](https://huggingface.co/robbyant/lingbot-vla-4b-depth) | [🤖 lingbot-vla-4b-depth](https://modelscope.cn/models/Robbyant/lingbot-vla-4b-depth) | LingBot-VLA *w/* depth |
---
## Citation
```bibtex
@article{wu2026pragmatic,
  title={A Pragmatic VLA Foundation Model},
  author={Wei Wu and Fan Lu and Yunnan Wang and Shuai Yang and Shi Liu and Fangjing Wang and Shuailei Ma and He Sun and Yong Wang and Zhenqi Qiu and Houlong Xiong and Ziyu Wang and Shuai Zhou and Yiyu Ren and Kejia Zhang and Hui Yu and Jingmei Zhao and Qian Zhu and Ran Cheng and Yong-Lu Li and Yongtao Huang and Xing Zhu and Yujun Shen and Kecheng Zheng},
  journal={arXiv preprint arXiv:2601.18692},
  year={2026}
}
```
---
## License Agreement
This project is licensed under the [Apache-2.0 License](LICENSE).
## Acknowledgement
This codebase is built on the [VeOmni](https://arxiv.org/abs/2508.02317) project. Thanks for their excellent work!