
A Pragmatic VLA Foundation Model

LingBot-VLA is designed to be pragmatic:

  • Large-scale Pre-training Data: 20,000 hours of real-world data collected from 9 popular dual-arm robot configurations.
  • Strong Performance: Achieves clear superiority over competitors on both simulation and real-world benchmarks.
  • Training Efficiency: Delivers a 1.5∼2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases.

Model Sources

Related Models

| Model Name | Hugging Face | ModelScope | Description |
|---|---|---|---|
| LingBot-VLA-4B | 🤗 lingbot-vla-4b | 🤖 lingbot-vla-4b | LingBot-VLA w/o Depth |
| LingBot-VLA-4B-Depth | 🤗 lingbot-vla-4b-depth | 🤖 lingbot-vla-4b-depth | LingBot-VLA w/ Depth |

Citation

@article{wu2026pragmatic,
  title={A Pragmatic VLA Foundation Model},
  author={Wei Wu and Fan Lu and Yunnan Wang and Shuai Yang and Shi Liu and Fangjing Wang and Shuailei Ma and He Sun and Yong Wang and Zhenqi Qiu and Houlong Xiong and Ziyu Wang and Shuai Zhou and Yiyu Ren and Kejia Zhang and Hui Yu and Jingmei Zhao and Qian Zhu and Ran Cheng and Yong-Lu Li and Yongtao Huang and Xing Zhu and Yujun Shen and Kecheng Zheng},
  journal={arXiv preprint arXiv:2601.00000},
  year={2026}
}

License Agreement

This project is licensed under the Apache-2.0 License.

Acknowledgement

This codebase is built on the VeOmni project. Thanks for their excellent work!

