Add robotics pipeline tag and license

#1
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md +20 -12
README.md CHANGED
@@ -1,4 +1,6 @@
 ---
+license: other
+pipeline_tag: robotics
 tags:
 - robotics
 - vision-language-action models
@@ -6,27 +8,33 @@ tags:
 
 # VLANeXt: Recipes for Building Strong VLA Models
 
-[![arXiv](https://img.shields.io/badge/arXiv-2602.18532-b31b1b.svg)](https://arxiv.org/abs/2602.18532)
+[![arXiv](https://img.shields.io/badge/arXiv-2602.18532-b31b1b.svg)](https://huggingface.co/papers/2602.18532)
 [![Project Page](https://img.shields.io/badge/Project-Page-green)](https://dravenalg.github.io/VLANeXt)
-[![VLANeXt Code](https://img.shields.io/badge/GitHub-VLANeXt-black)](https://github.com/DravenALG/VLANeXt)
+[![GitHub](https://img.shields.io/badge/GitHub-VLANeXt-black)](https://github.com/DravenALG/VLANeXt)
 [![Awesome VLA](https://img.shields.io/badge/GitHub-AwesomeVLA-black)](https://github.com/DravenALG/awesome-vla)
 
-
-Welcome to the official Hugging Face repository for **VLANeXt**! This repository hosts the checkpoints for evaluation on the LIBERO and LIBERO-plus benchmark suites.
+VLANeXt is a Vision-Language-Action (VLA) model designed for general-purpose robotic policy learning. By systematically reexamining the VLA design space, the authors distill a set of 12 practical findings that significantly improve model performance and generalization across benchmarks like LIBERO and LIBERO-plus.
 
 ## 📖 Abstract
 
-Following the rise of large foundation models, Vision–Language–Action models (VLAs) emerged, leveraging strong visual and language understanding for general-purpose policy learning. Yet, the current VLA landscape remains fragmented and exploratory. Although many groups have proposed their own VLA models, inconsistencies in training protocols and evaluation settings make it difficult to identify which design choices truly matter. To bring structure to this evolving space, we reexamine the VLA design space under a unified framework and evaluation setup. Starting from a simple VLA baseline similar to RT-2 and OpenVLA, we systematically dissect design choices along three dimensions: foundational components, perception essentials, and action modelling perspectives. From this study, we distill 12 key findings that together form a practical recipe for building strong VLA models. The outcome of this exploration is a simple yet effective model, VLANeXt. VLANeXt outperforms prior state-of-the-art methods on the LIBERO and LIBERO-plus benchmarks and demonstrates strong generalization in real-world experiments. We will release a unified, easy-to-use codebase that serves as a common platform for the community to reproduce our findings, explore the design space, and build new VLA variants on top of a shared foundation.
-
+Following the rise of large foundation models, Vision–Language–Action models (VLAs) emerged, leveraging strong visual and language understanding for general-purpose policy learning. Yet, the current VLA landscape remains fragmented and exploratory. VLANeXt reexamines the VLA design space under a unified framework and evaluation setup, dissecting design choices along three dimensions: foundational components, perception essentials, and action modelling perspectives. The resulting model outperforms prior state-of-the-art methods and demonstrates strong generalization in real-world experiments.
+
+## 🛠️ Usage
+
+This repository hosts the checkpoints for evaluation on the LIBERO and LIBERO-plus benchmark suites. For environment setup, training, and evaluation instructions, please refer to the official [VLANeXt GitHub repository](https://github.com/DravenALG/VLANeXt).
+
 ## 📚 Citation
 
-If you find VLANeXt useful for your research or applications, please cite our paper using the following BibTeX:
+If you find VLANeXt useful for your research or applications, please cite the paper:
 
 ```bibtex
-@article{wu2026vlanext,
-title={VLANeXt: Recipes for Building Strong VLA Models},
-author={Xiao-Ming Wu and Bin Fan and Kang Liao and Jian-Jian Jiang and Runze Yang and Yihang Luo and Zhonghua Wu and Wei-Shi Zheng and Chen Change Loy},
-journal={arXiv preprint arXiv:2602.18532}
-}
+@article{wu2026vlanext,
+title={VLANeXt: Recipes for Building Strong VLA Models},
+author={Xiao-Ming Wu and Bin Fan and Kang Liao and Jian-Jian Jiang and Runze Yang and Yihang Luo and Zhonghua Wu and Wei-Shi Zheng and Chen Change Loy},
+journal={arXiv preprint arXiv:2602.18532},
+year={2026}
+}
 ```
 
+## 🗞️ License
+This project is licensed under the [NTU S-Lab License 1.0](https://github.com/DravenALG/VLANeXt/blob/main/LICENSE).
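For reference, after the first hunk is applied, the model card's YAML metadata block would read as follows. Note the closing `---` delimiter falls outside the hunk and is assumed here from the standard model-card frontmatter format:

```yaml
---
license: other
pipeline_tag: robotics
tags:
- robotics
- vision-language-action models
---
```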