Robotics · Safetensors · beingh · vla
nielsr (HF Staff) committed
Commit da50b52 · verified · 1 Parent(s): bb31ffc

Add arXiv metadata and improve model card


Hi! This PR improves the model card by:
- Adding the `arxiv` ID to the metadata to link the model with its research paper on the Hugging Face Hub.
- Updating the header image to use an absolute URL so it renders correctly on the Hub.
- Ensuring the GitHub repository and project page are clearly linked for better discoverability.
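For reference, the resulting YAML front matter of `README.md` would look roughly like this (a sketch assembled from the diff on this page; the Hub may normalize field order):

```yaml
# Model card metadata after this PR (sketch based on the diff below)
base_model:
- BeingBeyond/Being-H05-2B
license: apache-2.0
pipeline_tag: robotics
tags:
- vla
- robotics
arxiv: 2601.12993
```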

Files changed (1)
  1. README.md +10 -11
README.md CHANGED

````diff
@@ -1,11 +1,12 @@
 ---
-license: apache-2.0
 base_model:
 - BeingBeyond/Being-H05-2B
-tags:
-- vla
-- robotics
 pipeline_tag: robotics
+license: apache-2.0
+tags:
+- vla
+- robotics
+arxiv: 2601.12993
 ---
 
 # Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
@@ -17,7 +18,7 @@ pipeline_tag: robotics
 <div align="center">
 
 [![Blog](https://img.shields.io/badge/Blog-Being--H05-green)](https://research.beingbeyond.com/being-h05)
-[![Paper](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/pdf/2601.12993)
+[![Paper](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/2601.12993)
 [![Models](https://img.shields.io/badge/🤗%20Hugging%20Face-Models-yellow)](https://huggingface.co/collections/BeingBeyond/being-h05)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](./LICENSE)
 
@@ -34,9 +35,9 @@ Being-H0.5 is a foundational VLA model that scales human-centric learning with U
 
 ## News
 
-- **[2026-01-20]**: We publish the **Being-H0.5**! Check our [Paper](https://arxiv.org/pdf/2601.12993) for technical details and [Hugging Face Model Collections](https://huggingface.co/collections/BeingBeyond/being-h05) for pretrained and post-trained models. 🔥🔥🔥
+- **[2026-01-20]**: We publish the **Being-H0.5**! Check our [Paper](https://arxiv.org/abs/2601.12993) for technical details and [Hugging Face Model Collections](https://huggingface.co/collections/BeingBeyond/being-h05) for pretrained and post-trained models. 🔥🔥🔥
 - **[2025-08-02]**: We release the **Being-H0** codebase and pretrained models! Check our [Hugging Face Model Collections](https://huggingface.co/collections/BeingBeyond/being-h0) for more details. 🔥🔥🔥
-- **[2025-07-21]**: We publish **Being-H0**! Check our paper [here](https://arxiv.org/pdf/2507.15597). 🌟🌟🌟
+- **[2025-07-21]**: We publish **Being-H0**! Check our paper [here](https://arxiv.org/abs/2507.15597). 🌟🌟🌟
 
 ## Model Checkpoints
 
@@ -162,8 +163,6 @@ Being-H05 builds on the following excellent open-source projects:
 - [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO): Benchmark for lifelong robot learning
 - [RoboCasa](https://github.com/robocasa/robocasa): Large-scale simulation benchmark for everyday tasks
 
-We thank the authors for their contributions to the robotics and machine learning communities.
-
 ## License
 
 Copyright (c) 2026 BeingBeyond Ltd. and/or its affiliates.
@@ -190,8 +189,8 @@ If you find our work useful, please consider citing us and give a star to our re
 ```bibtex
 @article{beingbeyond2025beingh0,
 title={Being-h0: vision-language-action pretraining from large-scale human videos},
-author={Luo, Hao and Feng, Yicheng and Zhang, Wanpeng and Zheng, Sipeng and Wang, Ye and Yuan, Haoqi and Liu, Jiazheng and Xu, Chaoyi and Jin, Qin and Lu, Zongqing},
+author={Luo, Hao and Feng, Yicheng and Zhang, Wanpeng and Zheng, Sipeng and Wang, Ye and Yuan, Haoqi and Liu, Jiazheng and Xu, Chaoyi and Jin, Qin and Lu, Zongqing},
 journal={arXiv preprint arXiv:2507.15597},
 year={2025}
 }
-```
+```
````